Brian Svidergol
RHEL3, VCP, NCIE-SAN, MCT, MCSE
Domain 1 Review Questions
Answers to Domain 1 Review Questions
Domain 2. Asset Security
    2.1 Identify and classify information and assets
    2.2 Determine and maintain information and asset ownership
    2.3 Protect privacy
    2.4 Ensure appropriate asset retention
    2.5 Determine data security controls
    2.6 Establish information and asset handling requirements
Domain 3 Review Questions
Answers to Domain 3 Review Questions
Domain 4. Communication and Network Security
    4.1 Implement secure design principles in network architecture
    4.2 Secure network components
    4.3 Implement secure communication channels according to design
Answers to Domain 5 Review Questions
    8.3 Assess the effectiveness of software security
    8.4 Assess security impact of acquired software
About the Author
About Netwrix
Preparing to take the Certified Information Systems Security Professional (CISSP) exam requires a great deal of time and
effort. The exam covers eight domains:
5. Identity and Access Management (IAM)
6. Security Assessment and Testing
two of the eight domains if you have either a four-year college degree or an approved credential or certification. See
for a complete list of approved credentials and
• Exams in languages other than English remain in a linear format. You get up to 6 hours to complete a series of 250
questions.
Using multiple study sources and methods improves your chances of passing the CISSP exam. For example, instead of reading three or four books, you might read one book, watch a series of videos, take some practice test questions and read a study guide. Or you might take a class, take practice test questions and read a study guide. Or you might join a study group and read a book. The combination of reading, hearing and doing helps your brain process and retain information. If your plan is to read this study guide and then drive over to the exam center, you should immediately rethink your plan!
There are a couple of ways you can use this study guide:
On April 15, 2018, the organization that administers the CISSP exam, the International Information System Security Certification Consortium, or (ISC)², released an updated set of exam objectives (the exam blueprint).
While most of the exam topics remain the same, there are some minor changes to reflect the latest industry trends and information. Most books for the new version of the exam will be released in May 2018 or later. This study guide has been updated to reflect the new blueprint. The updates are minor: A few small topics have been removed, a few new ones have been added, and some items have been reworded.
1.1 Understand and apply concepts of confidentiality, integrity and availability
Confidentiality, integrity and availability make up what’s known as the CIA triad. The CIA triad is a security model that helps organizations stay focused on the important aspects of maintaining a secure environment.
To establish security governance principles, adopt a framework such as the one from the National Institute of Standards and Technology (NIST). Be sure the framework you choose includes the following:
• Alignment of security function to strategy, goals, mission, and objectives. An organization has a mission and uses strategy, plans and objectives to try to meet that mission. These components flow down, with the ones below supporting the ones above. Business strategy is often focused 5 or more years out. In the shorter term, typically 1 to 2 years, you have tactical plans that are aligned with the strategic plan. Below that are operational plans — the detailed tactical plans that keep the business running day to day. Objectives are the closest to the ground and represent small efforts to help you achieve a mission. For example, a car manufacturer’s mission might be to build and sell as many high-quality cars as possible. The objectives might include expanding automation to reduce the
• Organizational roles and responsibilities. There are multiple roles to consider. Management has a responsibility to keep the business running and to maximize profits and shareholder value. The security architect or security engineer has a responsibility to understand the organization’s business needs, the existing IT environment, and the current state of security and vulnerability, as well as to think through strategies (improvements, configurations and countermeasures) that could maximize security and minimize risk. There is a need for people who can translate between technical and non-technical people. Costs must be justified and reasonable, based on the organization’s requirements and risk.
• Security control frameworks. A control framework helps ensure that your organization is covering all the bases around securing the environment. There are many frameworks to choose from, such as Control Objectives for Information Technology (COBIT) and the ISO 27000 series (27000, 27001, 27002, etc.). These frameworks fall into four categories:
• Due care / due diligence. Ensure you understand the difference between these two concepts. Due care is about your legal responsibility within the law or within organizational policies to implement your organization’s controls, follow security policies, do the right thing and make reasonable choices. Due diligence is about understanding your security governance principles (policies and procedures) and the risks to your organization. Due diligence often involves gathering information through discovery, risk assessments and review of existing documentation; creating documentation to establish written policies; and disseminating the information to the organization. Sometimes, people think of due diligence as the method by which due care can be exercised.
After you establish and document a framework for governance, you need security awareness training to bring everything together. All new hires should complete the security awareness training as they come on board, and existing employees should recertify on it regularly (typically yearly).
• Contractual, legal, industry standards, and regulatory requirements. Understand the legal systems. Civil law is most common; rulings from judges typically do not set precedents that impact other cases. With common law, which is used in the USA, Canada, the UK and former British colonies, rulings from judges can set precedents that have significant impact on other cases. An example of religious law is Sharia (Islamic law), which uses the Qur’an and Hadith as the foundation for its laws. Customary law takes common, local and accepted practices and sometimes makes them laws. Within common law, you have criminal law (laws against society) and civil law (typically disputes between people or organizations, resulting in monetary compensation from the losing party). Compliance factors into laws, regulations, and industry standards such as Sarbanes-Oxley (SOX), the Gramm-Leach-Bliley Act (GLBA), the Payment Card Industry Data Security Standard (PCI DSS), the Health Insurance Portability and Accountability Act (HIPAA), and the Federal Information Security Management Act (FISMA). As part of your exam preparation, familiarize yourself with these standards by reading their high-level summaries.
• Privacy requirements. Privacy is about protection of PII. Laws vary. The European Union has tough laws around privacy. Be familiar with the General Data Protection Regulation (GDPR). Be familiar with the requirements around healthcare data, credit card data and other PII as they relate to various countries and their laws and regulations.
• Import/export controls. Every country has laws around the import and export of hardware and software. For example, the United States has restrictions around the export of cryptographic technology, and Russia requires a license to import encryption technologies manufactured outside the country.
• Trans-border data flow. If your organization adheres to specific security laws and regulations, then you should adhere to them no matter where the data resides — for example, even if you store a second copy of your data in another country. Be aware of the applicable laws in all countries where you store data and maintain computer systems. In some cases, data might need to remain in the country. In other cases, you need to be careful with your data because the technical teams might be unaware of the security and compliance requirements. The EU-US Privacy Shield framework (the successor to the EU-US Safe Harbor agreement) governs the flow of personal data from the EU to the United States. The EU has more stringent privacy protections, and without such an agreement, transfers of personal data from the EU to the United States would not be allowed.
• Protect society, the common good, necessary public trust and confidence, and the infrastructure. This is “do the right thing.” Put the common good ahead of yourself. Ensure that the public can have faith in your infrastructure and security.
• Act honorably, honestly, justly, responsibly, and legally. Always follow the laws. But what if you find yourself working on a project where conflicting laws from different countries or jurisdictions apply? In such a case, you should prioritize the local jurisdiction from which you are performing the services.
• Organizational code of ethics. You must also support ethics at your organization. This can be interpreted to mean evangelizing ethics throughout the organization, providing documentation and training around ethics, or looking for ways to enhance the existing organizational ethics. Some organizations might have slightly different ethics than others, so be sure to familiarize yourself with your organization’s ethics and guidelines.
1.6 Develop, document, and implement security policy, standards, procedures and guidelines
• Guidelines. These are recommended but optional. For example, your organization might have a guideline that recommends storing passwords in an encrypted password vault. It is a good idea to do that. But somebody might choose to store passwords in their brain or using another secure storage mechanism.
• Baselines. Although baselines are not explicitly mentioned in this section of the exam, don’t forget about them. Baselines automate implementation of your standards, thereby ensuring adherence to them. For example, if you have 152 configuration items for your server builds, you can configure all of them in a baseline that is applied to every server that is built. Group Policy objects (GPOs) are often used to comply with standards in a Windows network. Configuration management solutions can also help you establish baselines and spot configurations that drift away from them.
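As a rough illustration of baseline drift detection (a minimal sketch in Python; the setting names and values are hypothetical, not drawn from any particular baseline standard):

```python
# Minimal sketch: detect drift from a configuration baseline.
# The setting names and values below are hypothetical examples.

BASELINE = {
    "password_min_length": 14,
    "smbv1_enabled": False,
    "audit_logon_events": True,
}

def find_drift(current_config: dict) -> dict:
    """Return the settings whose current value differs from the baseline."""
    drift = {}
    for setting, expected in BASELINE.items():
        actual = current_config.get(setting)
        if actual != expected:
            drift[setting] = {"expected": expected, "actual": actual}
    return drift

# Example: a server whose SMBv1 setting has drifted from the baseline.
server_config = {
    "password_min_length": 14,
    "smbv1_enabled": True,
    "audit_logon_events": True,
}
print(find_drift(server_config))
# {'smbv1_enabled': {'expected': False, 'actual': True}}
```

A configuration management tool would apply the same comparison across every server and either report or automatically remediate the drift.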
centers), recovery (if a service becomes unavailable, you need to recover it as soon as possible), and contingency (a last resort in case resilience and recovery prove ineffective).
• Develop and document scope and plan. Developing the project scope and plan starts with gaining support of the management team, making a business case (cost/benefit analysis, regulatory or compliance reasons, etc.), and ultimately gaining approval to move forward. Next, you need to form a team with representatives from the business as well as IT. Then you are ready to begin developing the plan. Start with a business continuity policy statement, then conduct a business impact analysis (as explained in the next bullet), and then develop the remaining components: preventive controls, relocation, the actual continuity plan, testing, training and maintenance. Be familiar with the difference between business continuity (resuming critical functions without regard for the site) and disaster recovery (recovering critical functions at the primary site, when possible).
• Employment agreements and policies. An employment agreement specifies job duties, expectations, rate of pay, benefits and information about termination. Sometimes, such agreements are for a set period (for example, in a contract or short-term job). Employment agreements facilitate termination when needed for an underperforming employee. The more information and detail in an employment agreement, the less risk (risk of a wrongful termination lawsuit, for example) the company has during a termination proceeding. For instance, a terminated employee might take a copy of their email with them without thinking of it as stealing, but they are less likely to do so if an employment agreement or another policy document clearly prohibits it.
• Onboarding and termination processes. Onboarding comprises all the processes tied to a new employee starting at your organization. Having a documented process in place enables new employees to be integrated as quickly and consistently as possible, which reduces risk. For example, if you have five IT admins performing the various onboarding processes, you might get different results each time if you don’t have the processes
• Compliance policy requirements. Organizations have to adhere to different compliance mandates, depending on their industry, country and other factors. All of them need to maintain documentation about their policies and procedures for meeting those requirements. Employees should be trained on the company’s compliance mandates at a high level upon hire and regularly thereafter (such as re-certifying once a year).
• Privacy policy requirements. Personally identifiable information about employees, partners, contractors, customers and other people should be stored in a secure way, accessible only to those who require the information to perform their jobs. For example, somebody in the Payroll department might need access to an employee’s banking information to have their pay automatically deposited, but no one else should be able to access that data. Organizations should maintain a documented privacy policy that outlines the types of data covered by the policy and who the policy applies to. Employees, contractors and anyone else who might have access to the data should be required to read and agree to the privacy policy upon hire and on a regular basis thereafter (such as annually).
• Assess risk. You have a risk when you have a threat and a vulnerability. In those cases, you need to figure out the
almost certain and the consequences are major, then the risk is extreme.
• Quantitative. This method is more objective than the qualitative method; it uses dollars or other metrics
risk mitigation (reduce the risk), risk assignment (assign the risk to a team or provider for action), risk acceptance
(accept the risk) or risk rejection (ignore the risk).
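To make the quantitative method above concrete, here is a worked sketch using the standard formulas: single loss expectancy (SLE) is asset value (AV) times exposure factor (EF), and annualized loss expectancy (ALE) is SLE times the annualized rate of occurrence (ARO). The dollar figures are hypothetical.

```python
# Quantitative risk analysis sketch with hypothetical numbers.
asset_value = 250_000            # AV: value of the asset in dollars
exposure_factor = 0.4            # EF: fraction of the asset lost per incident
annual_rate_of_occurrence = 0.5  # ARO: expected incidents per year (one every two years)

single_loss_expectancy = asset_value * exposure_factor                           # SLE = AV x EF
annualized_loss_expectancy = single_loss_expectancy * annual_rate_of_occurrence  # ALE = SLE x ARO

print(f"SLE = ${single_loss_expectancy:,.0f}")        # SLE = $100,000
print(f"ALE = ${annualized_loss_expectancy:,.0f}")    # ALE = $50,000
```

The ALE gives you a dollar figure you can weigh against the annual cost of a proposed countermeasure.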
you have a password policy that a legacy application cannot technically meet (for example, the app is limited to 10
characters for the password). To reduce the likelihood of that password being compromised, you can implement
configuration, but you must understand the implementation process — where you start, the order of the steps you
take and how you finish.
the attacker.
• Corrective. A corrective control implements a fix after a security incident occurs.
enable outside users to get to your SharePoint site, which resides on your local area network. Instead of
• Asset valuation. When you think of assets, don’t just think of physical assets such as computers and office furniture (tangible assets). Assets also include the company’s data and intellectual property (intangible assets). While tangible assets are easy to assess for value (for example, you bought the disk drive for $250), data and intellectual property can be harder to place a value on. Be familiar with the following strategies of intangible asset valuation:
• Cost approach. How much would it cost to replace the asset?
• Reporting. One of the foundations of an enterprise-grade security solution is the ability to report on your environment (what you have, what the top risks are, what’s happening right now, what happened 3 days ago, etc.). Reporting provides information. And that information is sometimes used to start a continuous improvement process.
• Continuous improvement. Continuous improvement is an ongoing, never-ending effort to take what you have and improve it. Often, improvements are small and incremental. However, over time, small improvements can add up. Continuous improvement can be applied to products (for example, upgrading to the latest version), services
1.10 Understand and apply threat modeling concepts and methodologies
When you perform threat modeling for your organization, you document potential threats and prioritize those threats (often by putting yourself in an attacker’s mindset). There are four well-known methods. STRIDE, introduced at Microsoft in 1999, focuses on spoofing of user identity, tampering, repudiation, information disclosure, denial of service and elevation of privilege. PASTA (process for attack simulation and threat analysis) provides dynamic threat identification, enumeration and scoring. Trike uses threat models based on a requirements model. VAST (visual, agile and simple threat modeling) applies across IT infrastructure and software development without requiring security experts.
• Threat modeling concepts. If you understand the threats to your organization, then you are ready to document the potential attack vectors. You can use diagramming to list the various technologies under threat. For example, suppose you have a SharePoint server that stores confidential information and is therefore a potential target. You can diagram the environment integrating with SharePoint. You might list the edge firewalls, the reverse proxy in the perimeter network, the SharePoint servers in the farm and the database servers. Separately, you might have a diagram showing SharePoint’s integration with Active Directory and other applications. You can use these diagrams to identify attack vectors against the various technologies.
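As a minimal sketch of this kind of threat enumeration (the components and the STRIDE categories assigned to them are hypothetical examples, not a complete model):

```python
# Minimal threat-model sketch: map components to STRIDE threat categories.
# The components and threats are hypothetical examples for a SharePoint deployment.

STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information disclosure", "Denial of service", "Elevation of privilege"]

threat_model = {
    "Edge firewall":   ["Denial of service", "Tampering"],
    "Reverse proxy":   ["Spoofing", "Information disclosure"],
    "SharePoint farm": ["Elevation of privilege", "Tampering", "Repudiation"],
    "SQL back end":    ["Information disclosure", "Tampering"],
}

# Print the components with the most identified threats first, as a crude prioritization.
for component, threats in sorted(threat_model.items(), key=lambda kv: -len(kv[1])):
    print(f"{component}: {', '.join(threats)}")
```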
• Hardware. Is the company using antiquated hardware that introduces potential availability issues? Is the company using legacy hardware that isn’t being patched by the vendor? Will there be integration issues with the hardware?
• Software. Is the company using software that is out of support, or from a vendor that is no longer in business? Is the software up to date on security patches? Are there other security risks associated with the software?
• Periodic content reviews. Threats are complex, and the training needs to be relevant and interesting to be effective. This means updating training materials and awareness training, and changing the ways in which security is tested and measured. If you always use the same phishing test campaign or send it from the same account on the same day of the year, it isn’t effective. The same applies to other material. Instead of relying on long and detailed security documentation for training and awareness, consider using internal social media tools, videos and interactive campaigns.
• Program effectiveness evaluation. Time and money must be allocated for evaluating the company’s security awareness and training. The company should track key metrics, such as the percentage of employees clicking on a link in a test phishing email. Is the awareness and training bringing the total number of clicks down? If so, the program is effective. If not, you need to re-evaluate it.
Then move on to Domain 2.
1. You are a security consultant. A large enterprise customer hires you to ensure that their security operations are
c. Detective
d. Corrective
chances of additional customers experiencing a security incident based on that data. Which type of approach
should you use for the risk analysis?
e. Market
3. You are working on a business continuity project for a company that generates a large amount of content each day
c. Maximum tolerable downtime (MTD)
d. Maximum data tolerance (MDT)
Explanation: Deterrent frameworks are technology-related and used to discourage malicious activities. For example, an intrusion prevention system or a firewall would be appropriate in this framework.
There are three other primary control frameworks. A preventative framework helps establish security policies and security awareness training. A detective framework is focused on finding unauthorized activity in your environment after a security incident. A corrective framework focuses on activities to get your environment back to its normal state after a security incident. There isn’t an assessment framework.
Explanation: The RTO establishes the maximum amount of time the organization will be down (or how long it takes to recover), the RPO establishes the maximum data loss that is tolerable, the MTD covers the maximum tolerable downtime, and MDT is just a made-up phrase used as a distraction. In this scenario, with the focus on the data loss, the correct answer is RPO.
To improve security, you need to identify both your data and your physical assets and classify them according to their importance or sensitivity, so you can specify procedures for handling them appropriately based on their classification.
• Data classification. Organizations classify their data using labels. You might be familiar with two government classification labels, Secret and Top Secret. Non-government organizations generally use classification labels such as Public, Internal Use Only, Partner Use Only, or Company Confidential. However, data classification can be more granular; for example, you might label certain information as HR Only.
• Formal access approval. Whenever a user needs to gain access to data or assets that they don’t currently have access to, there should be a formal approval process. The process should involve approval from the data owner, who should be provided with details about the access being requested. Before a user is granted access to the data, they should be told the rules and limits of working with it. For example, they should be aware that they must not send documents outside the organization if they are classified as Internal Only.
Data owners are responsible for classifying the data they own. In larger companies, an asset management department handles asset classification. A custodian is a hands-on role that implements and operates solutions for data (e.g., backups and restores). A system owner is responsible for the computer environment (hardware, software) that houses data; this is typically a management role with operational tasks handed off to the custodian.
2.3 Protect privacy
• Secure deletion or overwriting of data. You can use a tool to overwrite the space that a file was using with random 1s and 0s, either in one pass or in multiple passes. The more passes you use, the less likely it is that the data can be recovered (see the sketch below).
• Destroying the media. You can shred disk drives, smash them into tiny pieces, or use other means to physically destroy them. This is effective but renders the media unusable thereafter.
is limiting how much data your organization collects. For example, if you collect users’ birthdates or identification
card numbers, you then must protect that data. If your organization doesn’t need the data, it shouldn’t collect it.
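Relating to the secure-deletion bullet above, here is a minimal overwrite sketch. It is illustrative only; SSD wear leveling, file system journaling and backup copies can still leave recoverable data, which is why physical destruction or full-disk encryption is often preferred for sensitive media.

```python
import os

def overwrite_file(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents with random bytes before deleting it.

    Illustrative only: wear leveling on SSDs and copies made by the file
    system or backups can still leave recoverable data behind.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # force the overwritten bytes to disk
    os.remove(path)
```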
There are two aspects to data retention: You should ensure that your organization holds data for as long as required —
and also that it securely deletes data that is no longer required, in order to reduce the risk of its exposure.
of unneeded data.
Besides data, this section also covers the hardware and personnel required to use the data. These are quite important.
readers and so on) needed to get to the data that you are saving.
• Personnel. Suppose your company is retaining data for the required time periods and maintaining hardware to
You need data security controls that protect your data as it is stored, used and transmitted.
• Understanding data states. The industry identifies three data states:
• Scoping and tailoring. Scoping is the process of finalizing which controls are in scope and which are out of scope
(not applicable). Tailoring is the process of customizing the implementation of controls for an organization.
• Data at rest. You can encrypt data at rest. You should consider encryption for operating system volumes and data volumes, and you should encrypt backups, too. Be sure to consider all locations for data at rest, such as tapes, USB drives, external drives, RAID arrays, SAN, NAS and optical media.
• Data in motion. Data is in motion when it is being transferred from one place to another. Sometimes, it is moving from your local area network to the internet, but it can also be internal to your network, such as from a server to a client computer. You can encrypt data in motion to protect it. For example, a web server uses a certificate to encrypt data being viewed by a user, and you can use IPsec to encrypt communications. There are many options. The most important point is to use encryption whenever possible, including for internal-only web sites available only to workers connected to your local area network.
• Storage. You can store data in many ways, including on paper, disk or tape. For each scenario, you must define the acceptable storage locations and inform users about those locations. It is common to provide a vault or safe for backup tapes stored on premises, for example. Personnel who deal with sensitive papers should have a locked
Domain 2 Review Questions
establishing a formal access approval process. Which role should you list to approve policies that dictate which
users can gain access to data?
e. System owner
2. Your organization has a goal to maximize the protection of organizational data. You need to recommend 3 methods
d. Degaussing
e. Physical destruction
c. Vendor screening
d. Vendor reviewing
Explanation: Each data owner is responsible for approving access to data that they own. This is typically handled via approving data access policies that are then implemented by the operations team. As part of a formal access approval process, a data owner should be the ultimate person responsible for the data access.
2. Answer: B, D, E
domains.
3.1 Implement and manage engineering processes using secure design principles
• Requirements. It is important to document all the requirements from the various business units and stakeholders. Establish both functional requirements (for example, the app will enable users to pay bills by taking a picture of their credit card) and non-functional requirements (for example, the app must be PCI DSS compliant).
• Design. Next, establish a design to meet the requirements. A design cannot be completed without all requirements. For example, to know how robust an infrastructure to design, you need to know how many users need to use the system simultaneously. Part of the design phase must be focused around security. For example, you must account for the principle of least privilege, fail-safe defaults and segregation of duties.
There are many other phases, such as user training, communication and compliance testing. Remember that skipping any
of these steps reduces the chances of having a successful and secure solution.
• Bell-LaPadula. This model was established in 1973 for the United States Air Force. It focuses on confidentiality. The goal is to ensure that information is exposed only to those with the right level of classification. For example, if you have a Secret clearance, you can read data classified as Secret, but not Top Secret data. This model has a “no read up” (users with a lower clearance cannot read data classified at a higher level) and a “no write down” (users with a clearance higher than the data cannot modify that data) methodology. Notice that Bell-LaPadula doesn’t address “write up,” which could enable a user with a lower clearance to write up to data classified at a higher level. To address this complexity, this model is often enhanced with other models that focus on integrity. Another downside to this model is that it doesn’t account for covert channels. A covert channel is a way of secretly sending data across an existing connection. For example, you can send a single letter inside the IP identification header. Sending a large message is slow. But often such communication isn’t monitored or caught.
• Biba. Released in 1977, this model was created to supplement Bell-LaPadula. Its focus is on integrity. The methodology is “no read down” (for example, users with a Top Secret clearance can’t read data classified as Secret) and “no write up” (for example, a user with a Secret clearance can’t write data to files classified as Top Secret). By combining it with Bell-LaPadula, you get both confidentiality and integrity.
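Both models reduce to simple comparisons between a subject’s level and an object’s level. A minimal sketch follows (not taken from any product; Biba would normally use integrity levels rather than classification labels, but the same labels are reused here for brevity):

```python
# Levels ordered from lowest to highest.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def bell_lapadula_allows(subject: str, obj: str, action: str) -> bool:
    """Confidentiality model: no read up, no write down."""
    s, o = LEVELS[subject], LEVELS[obj]
    if action == "read":
        return s >= o   # reading up is denied
    if action == "write":
        return s <= o   # writing down is denied
    return False

def biba_allows(subject: str, obj: str, action: str) -> bool:
    """Integrity model: no read down, no write up."""
    s, o = LEVELS[subject], LEVELS[obj]
    if action == "read":
        return s <= o   # reading down is denied
    if action == "write":
        return s >= o   # writing up is denied
    return False

print(bell_lapadula_allows("Secret", "Top Secret", "read"))   # False (no read up)
print(biba_allows("Secret", "Top Secret", "write"))           # False (no write up)
```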
• The evaluation process will look at the protection profile (PP), which is a document that outlines the security needs. A vendor might opt to use a specific protection profile for a particular solution.
• The evaluation process will look at the security target (ST), which identifies the security properties for the TOE. The ST is usually published to customers and partners and available to internal staff.
This section focuses on the capabilities of specific computing components. Thus, it isn’t a section where hands-on experience can give you an advantage. Some of these components are discussed in other sections, sometimes in more detail. Ensure that you are familiar with all the information in this section. For any topic in this section that is new to you, plan to dive deeper into the topic outside of this study guide.
• Memory protection. At any given time, a computing device might be running multiple applications and services. Each one occupies a segment of memory. The goal of memory protection is to prevent one application or service from impacting another application or service. There are two popular memory protection methods:
• Interfaces. In this context, an interface is the method by which two or more systems communicate. For example, when an LDAP client communicates with an LDAP directory server, it uses an interface. When a VPN client connects to a VPN server, it uses an interface. For this section, you need to be aware of the security capabilities of interfaces.
There are a couple of common capabilities across most interfaces:
• Fault tolerance. Fault tolerance is a capability used to keep a system available. In the event of an attack (such as a DoS attack), fault tolerance helps keep a system up and running. Complex attacks can target a system, knowing that the fallback method is an older system or communication method that is susceptible to attack.
3.5 Assess and mitigate the vulnerabilities of security architectures, designs and solution elements
• Cryptographic systems. The goal of a well-implemented cryptographic system is to make a compromise too time-consuming (such as 5,000 years) or too expensive (such as millions of dollars). Each component has vulnerabilities:
• Software. Software is used to encrypt and decrypt data. It can be a standalone application with a graphical interface, or software built into the operating system or other software. As with any software, there are sometimes bugs or other issues, so regular patching is important.
• Protocols. There are different protocols for performing cryptographic functions. Transport Layer Security (TLS) is a very popular protocol used across the internet, such as for banking sites or other sites that require encryption. Today, most sites (even Google) use encryption. Other protocols include Kerberos and IPsec. (A minimal TLS example follows this list.)
• Industrial Control Systems (ICS). Supervisory control and data acquisition (SCADA) systems are used to control physical devices such as those found in an electrical power plant or factory. SCADA systems are well suited for distributed environments, such as those spread out across continents. Some SCADA systems still rely on legacy or proprietary communications. These communications are at risk, especially as attackers are gaining knowledge of such systems and their vulnerabilities.
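Tying back to the Protocols bullet above, here is a minimal sketch of opening a TLS connection using Python’s standard library, leaving certificate and host name validation at their secure defaults. The host name is just an example.

```python
import socket
import ssl

hostname = "example.com"  # example host; replace with the server you need to reach
context = ssl.create_default_context()  # validates certificates and host names by default

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())                  # e.g. TLSv1.2 or TLSv1.3
        print(tls.getpeercert()["subject"])   # details of the server's certificate
```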
3.6 Assess and mitigate vulnerabilities in web-based systems
• Web server software. The web server software must be running the latest security patches. Running the latest
version of the software can provide enhanced (and optional) security features. You need to have logging, auditing
become compromised. To minimize the risk of compromise, you need a multi-layered approach that includes a
standardized browser configured for high security, web proxy servers to blacklist known bad web servers and track
for more information. Here are
two of the most important:
comment out the password check). Input validation can help minimize the chances of an injection attack.
But you need more than that. You need to properly test these types of scenarios prior to going live. One
this attack, you can disable document type definitions (DTDs).
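As a small illustration of the input-validation and injection points above, here is a sketch using Python’s built-in sqlite3 module; the table, columns and sample values are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'x1')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Vulnerable pattern: building SQL by string concatenation lets input rewrite the query.
# query = f"SELECT * FROM users WHERE username = '{user_input}'"

# Safer pattern: a parameterized query treats the input strictly as data, not as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE username = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the injection attempt matches nothing
```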
3.7 Assess and mitigate vulnerabilities in mobile systems
You need to apply your organization’s standards and security policies, when applicable. For example, you need to ensure
that the devices are running the latest version of the software and have the latest patches. To deploy and maintain the
In addition to managing security for your computing infrastructure and computers, you also should think about other systems that interact with your computing infrastructure. Today, that includes everything from coffee makers to smart white boards to copiers. These devices are becoming more and more connected, and some of them are even IoT devices. While these devices have had computers embedded in them for some time, they used to be standalone devices, not connected to your network, so a compromise was extremely limited and quite rare. Today, you need to consider the following information when managing your embedded devices:
• Some devices are configured by default to contact the manufacturer to report health information or diagnostic data. You need to be aware of such communication. Disable it when possible. At a minimum, ensure that the configuration is such that additional information cannot be sent out alongside the expected information.
• Cryptographic lifecycle (e.g., cryptographic limitations, algorithm/protocol governance). When we think about the lifecycle of technologies, we often think about the hardware and software support, performance and reliability. When it comes to cryptography, things are a bit different: The lifecycle is focused squarely around security. As computing power goes up, the strength of cryptographic algorithms goes down. It is only a matter of
• Asymmetric. Asymmetric encryption uses different keys for encryption and decryption. Since one is a public key that is available to anybody, this method is sometimes referred to as “public key encryption.” Besides the public key, there is a private key that should remain private and protected. Asymmetric encryption doesn’t have any issues with distributing public keys. While asymmetric encryption is slower, it is best suited for sharing between two or more parties. RSA is one common asymmetric encryption standard.
• Elliptic curves. Elliptic Curve Cryptography (ECC) is a newer implementation of asymmetric encryption. The primary benefit is that you can use smaller keys, which enhances performance.
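A minimal sketch of asymmetric encryption using the third-party cryptography package (the message is an arbitrary example; in practice, asymmetric encryption is usually combined with symmetric encryption for bulk data):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Generate a key pair: the public key can be shared; the private key must stay protected.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"wire transfer details", oaep)  # anyone with the public key can encrypt
plaintext = private_key.decrypt(ciphertext, oaep)                # only the private key holder can decrypt
assert plaintext == b"wire transfer details"
```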
the more tiers, the more security (but proper configuration is critical). The more tiers you have, the more complex and costly the PKI is to build and maintain.
• Key protection and custody. Keys must be protected. You can use a method called split custody which enables two or more people to share access to a key — for example, with two people, each person can hold half the password to the key.
• Key rotation. If you use the same keys forever, you are at risk of having the keys lost or stolen or having your information decrypted. To mitigate these risks, you should retire old keys and implement new ones.
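As an illustration of key rotation, here is a minimal sketch using the MultiFernet helper from the third-party cryptography package, which always encrypts with the newest key while still decrypting data protected by older keys:

```python
from cryptography.fernet import Fernet, MultiFernet

old_key = Fernet(Fernet.generate_key())
token = old_key.encrypt(b"customer record")    # data encrypted under the old key

new_key = Fernet(Fernet.generate_key())
keyring = MultiFernet([new_key, old_key])      # the first key is used for new encryptions

rotated = keyring.rotate(token)                # re-encrypt existing data under the new key
assert new_key.decrypt(rotated) == b"customer record"
# Once all data has been rotated, the old key can be retired from the keyring.
```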
• Ciphertext only. In a ciphertext-only attack, you obtain samples of ciphertext (but not any plaintext). If you have enough ciphertext samples, the idea is that you can decrypt the target ciphertext based on the ciphertext samples. Today, such attacks are very difficult.
• Known plaintext. In a known plaintext attack, you have an existing plaintext file and the matching ciphertext. The goal is to derive the key. If you derive the key, you can use it to decrypt other ciphertext created by the same key.
• Restrict printing of a document to a defined set of people
• Provide portable document protection such that the protection remains with the document no matter where it is stored, how it is stored, or which computing device or user opens it
This section covers applying secure design principles to data centers, server rooms, network operations centers and offices across an organization’s locations. While some areas must be more secure than others, you must apply secure principles throughout your site to maximize security and reduce risk. Crime Prevention Through Environmental Design (CPTED) is a well-known set of guidelines for the secure design of buildings and office spaces. CPTED stresses three principles:
• Natural surveillance. Natural surveillance enables people to observe what’s going on around the building or campus while going about their day-to-day work. It also eliminates hidden areas, areas of darkness and obstacles such as solid fences. Instead, it stresses low or see-through fencing, extra lighting, and the proper placement of doors, windows and walkways to maximize visibility and deter crime.
Physical security covers both the interior and the exterior of company facilities. While the subtopics here focus on the interior, many of the same techniques apply to the exterior too.
• Wiring closets. A wiring closet is typically a small room that holds IT hardware. It is common to find telephony and network devices in a wiring closet. Occasionally, you also have a small number of servers in a wiring closet. Access
Data centers are protected like server rooms, but often with a bit more protection. For example, in some data centers, you might need to use your badge both to enter and to leave, whereas with a server room, it is common to be able to walk out by just opening the door. In a data center, it is common to have one security guard checking visitors in and another guard walking the interior or exterior. Some organizations set time limits for authorized people to remain inside the data center. Inside a data center, you should lock everything possible, such as storage cabinets and IT equipment racks.
• Media storage facilities. Media storage facilities often store backup tapes and other media, so they should be protected just like a server room. It is common to have video surveillance too.
You need to choose a PKI design to meet the requirements. Which design should you choose?
Answers to Domain 3 Review Questions
3. Answer: C
Explanation: When designing a PKI, keep in mind the basic security tenets — the more tiers, the more security, and the more flexibility. Of course, having more tiers also means more cost and complexity. In this scenario, to maximize security and flexibility, you need to use a three-tier hierarchy with the root CAs and the policy CAs being offline. Offline CAs enhance security. Multiple tiers, especially with the use of policy CAs, enhance flexibility because you can revoke one section of the hierarchy without impacting the other (for example, if one of the issuing CAs had a key compromised).
4.1 Implement secure design principles in network architecture
This section addresses the design aspects of networking, focusing on security. While networking’s primary function is to enable communication, security ensures that communication occurs only between authorized devices and remains private when needed.
• Internet Protocol (IP) networking. IP networking is what enables devices to communicate. IP provides the foundation for other protocols to be able to communicate. IP itself is a connectionless protocol. IPv4 uses 32-bit addresses, and IPv6 uses 128-bit addresses. Regardless of which version you use to connect devices, you then typically use TCP or UDP to communicate over IP. TCP is a connection-oriented protocol that provides reliable communication, while UDP is a connectionless protocol that provides best-effort communication. Both protocols use standardized port numbers to enable applications to communicate over the IP network.
• Implications of multilayer protocols. Some protocols simultaneously use multiple layers of the OSI or TCP/IP model to communicate, and traverse the layers at different times. The process of traversing these layers is called encapsulation. For example, when a Layer 2 frame is sent through an IP layer, the Layer 2 data is encapsulated into
• Software-defined networks. As networks, cloud services and multi-tenancy grow, the need to manage these networks has changed. Many networks follow either a two-tier (spine/leaf or core/access) or a three-tier (core, distribution, edge/access) topology. While the core network might not change that frequently, the edge or access devices can communicate with a variety of device types and tenants. Increasingly, the edge or access switch is a virtual switch running on a hypervisor or virtual machine manager. You must be able to add a new subnet or VLAN or make other network changes on demand. You must be able to make configuration changes programmatically across multiple physical devices, as well as across the virtual switching devices in the topology. A software-defined network enables you to make these changes for all device types with ease.
• Wireless networks. Wireless networks can be broken down into the different 802.11 standards. The most common protocols within 802.11 are 802.11a (5 GHz), 802.11b (2.4 GHz), 802.11g (2.4 GHz), 802.11n (2.4 GHz and 5 GHz) and 802.11ac (5 GHz). Additional protocols have been proposed to the IEEE, including ad, ah, aj, ax, ay and az. You should be aware of the frequency that each protocol uses.
Here are issues to pay attention to:
• Operation of hardware. Modems are a type of Channel Service Unit/Data Service Unit (CSU/DSU) typically used for converting between digital and analog signals. In this scenario, the CSU handles communication to the provider network, while the DSU handles communication with the internal digital equipment (in most cases, a router). Modems typically operate at Layer 2 of the OSI model. Routers operate at Layer 3 of the OSI model and make the connection from a modem available to multiple devices in a network topology, including switches, access points and endpoint devices. Switches are typically connected to a router to enable multiple devices to use the connection. Switches help provide internal connectivity, as well as create separate broadcast domains when configured with VLANs. Switches typically operate at Layer 2 of the OSI model, but many switches can operate at both Layer 2 and Layer 3. Access points can be configured in the network topology to provide wireless access using one of the protocols and encryption algorithms discussed in section 4.1.
• Automated patch management. Patch management is the most critical task for maintaining endpoints. You must patch the operating system as well as all third-party applications. Beyond patching, staying up to date on the latest versions can bring enhanced security.
• Content-distribution networks (CDNs). CDNs are used to distribute content globally. They are typically used for downloading large files from a repository. The repositories are synchronized globally, and then each incoming request for a file or service is directed to the nearest service location. For example, if a request comes from Asia, a local repository in Asia, rather than one in the United States, would provide the file access. This reduces the latency of the request and typically uses less bandwidth. CDNs are often more resistant to denial of service (DoS) attacks than typical corporate networks, and they are often more resilient.
This section focuses on securing data in motion. You need to understand both design and implementation aspects.
• Voice. As more organizations switch to VoIP, voice protocols such as SIP have become common on Ethernet networks. This has introduced additional management, either by using dedicated voice VLANs on networks, or establishing quality of service (QoS) levels to ensure that voice traffic has priority over non-voice traffic. Other web-based voice applications make it more difficult to manage voice as a separate entity. The consumer Skype app, for example, allows for video and voice calls over the internet. This can cause additional bandwidth consumption that isn’t typically planned for in the network topology design or purchased from an ISP.
communication isn’t taking the expected or most efficient route to the destination. Which layer of the OSI model
should you troubleshoot?
e. Layer 7
2. A wireless network has a single access point and two clients. One client is on the south side of the building toward
b. Collision avoidance
c. Channel service unit
b. DNS
c. H.263
Answers to Domain 4 Review Questions
1. Answer: B
Explanation: In this scenario, collision avoidance is used. Wireless networks use collision avoidance specifically to
address the issue described in the scenario (which is known as the “hidden node problem”).
Domain 5. Identity and Access Management (IAM)
• Authorization. Traditional authorization systems rely on security groups in a directory, such as an LDAP directory. Based on your group memberships, you have a specific type of access (or no access). For example, administrators might grant one security group read access to an asset, while a different security group might get read/write/execute access to the asset. This type of system has been around a long time and is still the primary authorization mechanism for on-premises technologies. Newer authorization systems incorporate dynamic authorization or automated authorization. For example, the authorization process might check to see if you are in the Sales department and in a management position before you can gain access to certain sales data. Other information can be incorporated into authorization. For example, you can authenticate and get read access to a web-based portal, but you can’t get into the admin area of the portal unless you are connected to the corporate network.
Next, let’s look at some key details around controlling access to specific assets.
authorization solutions. This centralized access control is quite common because it gives organizations complete control no matter where the systems are.
• Devices. Devices include computers, smartphones and tablets. Today, usernames and passwords (typically from an LDAP directory) are used to control access to most devices. Fingerprints and other biometric systems are common, too. In high-security environments, users might have to enter a username and password and then use a second authentication factor (such as a code from a smartcard) to gain access to a device. Beyond gaining access to devices, you also need to account for the level of access. In high-security environments, users should not have administrative access to devices, and only specified users should be able to gain access to particular devices.
• SSO. Single sign-on provides an enhanced user authentication experience as the user accesses multiple systems and data across a variety of systems. It is closely related to federated identity management (which is discussed later in this section). Instead of authenticating to each system individually, the recent sign-on is used to create a security token that can be reused across apps and systems. Thus, a user authenticates once and then can gain access to a variety of systems and data without having to authenticate again. Typically, the SSO experience will last for a specified period, such as 4 hours or 8 hours. SSO often takes advantage of the user’s authentication to their computing device. For example, a user signs into their device in the morning, and later when they launch a web browser to go to a time-tracking portal, the portal accepts their existing authentication. SSO can be more sophisticated. For example, a user might be able to use SSO to seamlessly gain access to a web-based portal, but if the user attempts to make a configuration change, the portal might prompt for authentication before allowing the change. Note that using the same username and password to access independent systems is not SSO. Instead, it is often referred to as “same sign-on” because you use the same credentials. The main benefit of SSO is also its main downside: It simplifies the process of gaining access to multiple systems for everyone. For example, if attackers compromise a user’s credentials, they can sign into the computer and then seamlessly gain access to all apps using SSO. Multi-factor authentication can help mitigate this risk.
• LDAP. Lightweight Directory Access Protocol (LDAP) is a standards-based protocol (RFC 4511) that traces its roots back to the X.500 standard that came out in the early 1990s. Many vendors have implemented LDAP-compliant systems and LDAP-compliant directories, often with vendor-specific enhancements. LDAP
• Accountability. In this context, accountability is the ability to track users’ actions as they access systems and data. You need to be able to identify the users on a system, know when they access it, and record what they do while on the system. This audit data must be captured and logged for later analysis and troubleshooting. Important information can be found in this data. For example, if a user successfully authenticates to a computer in New York and then successfully authenticates to a computer in London a few minutes later, that is suspicious and should be investigated. If an account has repeated bad password attempts, you need data to track down the source of the attempts. Today, many companies are centralizing accountability. For example, all servers and apps send their audit data to the centralized system, so admins can gain insight across multiple systems with a single query. Because of the enormous amount of data in these centralized systems, they are usually “big data” systems, and you can use analytics and machine learning to unearth insights into your environment.
• Session management. After users authenticate, you need to manage their sessions. If a user walks away from the computer, anybody can walk up and assume their identity. To reduce the chances of that happening, you can require users to lock their computers when stepping away. You can also use session timeouts to automatically lock computers. You can also use password-protected screen savers that require the user to re-authenticate. You also need to implement session management for remote sessions. For example, if users connect from their computers to a remote server over Secure Shell (SSH) or Remote Desktop Protocol (RDP), you can limit the idle time of those sessions.
• Federated Identity Management (FIM). Note that this topic does not refer to Microsoft Forefront Identity Manager, which has the same acronym. Traditionally, you authenticate to your company’s network and gain access to certain resources. When you use identity federation, two independent organizations share authentication and/or authorization information with each other. In such a relationship, one company provides the resources (such as a web portal) and the other company provides the identity and user information. The company providing the resources trusts the authentication coming from the identity provider. Federated identity systems provide an enhanced user experience because users don’t need to maintain multiple user accounts across multiple apps. Federated identity systems use Security Assertion Markup Language (SAML), OAuth, or other methods for exchanging authentication and authorization information. SAML is the most common method for authentication in use today. It is mostly limited to use with web browsers, while OAuth isn’t limited to web browsers. Federated identity management and SSO are closely related. You can’t reasonably provide SSO without a federated identity management system. Conversely, you can use federated identities without SSO, but the user experience will be degraded because everyone must re-authenticate manually as they access various systems.
• Credentials management systems. A credentials management system centralizes the management of credentials. Such systems typically extend the functionality of the default features available in a typical directory service. For example, a credentials management system might automatically manage the passwords for accounts, even if those accounts are in a third-party public cloud or in a directory service on premises. Credentials management systems often enable users to temporarily check out accounts to use for administrative purposes. For example, a database administrator might use a credentials management system to check out a database admin account in order to perform some administrative work using that account. When they are finished, they check the account back in and the system immediately resets the password. All activity is logged and access to the credentials is limited. Without a credentials management system, you run the risk of having multiple credentials management approaches in your organization. For example, one team might use an Excel spreadsheet to list accounts and passwords, while another team might use a third-party password safe application. Having multiple methods and unmanaged applications increases risks for your organization. Implementing a single credentials management system typically increases efficiency and security.
• On premises. To work with your existing solutions and help manage identities on premises, identity services often put servers, appliances or services on your internal network. This ensures a seamless integration and provides additional features, such as single sign-on. For example, you might integrate your Active Directory domain with a third-party identity provider and thereby enable certain users to authenticate through the third-party identity provider for SSO.
• Cloud. Organizations that want to take advantage of software-as-a-service (SaaS) and other cloud-based applications need to also manage identities in the cloud. Some of them choose identity federation — they federate their on-premises authentication system directly with the cloud providers. But there is another option: using a cloud-based identity service, such as Microsoft Azure Active Directory or Amazon AWS Identity and Access Management. There are some pros with using a cloud-based identity service:
• The cloud provider often offers features not commonly found in on-premises environments. For example, a cloud provider can automatically detect suspicious sign-in attempts, such as those from a different type of operating system than normal or from a different location than usual, because they have a large amount of data and can use artificial intelligence to spot suspicious logins.
• For services in the cloud, authentication is local, which often results in better performance than sending all authentication requests back to an on-premises identity service.
• There might be a large effort required to use a cloud-based identity service. For example, you need to figure out new operational processes. You need to capture the auditing and log data and often bring it back to your on-premises environment for analysis. You might have to update, upgrade or deploy new
• Often, you still need an on-premises directory service.
• Many third-party identity services started off as solutions for web-based applications. They have since expanded to cover other use cases but still can’t be used for many day-to-day authentication scenarios. For example, most of them can’t authenticate users to their corporate laptops.
This section focuses on access control methods. To prepare for the exam, you should understand the core methods and the differences between them.
• Role-based access control (RBAC). RBAC is a common access control method. For example, one role might be a desktop technician. The role has rights to workstations, the anti-virus software and a software installation shared folder. If a new desktop technician starts at your company, you simply add them to the role group and they immediately have the same access as other desktop technicians. RBAC is a non-discretionary access control method because there is no discretion — each role has what it has. RBAC is considered an industry-standard good practice and is in widespread use throughout organizations.
• Mandatory access control (MAC). MAC is a method to restrict access based on a person’s clearance and the data’s classification or label. For example, a person with a Top Secret clearance can read a document classified as Top Secret. The MAC method ensures confidentiality. MAC is not in widespread use but is considered to provide higher security than DAC because individual users cannot change access.
• Discretionary access control (DAC). When you configure a shared folder on a Windows or Linux server, you use DAC. You assign somebody specific rights to a volume, a folder or a file. Rights could include read-only, write, execute, list and more. You have granular control over the rights, including whether the rights are inherited by child objects (such as a folder inside another folder). DAC is flexible and easy. It is in widespread use. However, anybody with rights to change permissions can alter the permissions. It is difficult to reconcile all the various permissions throughout an organization. It can also be hard to determine all the assets that somebody has access to, because DAC is very decentralized.
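The differences between the three methods can be illustrated with a short sketch. The roles, labels and permissions below are hypothetical examples, not a reference implementation:

```python
# Role-based: access follows role membership, not individual grants.
ROLE_PERMISSIONS = {"desktop_technician": {"workstations", "antivirus_console", "install_share"}}
user_roles = {"new_hire": {"desktop_technician"}}

def rbac_allowed(user, resource):
    return any(resource in ROLE_PERMISSIONS[r] for r in user_roles.get(user, set()))

# Mandatory: access is decided by comparing clearance to classification.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def mac_allowed(clearance, classification):
    return LEVELS[clearance] >= LEVELS[classification]

# Discretionary: the resource owner grants rights to specific users.
folder_acl = {"reports": {"bob": {"read", "write"}, "carol": {"read"}}}

def dac_allowed(user, resource, right):
    return right in folder_acl.get(resource, {}).get(user, set())

print(rbac_allowed("new_hire", "install_share"))   # True: granted via the role
print(mac_allowed("Secret", "Top Secret"))         # False: clearance is below the label
print(dac_allowed("carol", "reports", "write"))    # False: only read was granted
```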
2. The HR department creates a new employee record in the human capital management (HCM) system, which is the authoritative source for identity information such as legal name, address, title and manager.
3. The HCM syncs with the directory service. As part of the sync, any new users in HCM are provisioned in the directory service.
6. The employee leaves the company. The HR department flags the user as terminated in the HCM, and the HCM performs an immediate sync with the directory service. The directory service disables the user account to temporarily remove access.
7. The IT department, after a specific period (such as 7 days), permanently deletes the user account and all associated access.
authentication factors. What are they?
a. Something you make
f. Something you do
2. Your company is rapidly expanding its public cloud footprint, especially with Infrastructure as a Service (IaaS), and
• Minimize the overhead of managing the solution.
You need to choose the authentication solution for the company. Which solution should you choose?
3. A user reports that they cannot gain access to a shared folder. You investigate and find the following information:
• Neither the user nor any groups the user is a member of have been granted permissions to the folder.
b. Rule-based access control
c. MAC
1. Answer: B, C, E
Explanation: The three factors are something you know (such as a password), something you have (such as a smartcard or authentication app), and something you are (such as a fingerprint or retina). Using methods from multiple factors for authentication enhances security and mitigates the risk of a stolen or cracked password.
• Internal. An internal audit strategy should be aligned to the organization’s business and day-to-day operations. For example, a publicly traded company will have a more rigorous auditing strategy than a privately held company. However, the stakeholders in both companies have an interest in protecting intellectual property, customer data and employee information. Designing the audit strategy should include laying out applicable regulatory requirements and compliance goals.
• External. An external audit strategy should complement the internal strategy, providing regular checks to ensure that procedures are being followed and the organization is meeting its compliance goals.
• Penetration testing. A penetration test is a purposeful attack on systems to attempt to bypass automated controls. The goal of a penetration test is to uncover weaknesses in security so they can be addressed to mitigate risk. Attack techniques can include spoofing, bypassing authentication, privilege escalation and more. Like vulnerability assessments, penetration testing does not have to be purely logical. For example, you can use social engineering to try to gain physical access to a building or system.
generally thousands of them, and almost all of them follow existing policies. However, it can be important to show that someone or something did indeed access a resource that they weren't supposed to, either by mistake or on purpose.
must also include code review and testing for security controls. These reviews and controls should be built into the process just as unit tests and function tests are; otherwise, the application is at risk of being insecure.
• Test coverage analysis. You should be aware of the following coverage testing types:
• Black box testing. The tester has no prior knowledge of the environment being tested.
• Automated testing. A script performs a set of actions.
• Structural testing. This can include statement, decision, condition, loop and data flow coverage.
that the system responds appropriately.
• Interface testing. This can include the server interfaces, as well as internal and external interfaces. The server
6.3 Collect security process data
• Backup verification data. A strict and rigorous backup procedure is almost useless without verification of the data. Backups should be restored regularly to ensure that the data can be recovered successfully. When using replication, you should also implement integrity checks to ensure that the data was not corrupted during the transfer process.
• Training and awareness. Training and awareness of security policies and procedures are half the battle when implementing or maintaining these policies. This extends beyond the security team that is collecting the data, and can impact every employee or user in an organization. The table below outlines different levels of training that can be used for an organization.
The teams that analyze the security procedures should be aware of the output and reporting capabilities for the data. Any information that is of concern must be reported to the management teams immediately so that they are aware of possible risks or alerts. The level of detail given to the management teams might vary depending on their roles and involvement.
The type of auditing being performed can also determine the type of reports that must be used. For example, for an SSAE 16 audit, a Service Organization Control (SOC) report is required. There are four types of SOC reports:
6.5 Conduct or facilitate security audits
Security audits should occur on a routine basis according to the policy set in place by the organization. Internal auditing typically occurs more frequently than external or third-party auditing.
Domain 6 Review Questions
• Testers must test all aspects of the email application.
• Testers must not have any knowledge of the new email environment.
d. Static testing
e. Dynamic testing
b. External
c. Third-party
• The team can use technical methods or non-technical methods in attempting to bypass controls.
Which type of testing should you perform to meet the requirements?
Explanation: Third-party testing is specifically geared to ensuring that the other auditors (internal and external) are
properly following your policies and procedures.
Domain 7. Security Operations
This domain is focused on the day-to-day tasks of securing your environment. If you are in a role outside of operations (such as in engineering or architecture), you should spend extra time in this section to ensure familiarity with the information. You’ll notice more hands-on sections in this domain, specifically focused on how to do things instead of the design or planning considerations found in previous domains.
• Investigative techniques. When an incident occurs, you need to find out how it happened. A part of this process is the root cause analysis, in which you pinpoint the cause (for example, a user clicked on a malicious link in an email, or a web server was missing a security update and an attacker used an unpatched vulnerability to compromise the server). Often, teams are formed to help determine the root cause. Incident handling is the overall management of the investigation — think of it as project management but on a smaller level. NIST and others have published guidelines for incident handling. At a high level, it includes the following steps: detect, analyze, contain, eradicate and recover. Of course, there are other smaller parts to incident handling, such as preparation and post-incident analysis, like a "lessons learned" review meeting.
• Digital forensics tools, tactics and procedures. Forensics should preserve the crime scene, though in digital forensics this means the computers, storage and other devices, instead of a room and a weapon, for example. Other investigators should be able to perform their own analyses and come to the same conclusions because they have access to the same preserved evidence.
Your investigation will vary based on the type of incident you are investigating. For example, if you work for a financial company and there was a compromise of a financial system, you might have a regulatory investigation. If a hacker defaces your company website, you might have a criminal investigation. Each type of investigation has special considerations:
• Administrative. The primary purpose of an administrative investigation is to provide the appropriate authorities with all relevant information so they can determine what, if any, action to take. Administrative investigations are often tied to HR scenarios, such as when a manager has been accused of improprieties.
• An intrusion prevention system (IPS) can help block an attack before it gets inside your network. In the worst case, it can identify an attack in progress. Like an IDS, an IPS is often software or an appliance. However, an IPS is typically placed in line on the network so it can analyze traffic coming into or leaving the network, whereas an IDS typically sees intrusions after they've occurred.
• Security information and event management (SIEM). Companies have security information stored in logs across multiple computers and appliances. Often, the information captured in the logs is so extensive that it quickly becomes hard to manage and use. Many companies deploy a security information and event management (SIEM) solution to centralize the log data and make it simpler to work with. For example, if you need to find all failed logon attempts on your web servers, you could look through the logs on each web server individually. But if you have a SIEM solution, you can go to a portal and search across all web servers with a single query (see the sketch after this list). A SIEM is a critical technology in large and security-conscious organizations.
• Watermarking is the act of embedding an identifying marker in a file. For example, you can embed a company name in a customer database file or add a watermark to a picture file with copyright information.
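Here is a minimal sketch of the failed-logon query mentioned under SIEM above. The log records and field names are assumptions; a real SIEM collects and normalizes this data for you:

```python
from collections import Counter

# Hypothetical, already-centralized log records (a SIEM normalizes these for you).
events = [
    {"host": "web01", "event": "logon_failure", "user": "admin"},
    {"host": "web02", "event": "logon_failure", "user": "admin"},
    {"host": "web01", "event": "logon_success", "user": "jsmith"},
    {"host": "web03", "event": "logon_failure", "user": "root"},
]

# One "query" across every web server instead of checking each server's log individually.
failures = [e for e in events if e["event"] == "logon_failure"]
per_host = Counter(e["host"] for e in failures)
per_user = Counter(e["user"] for e in failures)

print(f"Total failed logons: {len(failures)}")
print("By host:", dict(per_host))
print("By account:", dict(per_user))
```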
• Asset management. Assets, such as computers, desks and software applications, have a lifecycle — simply put, you buy an asset, you use it and then you retire it. Asset management is the process of managing that lifecycle. You keep track of all your assets, including when you acquired each one, how much you paid for it, its support model and when you need to replace it. For example, asset management can help your IT team figure out which laptops to replace during the next upgrade cycle. It can also help you control costs by finding overlap in hardware, software or other assets.
• Configuration management. Configuration management helps you standardize configuration across your devices. For example, you can use configuration management software to ensure that all desktop computers have anti-virus software and the latest patches, and that the screen automatically locks after 5 minutes of inactivity. The configuration management system should automatically remediate most changes users make to a system. The benefits of configuration management include having a single configuration (for example, all servers have the same baseline services running and the same patch level), being able to manage many systems as a single unit (for example, you can deploy an updated anti-malware application to all servers in the same amount of time it takes to deploy it to a single server), and being able to report on the configuration throughout your network (which can help to identify anomalies). Many configuration management solutions are OS-agnostic, meaning that they can be used across Windows, Linux and Mac computers. Without a configuration management solution, the chances of having a consistent and standardized deployment plummet, and you lose the efficiency of configuring many computers as a single unit.
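As a minimal illustration of configuration drift detection and remediation, consider the sketch below. The baseline settings and functions are hypothetical; real configuration management agents apply operating system changes rather than updating a dictionary:

```python
# Desired baseline every desktop should match.
BASELINE = {"antivirus_installed": True, "screen_lock_minutes": 5, "patch_level": "2018-04"}

def check_drift(actual):
    """Return the baseline values for any settings that differ from the baseline."""
    return {k: v for k, v in BASELINE.items() if actual.get(k) != v}

def remediate(actual):
    """Reset drifted settings back to the baseline values."""
    actual.update(check_drift(actual))
    return actual

desktop = {"antivirus_installed": True, "screen_lock_minutes": 30, "patch_level": "2017-11"}
print("Drift found:", check_drift(desktop))
print("After remediation:", remediate(desktop))
```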
• Information lifecycle. Information lifecycle is made up of the following phases:
• Collect data. Data is gathered from sources such as log files and inbound email, and when users produce data such as a new spreadsheet.
• Delete data. The default delete action in most operating systems is not secure: The data is marked as deleted, but it still resides on the disks and can be easily recovered with off-the-shelf software. To have an effective information lifecycle, you must use secure deletion techniques such as disk wiping (for example, by overwriting the data multiple times), degaussing and physical destruction (shredding a disk).
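A conceptual sketch of the overwrite idea follows. Note that on SSDs and copy-on-write or journaling file systems, overwriting in place does not guarantee the old blocks are destroyed, which is why degaussing and physical destruction remain options:

```python
import os

def overwrite_and_delete(path, passes=3):
    """Overwrite a file's contents with random bytes several times, then remove it.
    Conceptual only: wear leveling and copy-on-write file systems can keep old blocks."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)

# Example usage (the file name is hypothetical):
# overwrite_and_delete("old_customer_export.csv")
```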
• Source files. If you rely on software for critical functions, you need to be able to reinstall that software at any time. Despite the advent of downloadable software, many organizations rely on legacy software that they purchased on disk years ago and that is no longer available for purchase. To protect your organization, you need to maintain copies of the media along with copies of any license keys.
• Operating system images. You need a method to manage your operating system images so that you can maintain clean images, update the images regularly (for example, with security updates), and use the images for deployments. Not only should you maintain multiple copies at multiple sites, but you should also test the images from time to time. While you can always rebuild an image from your step-by-step documentation, that lost time could cost your company money during an outage or other major issue.
• Capture as much data as you reasonably can. You need to know where a given product is installed. But you also need to know when it was installed (for example, whether a vulnerable version was installed after the company announced the vulnerability), the precise version number (because without that, you might not be able to effectively determine whether you are susceptible), and other details.
• Have a robust reporting system. You need to be able to use all the asset management data you collect, so you need a robust reporting system that you can query on demand. For example, you should be able to quickly get a report listing all computers running a specific version of a specific software product. And you should then be able to filter that data to only corporate-owned devices or laptop computers.
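The following sketch shows the kind of on-demand report described above, run against an assumed asset inventory; the records and field names are hypothetical:

```python
# Hypothetical asset inventory records.
inventory = [
    {"host": "lt-042",  "type": "laptop", "owner": "corporate", "product": "OpenSSL", "version": "1.0.1f", "installed": "2014-02-10"},
    {"host": "srv-007", "type": "server", "owner": "corporate", "product": "OpenSSL", "version": "1.0.2k", "installed": "2017-06-01"},
    {"host": "byod-11", "type": "laptop", "owner": "personal",  "product": "OpenSSL", "version": "1.0.1f", "installed": "2015-08-19"},
]

def report(product, version, owner=None, device_type=None):
    """Return inventory rows matching a product/version, optionally filtered further."""
    rows = [a for a in inventory if a["product"] == product and a["version"] == version]
    if owner:
        rows = [a for a in rows if a["owner"] == owner]
    if device_type:
        rows = [a for a in rows if a["type"] == device_type]
    return rows

# All corporate-owned laptops still running a specific (vulnerable) version.
for asset in report("OpenSSL", "1.0.1f", owner="corporate", device_type="laptop"):
    print(asset["host"], asset["installed"])
```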
• Response. When you receive a notification about an incident, you should start by verifying the incident. For example, if an alarm was triggered at a company facility, a security guard can physically check the surroundings for an intrusion and check the security cameras for anomalies. For computer-related incidents, it is advisable to keep compromised systems powered on to gather forensic data. Along with the verification process, during the response phase you should also kick off the initial communication with teams or people that can help with mitigation. For example, you should contact the information security team initially during a denial-of-service attack.
• Mitigation. The next step is to contain the incident. For example, if a computer has been compromised and is actively attempting to compromise other computers, the compromised computer should be removed from the network to mitigate the damage.
• Remediation. In this phase, you take additional steps to minimize the chances of the same or a similar attack being successful. For example, if you suspect that an attacker launched attacks from the company’s wireless network, you should update the wireless password or authentication mechanism. If an attacker gained access to sensitive plain text data during an incident, you should encrypt the data in the future.
• Lessons learned. During this phase, all team members who worked on the security incident gather to review the incident. You want to find out which parts of the incident management were effective and which were not. For example, you might find that your security software detected an attack immediately (effective) but you were unable to contain the incident without powering off all the company’s computers (less effective). The goal is to review the details to ensure that the team is better prepared for the next incident.
• Whitelisting and blacklisting. Whitelisting is the process of marking applications as allowed, while blacklisting is the process of marking applications as disallowed. Whitelisting and blacklisting can be automated. It is common to whitelist all the applications included on a corporate computer image and disallow all others.
• Security services provided by third parties. Some vendors offer security services that ingest the security-related logs from your entire environment and handle detection and response using artificial intelligence or a large network operations center. Other services perform assessments, audits or forensics. Finally, there are third-party security services that offer code review, remediation or reporting.
• Honeypots and honeynets. A honeypot is a decoy system designed to attract attackers and record their activity; a honeynet is a network of honeypots. However, honeypots and honeynets have been called unethical because of their similarities to entrapment. While many security-conscious organizations stay away from running their own honeypots and honeynets, they can still take advantage of the information gained from other companies that use them.
• Anti-malware. Anti-malware is a broad term that often includes anti-virus, anti-spam and anti-malware (with malware being any other code, app or service created to cause harm). You should deploy anti-malware to every possible device, including servers, client computers, tablets and smartphones, and be vigilant about product and definition updates.
• Automatic distribution of patches. Initially, deploy patches to a few computers in a lab environment and run them through system testing. Then expand the distribution to a larger number of non-production computers. If everything is functional and no issues are found, distribute the patches to the rest of the non- production environment and then move to production. It is a good practice to patch your production systems within 7 days of a patch release. In critical scenarios where there is known exploit code for a remote code execution vulnerability, you should deploy patches to your production systems the day of the patch release to maximize security.
• Reporting on patch compliance. Even if you have an automatic patch distribution method, you need a way to assess your overall compliance. Do 100% of your computers have the patch? Or 90%? Which specific computers are missing a specific patch? Reporting can be used by the management team to evaluate the effectiveness of the patch management system.
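A minimal sketch of a patch compliance calculation follows; the computer names and patch identifiers are hypothetical examples:

```python
# Hypothetical patch status pulled from a patch management system.
computers = {
    "web01": {"KB4100347", "KB4103721"},
    "web02": {"KB4100347"},
    "db01":  {"KB4103721"},
}
required_patch = "KB4103721"

compliant = [c for c, patches in computers.items() if required_patch in patches]
missing = sorted(set(computers) - set(compliant))
rate = 100 * len(compliant) / len(computers)

print(f"{required_patch}: {rate:.0f}% compliant")   # e.g. 67% compliant
print("Missing from:", ", ".join(missing))
```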
suddenly found themselves vulnerable and needed to take action (by replacing the certificates). Many vulnerability management solutions can scan the environment looking for vulnerabilities. Such solutions complement, but do not replace, patch management systems and other security systems (such as anti-virus or anti-malware systems).
Be aware of the following definitions:
• Identify the need for a change. For example, you might find out that your routers are vulnerable to a denial of service attack and you need to update the configuration to remedy that.
• Test the change in a lab. Test the change in a non-production environment to ensure that the proposed change does what you think it will. Also use the test to document the implementation process and other key details.
• Backup storage strategies. While most organizations back up their data in some way, many do not have an official strategy or policy regarding where the backup data is stored or how long the data is retained. In most cases, backup data should be stored offsite. Offsite backup storage provides the following benefits:
• If your data center is destroyed (earthquake, flood, fire), your backup data isn’t destroyed with it. In some cases, third-party providers of off-site storage services also provide recovery facilities to enable organizations to recover their systems to the provider’s environment.
• Fault tolerance. As part of providing a highly available solution, you need to ensure that your computing devices have multiple components of the same type and kind — network cards, processors, disk drives, etc. — to provide fault tolerance. Fault tolerance by itself isn't enough. For example, imagine a server with fault-tolerant CPUs. The server's single power supply fails. Now the server is down even though you have fault tolerance for the CPUs. As you can see, you must account for fault tolerance across your entire system and across your entire network.
7.12 Implement disaster recovery (DR) processes
• Communications. Many organizations rely on dedicated communication services or software to facilitate emergency company-wide communications or mass communications with personnel involved in the disaster recovery operation.
• Assessment. During the response phase, the teams verified that recovery procedures had to be initiated. In the assessment phase, the teams dive deeper to look at the specific technologies and services to find out details of the disaster. For example, if during the response phase, the team found email to be completely down, then they might check to find out if other technologies are impacted along with email.
• Restoration. During the restoration phase, the team performs the recovery operations to bring all services back to their normal state. In many situations, this means failing over to a secondary data center. In others, it might mean recovering from backups. After a successful failover to a secondary data center, it is common to start planning the failback to the primary data center once it is ready. For example, if the primary data center flooded, you would recover to the second data center, recover from the flood, then fail back to the primary data center.
• Full interruption. In a full interruption test, the organization halts regular operations and runs on a separate network, sometimes in a separate facility. Many times, a full interruption operation involves failing over from the primary data center to the secondary data center. This type of recovery testing is the most expensive, takes the most time, and exposes the company to the most risk of something going wrong. While those drawbacks are serious, full interruption tests are a good practice for most organizations.
7.14 Participate in business continuity (BC) planning and exercises
• Coordinate with external entities. Work with relevant external entities, such as the police department, government agencies, partner companies and the community.
• Access control. To maximize security, your facilities should restrict who can enter. This is often handled by key cards and card readers on doors. Other common methods are a visitor center or reception area with security guards and biometric scanners for entry (often required for data centers).
• Monitoring. As part of your perimeter security, you should have a solution to monitor for anomalies. For example, if a door with a card reader is open for more than 60 seconds, it could indicate that it has been propped open. If a person scans a data center door with a badge but that badge wasn’t used to enter any other exterior door on that day, it could be a scenario to investigate — for example, maybe the card was stolen by somebody who gained access to the building through the air vents. A monitoring system can alert you to unusual scenarios and provide a historical look at your perimeter activities.
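The two monitoring scenarios above can be expressed as simple checks. The event records and thresholds below are hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical badge/door events collected by a physical monitoring system.
events = [
    {"time": datetime(2018, 5, 1, 9, 0), "door": "DC-East", "badge": "B123", "action": "open"},
    {"time": datetime(2018, 5, 1, 9, 2), "door": "DC-East", "badge": None,   "action": "close"},
]

def door_held_open(open_evt, close_evt, limit_seconds=60):
    """Flag a door that stayed open longer than the allowed interval."""
    return (close_evt["time"] - open_evt["time"]) > timedelta(seconds=limit_seconds)

def badge_without_exterior_entry(badge, dc_entries, exterior_entries):
    """Flag a badge used on a data center door that never badged an exterior door that day."""
    return badge in dc_entries and badge not in exterior_entries

print(door_held_open(events[0], events[1]))                       # True: open for 2 minutes
print(badge_without_exterior_entry("B123", {"B123"}, {"B999"}))   # True: worth investigating
```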
This section covers personnel safety — making sure employees can safely work and travel. While some of the techniques are common sense, others are less obvious.
• Travel. The laws and policies in other countries can sometimes be drastically different from those in your own country. Employees must be familiar with the differences prior to traveling. For example, something you see as benign might be illegal and punishable by jail in another country. Other laws could make it difficult to do business in another country or put your company at risk. When traveling to other countries, you should familiarize yourself with the local laws to minimize danger to yourself and your company. Another key concern when traveling is protecting company data. To protect company data during travel, encryption should be used for both data in transit and data at rest. It is also a good practice (although often impractical) to limit connectivity via wireless networks while traveling. Take your computing devices with you when possible, since devices left in a hotel room are subject to tampering or theft.
• Emergency management. Imagine a large earthquake strikes your primary office building. The power is out, and workers have evacuated the buildings; many go home to check on their families. Other employees might be flying to the office for meetings the next day. You need to be able to find out if all employees are safe and accounted for; notify employees, partners, customers, and visitors; and initiate business continuity and/or disaster recovery procedures. An effective emergency management system enables you to send out emergency alerts to employees (many solutions rely on TXT or SMS messages to cellular phones), track their responses and locations, and initiate emergency response measures, such as activating a secondary data center or a contingent workforce in an alternate site.
• Duress. Duress refers to forcing somebody to perform an act that they normally wouldn't, due to a threat of harm, such as a bank teller giving money to a bank robber who brandishes a weapon. Training personnel about duress and implementing countermeasures can help. For example, at a retail store, the last twenty-dollar bill in the cash register can be attached to a silent alarm mechanism; when an employee removes it for a robber, the silent alarm alerts the authorities. Another example is a building alarm system that must be deactivated quickly once you enter the building. If the owner of a business is met at opening time by a crook who demands that she deactivate the alarm, instead of entering her regular disarm code, the owner can use a special code that deactivates the alarm and notifies the authorities that it was disarmed under duress. In many cases, to protect personnel safety, it is a good practice to have personnel fully comply with all reasonable demands, especially in situations where the loss is only a laptop computer or something similar.
subject. Then move on to Domain 8.
1. You are conducting an analysis of a compromised computer. You figure out that the computer had all known
g. The computer does not have a configuration management agent.
h. The computer does not have anti-malware.
a. System resilience
b. Quality of service
3. You are preparing your company for disaster recovery. The company issues the following requirements for disaster
recovery testing:
a. Partial interruption
b. Tabletop
Answers to Domain 7 Review Questions
1. Answer: A, B
Explanation: The first key requirement in this scenario is that the data center must not be impacted by the testing. This eliminates the partial interruption and full interruption tests because those impact the data center. The other key requirement is that IT teams must perform recovery steps. This requirement eliminates the tabletop testing because tabletop testing involves walking through the plans, but not performing recovery operations.
8.1 Understand and integrate security throughout the software development lifecycle (SDLC)
This section discusses the various methods and considerations when developing an application. The lifecycle of development does not typically have a final goal or destination. Instead, it is a continuous loop of efforts that must include steps at different phases of a project.
• Development methodologies. There are many different development methodologies that organizations can use as part of the development lifecycle. The following table lists the most common methodologies and the key related concepts.
5. Optimizing. There is a model of continuous improvement for the development cycle.
• Operation and maintenance. After a product has been developed, tested and released, the next phase of the process is to provide operational support and maintenance of the released product. This can include resolving unforeseen problems or developing new features to address new requirements.
8.2 Identify and apply security controls in development environments
The source code and repositories that make up an application can represent hundreds or thousands of hours of work and comprise important intellectual property for an organization. Organizations must be prepared to apply multiple layers of risk mitigation to protect the code, as well as the applications built from it.
• Programming languages. There are five generations of programming languages. The higher the generation, the more abstract the language is and the less a developer needs to know about the details of the operating system or hardware behind the code. The five generations are:
1: Machine language. This is the binary representation that is understood and used by the computer processor.
4: Very high-level language. Generation 4 languages further reduce the amount of code that is required, so programmers can focus on algorithms. Query and report languages such as SQL are common examples of generation 4 languages (general-purpose languages such as Python, C++, C# and Java are usually classified as generation 3).
5: Natural language. Generation 5 languages enable a system to learn and change on its own, as with artificial intelligence. Instead of developing code with a specific purpose or goal, programmers only define the constraints and goal; the application then solves the problem on its own based on this information. Prolog and Mercury are examples of generation 5 programming languages.
8.4 Assess security impact of acquired software
When an organization merges with or purchases another organization, the acquired source code, repository access and design, and intellectual property should be analyzed and reviewed. The phases of the development cycle should also be reviewed. You should try to identify any new risks introduced by the acquired software and its development process.
process, since it’s more difficult to rework code after it is in production. However, be aware that these tools can’t
to make calls to other applications. Without proper security, APIs are a perfect way for malicious individuals to
compromise your environment or application. The security of APIs starts with requiring authentication using a
secure your APIs is to use throttling (which protects against DoS or similar misuse), scan your APIs for weaknesses,
and use encryption (such as with an API gateway).
address all warnings that are generated.
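A minimal sketch of the two API controls just described, authentication and throttling, appears below. The key, the rate limit and the handler are hypothetical; in practice these checks usually live in an API gateway rather than in application code:

```python
import time

VALID_KEYS = {"example-api-key"}   # hypothetical issued API keys
RATE_LIMIT = 5                     # max requests per client per minute
_request_times = {}                # client key -> recent request timestamps

def handle_request(api_key, payload):
    # Authentication: reject callers that do not present a known key.
    if api_key not in VALID_KEYS:
        return 401, "authentication required"

    # Throttling: reject callers that exceed the allowed request rate.
    now = time.time()
    recent = [t for t in _request_times.get(api_key, []) if now - t < 60]
    if len(recent) >= RATE_LIMIT:
        return 429, "rate limit exceeded"
    _request_times[api_key] = recent + [now]

    return 200, f"processed {len(payload)} bytes"

print(handle_request("example-api-key", b"hello"))   # (200, ...)
print(handle_request("wrong-key", b"hello"))         # (401, ...)
```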
• Deny by default. By default, everybody should be denied access. Grant access as needed.
• Cryptographic practices. Protect secrets and master keys by establishing and enforcing cryptographic standards for your organization.
• System configuration. Lock down servers and devices. Keep software versions up to date with fast turnaround for security fixes. You can find good information for securing your servers and devices from vendor and industry hardening guides.
Domain 8 Review Questions
and requirements are discovered. Which development methodology should you use?
a. Agile
contain several forms that allow users to enter information to be saved in a database. The forms should require
users to submit up to 200 alphanumeric characters, but should prevent certain strings. What should you perform
d. Buffer regression
3. You plan on creating an artificial intelligence application that is based on constraints and an end goal. What
d. Generation 5
focuses on user stories to work through the development process.
2. Answer: A
and its goal are defined; then the program learns more on its own to achieve the goal.
About Netwrix
intelligence to identify security holes, detect anomalies in user behavior and investigate threat patterns in time to prevent
real damage.
Learn more about Netwrix Auditor and download its free 20-day trial by visiting
Corporate Headquarters:
300 Spectrum Center Drive, Suite 200, Irvine, CA 92618