There are four basic principles that apply for most security systems: authentication, authorization, confidentiality, and integrity.
Authentication (proving identity with credentials) –
Authentication is the process of proving the identity of a user of a system by means of a set of credentials. Credentials are the proof the system requires to validate the identity of the user. The user can be the actual customer, a process, or even another system. The identity is who the person is; once that identity has been validated through a credential, such as attaching a name to a face, the name becomes a principal.
In this case the principal is associated with the username. The principal represents the identity of the user for a given service. Since a user may access many different services that have different usernames, we need to introduce the concept of a subject. A subject represents a collection of principals.
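The principal-and-subject relationship can be sketched with the standard javax.security.auth.Subject class. This is a minimal sketch; the ServicePrincipal class and the usernames are hypothetical, invented for illustration.

```java
import java.security.Principal;
import javax.security.auth.Subject;

// Hypothetical Principal implementation: one identity for one service.
class ServicePrincipal implements Principal {
    private final String name;
    ServicePrincipal(String name) { this.name = name; }
    public String getName() { return name; }
}

public class SubjectDemo {
    // A subject collects the principals one user holds across services.
    public static Subject buildSubject() {
        Subject subject = new Subject();
        subject.getPrincipals().add(new ServicePrincipal("jdoe"));      // username for service A
        subject.getPrincipals().add(new ServicePrincipal("john.doe"));  // username for service B
        return subject;
    }
}
```

Calling `buildSubject().getPrincipals()` returns both principals, showing how a single subject aggregates the user's identities for different services.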
The credential set is highly dependent on the requirements of the organization's system for proving the identity, but is most likely a set of user attributes such as passwords, certificates, or smart cards. People in everyday life apply authentication at different levels. One level could be locking the front door to the house. Another could be verbally asking an employer to verify information that is circulating as a rumor.
Every day we meet people and introduce ourselves. This is a form of authentication. The person we meet may give a form of credential by describing his role or his work. Other forms of credentials are required when writing checks or using credit cards. If a cashier requires further validation from a person, he or she may ask for a driver's license. The driver's license also represents a form of credential to the cashier. The cashier is authenticating the person to allow a transaction, the purchase of an item, to take place in a store. E-commerce systems require a similar, digital form of authentication and credentials to access an online store.
Credentials allow one party to recognize another. Recognition can occur through various means. For example, people might use physical appearance or some other characteristic in order to identify someone. Using physical characteristics for authentication is known as biometrics. Biometric controls use the following characteristics to identify individuals:
• Fingerprints
• Voice
• Handwritten signature dynamics
• Retina and iris scans
• Palm scans and hand geometry
Biometric access control devices are considered physical access security control devices. In this article, I do not address physical security specifically. There are many ways you can physically secure your systems, such as using employee badges, multiple doors, and video surveillance.
Authorization (providing access to system resources) –
Once a user's identity has been validated, the user can be checked for access to a system resource. The process by which a user is given access to a system resource is known as authorization. For example, after a user logs in to a commerce system, which validates his or her identity, the user needs access to his or her account history; that is, the user needs authorization to retrieve the user's records. The user's records are the system resources needed by the user. The authorization process is the check by the organization's system to see whether the user should be granted access to the user's record. The user has logged in to the system, but he still may not have the permission necessary from the system to access the records.
You probably practice authorization every day by giving others access to your resources. Examples of authorization include inviting someone into your home, giving an administrator access to your computer, storing your money in a bank, or giving someone your credit card number so that the person can access your funds. In all these cases, it is important to be aware of the person's identity (by applying authentication) to make sure the person can be trusted with your resources.
When you give out your credit card number, you are authorizing the charge to your account, and your funds are the resource you are authorizing access to. Cognitively speaking, people may apply more authentication rules when giving a credit card number than a system can apply when giving access to a resource such as a database. An organization giving access to a system resource usually does a lookup, and based on the proven identity of a user match to the permission of the resource, it gives the user access to the resource. The authorization checks the permission and simply allows or denies access to the resource.
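The lookup-and-compare step described above can be sketched as a simple access control list: the system checks the authenticated principal against the permissions recorded for the resource and simply allows or denies access. The AclCheck class, resource names, and principals here are hypothetical, invented for illustration.

```java
import java.util.Map;
import java.util.Set;

public class AclCheck {
    // Hypothetical ACL: resource -> set of principals permitted to read it.
    private static final Map<String, Set<String>> READ_ACL = Map.of(
        "account-history", Set.of("jdoe", "admin"),
        "server-config",   Set.of("admin"));

    // Authorization check: look up the resource and match the proven identity.
    public static boolean canRead(String principal, String resource) {
        return READ_ACL.getOrDefault(resource, Set.of()).contains(principal);
    }
}
```

With this table, `canRead("jdoe", "account-history")` is allowed while `canRead("jdoe", "server-config")` is denied, mirroring the allow-or-deny decision described above.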
When deploying a system, access to system resources should also be mapped out. Security documents that detail the rights of individuals to specific resources must be developed. These documents must distinguish between the owners and the users of resources as well as read, write, delete, and execute privileges.
There might be property files that are used to configure servers. Sometimes these property files contain usernames and passwords so anyone who has read access to these files can potentially break into the server. Files such as these should be given a high level of security.
A common approach when deploying a system is giving a level of 1 to 5 to each file, 5 being the highest, and mapping out the permissions allowed to access the files based on the level of security. Allow only system administrative people to access level 5 files. This notion of categorizing files is a first step toward implementing an access control model. An access control model allows the operating system and other applications (such as SiteMinder) to enforce a company's security policy.
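The level-1-to-5 scheme might be sketched as follows. The class, file names, and level assignments are hypothetical; in practice the policy would be enforced by the operating system or an application such as SiteMinder rather than by application code.

```java
import java.util.Map;

public class FileClassification {
    // Hypothetical mapping of files to security levels (5 is the highest).
    private static final Map<String, Integer> FILE_LEVEL = Map.of(
        "readme.txt", 1,
        "server.properties", 5);   // contains usernames and passwords

    // A user may access a file only if his or her clearance meets the file's level.
    public static boolean mayAccess(int userClearance, String file) {
        // Unknown files are treated conservatively as level 5.
        return userClearance >= FILE_LEVEL.getOrDefault(file, 5);
    }
}
```

Only a clearance of 5 (the system administrators) reaches `server.properties`, while any user can read `readme.txt`.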
For example, the military uses a classification scheme that has unclassified, confidential, secret, and top secret. Mapping the level of security allowed for each file in a deployment of the system is an example of establishing an authorization rules set. An organization needs to have a plan for the rules for authorization. Who is allowed to access what? When developing such a plan, a question set is important. The question set addresses issues such as how important the file is, whether it contains sensitive material, and how this resource should be accessed and by whom. Examples of sensitive material include passwords and files that have settings that change the system, such as configuration files.
Confidentiality (protecting information from unauthorized readers) –
To protect data from being accessed by unauthorized readers, the data is changed to keep it confidential. This process is known as obfuscation (which literally means to "darken" - that is, to make obscure or to confuse). Confidentiality is the means of keeping information secret, not by blocking access to it, but by making the information unreadable by the public. Only people allowed to read the information can unlock the secret for the original message (usually with a key). Such techniques date back to about 1900 B.C. in Egypt. Throughout history, there has always been a process, or an organization, responsible for encrypting and decrypting messages. Before keys were used, anyone who understood the algorithm could decrypt a message, so knowledge of how an algorithm worked was kept secret, and a person educated in the algorithm had to understand both the encryption and how to reverse the process (for decryption).
Today the technique is applied in digital form, and the algorithms have been modified so that the secret no longer lies in the algorithm itself but in an extra variable called a key.
An organization should be concerned about confidentiality techniques whenever it wants to protect information that is being transmitted to another system. When the information is in its original form, it is called plaintext. When the information is in a protected form, it is called ciphertext. Ciphertext is produced by a cipher, which changes the plaintext into ciphertext; the cipher requires keys to change the information from one form to the other.
Two types of cryptographic systems are in use today for commercial applications. They are either symmetric or asymmetric systems. The symmetric systems use a shared secret key, whereas asymmetric systems use a key pair.
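A symmetric system can be sketched with the standard JCE classes: one shared secret key both encrypts and decrypts. The SymmetricDemo class is a hypothetical illustration; AES and the default cipher transformation are my choices here, not mandated by the text.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class SymmetricDemo {
    // Turn plaintext into ciphertext and back using one shared secret key.
    public static String roundTrip(String plaintext) {
        try {
            SecretKey key = KeyGenerator.getInstance("AES").generateKey(); // the shared secret
            Cipher cipher = Cipher.getInstance("AES");

            cipher.init(Cipher.ENCRYPT_MODE, key);
            byte[] ciphertext = cipher.doFinal(plaintext.getBytes("UTF-8"));

            cipher.init(Cipher.DECRYPT_MODE, key);          // the same key reverses the cipher
            return new String(cipher.doFinal(ciphertext), "UTF-8");
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

In an asymmetric system, by contrast, the key generated here would be replaced by a key pair: one key encrypts and only its partner decrypts.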
Many techniques for security have evolved over time but are based on algorithms that are decades old. A modern variation of passing a public key and checking the key's integrity is the X.509 certificate, also called a public certificate. The X.509 certificate is made resistant to forgery by having the issuing authority (the agency that creates the certificate) encrypt a digital signature with its private key; anyone with access to the authority's public key can then validate the signature and verify that the certificate has not been modified. The X.509 certificate combines several older algorithms: for example, the RSA algorithm, created decades ago, supplies the cipher for the key pair. X.509 is a relatively recent technique, but it builds on digital signatures, which have been around for a long time.
Integrity (validating your data) –
During the transmission or storage of data, information can be corrupted or changed, maliciously or otherwise, by a user. Validation is the process of ensuring data integrity. When data has integrity, it means that the data has not been modified or corrupted. One technique for ensuring data integrity is called data hashing. Under this process, the computer system hashes information and stores the hash result for later use. A hash is an algorithm that is applied to information and produces a practically unique result; if the hash is applied to different information, changed by even one character, it produces a different result.
When the integrity of the information needs to be checked, the process will hash the information to be checked and compare it with the stored hash. If both hash results match, the data hasn't changed. The integrity process may also be used during the transmission of data to ensure that the data did not get corrupted from one system to the next, and that the original information is still valid.
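The store-then-compare process can be sketched with the JCA's MessageDigest class. The IntegrityCheck class is a hypothetical illustration, and SHA-256 is an assumed algorithm choice.

```java
import java.security.MessageDigest;
import java.util.Arrays;

public class IntegrityCheck {
    // Hash the information; the result is stored for a later integrity check.
    public static byte[] hash(String data) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(data.getBytes("UTF-8"));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Re-hash the current data and compare it with the stored hash:
    // if both results match, the data has not changed.
    public static boolean unchanged(byte[] storedHash, String data) {
        return Arrays.equals(storedHash, hash(data));
    }
}
```

Changing even one character of the data produces a different digest, so the comparison fails and the corruption is detected.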
As with other basic security principles, it is easy to find processes for ensuring data in the non-digital world. For example, when you balance your checkbook, you are checking data integrity. If the balance is incorrect, especially in favor of the bank, you may call the bank to correct the error. By calling the bank, you are correcting the data that failed the bank's validation process.
The old adage "Information is power" is more true than ever for the corporate world. Even the release of very general information about a company (for example, an upcoming merger between company A and company B) can have a profound impact on a company. For example, in the case of a corporate merger, if confidential information about a proposed merger is leaked to the press or other companies, the merger could be in jeopardy. In today's corporate environment, these basic principles can have a dramatic impact on the security of the organization. Developers who implement security measures must be mindful of not only the complex security techniques but also the basic, commonsense concepts that apply to any discussion of confidentiality and security.
Protecting resources from the hacker
In today's corporate world, what we are protecting and from whom we are protecting it is important. The corporate world no longer revolves around written information as the medium of documentation; it revolves around digital information. Spies no longer wear trench coats and exchange information in dark alleys. Nowadays, spies are more often than not sitting in front of a computer screen. This new type of spy is called a hacker. He is trained in technology and willing to use it for a price. The hacker personality takes many forms and spans a wide range. Today's hacker profiles include:
- A disgruntled employee who releases viruses into the system before he quits his job.
- A teenager who uses the high school's computer to hack into an organization that somebody told him about in church.
Hackers no longer belong to a club that meets in the basement of a home; they belong to newsgroups. The hacker has evolved over time from computer amateur to computer professional and now practices social engineering. The hacker's prime target is an organization's Information Technology (IT) department, which should be ready for and expecting such attacks.
Hack attacks: different scenarios
Many company resources need protection from hack attacks, including e-mail messages, network addresses, lists of employees, and confidential documents describing technology. Any of these items may lead to other items that a hacker can use for intrusion. For example, a person's e-mail could contain a personal note along with the user's name. This personal information can be reused to try to guess the person's password; the password may be a pet's name, a favorite sports team, or the like. In another example, the user (or a hacker who knows the username) may go to a site that offers a 'send me my password' option for users who have forgotten their passwords. If the attacker can impersonate an SMTP server and the user's e-mail address, the attacker can receive e-mail addressed to the user. E-mail messages carrying passwords are often not themselves protected and can be sniffed.
Another means of attack is when the hacker sends an e-mail posing as the IT department and requests that the person install a new software patch in his computer. Once the person installs the patch, the computer is no longer secure - the attacker owns it. Like spies, the best hackers are those who are never caught and never heard of. They don't have a "hacker" license plate or an "I hack for a living" t-shirt. Appearance-wise, they blend in with their targets. The best hackers look like the people working in the IT department of an organization. They may even walk into the company carrying a fake badge and wearing a company shirt, and use a conference room just as if they worked there.
A common attack employed by hackers is the call-in approach: A hacker may impersonate an IT technician calling a salesperson, especially one offsite, and say that he needs to remotely install some software. If the salesperson believes the hacker, then the hacker can easily install any harmful software he wants. Another type of call-in is the hacker impersonating a salesperson to the IT technician, where the hacker tells the IT technician that his or her password is no longer working and the IT technician walks the hacker through logging on to the salesperson's machine.
Weapons against attack
The two most important weapons a company has against hackers, spies, and attacks are:
- Adequate security training for staff
- A secure infrastructure in place that allows the organization to adequately meet potential threats
The better IT professionals understand hackers, security measures, and potential attacks, the better the IT professionals are prepared to handle threats. Even a simple attack can do great damage if the IT professional is not prepared to handle it.
There have been many instances where organizations were hacked but were never aware of it until it was too late. An organization should work hard to ensure that its information and resources are protected, because it is the resources and information that make the organization. A recurrent problem I have observed through the years across companies and organizations is that confidential information received by one person (a director, vice president, and so on) is not secured. For information to be secure, each individual within the organization needs to understand how and what needs protection. To understand how information can be secured, you need to understand the security principles that form the foundation (or "pillars") of security.
This article introduces the Common Criteria (CC), an effort by the international community to define a standard set of evaluation criteria for IT security, and then presents a starting point for you to explore and gather your application's security needs. It includes a set of questions to guide you in defining your security objectives, which are the basis for defining your security requirements.
Criteria for Security Systems
The International Organization for Standardization (ISO) approved (in 1999) standard criteria to evaluate security within the computing industry in a document known as the Common Criteria (CC).
Origins of the Common Criteria
The Common Criteria is the result of the international community's efforts to create criteria to evaluate IT security. In the United States, the Trusted Computer System Evaluation Criteria (TCSEC) was developed in the 1980s and was the basis for efforts in Europe and Canada. In addition, in 1990 the ISO began to develop a standard set of security evaluation criteria.
In 1991, the European Commission (after a joint development by France, Germany, the Netherlands, and the United Kingdom) published the Information Technology Security Evaluation Criteria (ITSEC) v 1.2. In 1993, a combination of ITSEC and TCSEC was created as the Canadian Trusted Computer Product Evaluation Criteria (CTCPEC) v 3.0. Shortly after, in the United States, the draft Federal Criteria for Information Technology Security v 1.0 was published as a combination of both North American and European efforts. Finally, in 1999 the ISO adopted a set of common criteria for security evaluation in the CC document that brought all these efforts together.
Common Criteria building blocks
The Common Criteria defines a set of security functional requirements that are grouped into classes. A class is the most general grouping, and all members of the class share security objectives although they may differ in emphasis. CC is also composed of assurance requirements, which in turn are grouped into classes.
The Common Criteria is intended as a standard evaluation for security in products. Utilizing these criteria to describe an end-to-end security solution and using them for comparison is difficult, because solutions do not use these terms uniformly and combine them in complex ways. However, the CC provides a model for security evaluation and is accepted as a standard definition of security requirements. In addition, understanding these criteria may guide you through the selection of security solutions, implementations, and even requirements gathering.
Functional Requirements
The functional requirements are based on the specific requirements to support security, and they define the desired behavior. There are 11 classes of functional requirements that are further divided into families and component criteria. The 11 classes as described in the Common Criteria for Information Technology Security Evaluation documentation are as follows:
- Communication: This class is concerned with assuring the identity and non-repudiation of parties in a communication or data exchange.
- Component access: This class specifies the requirements for controlling the establishment of a user's session such as session locking and access history.
- Cryptographic support: This class provides support for the life-cycle management of cryptographic keys. In addition, it defines requirements for cryptographic key generation, distribution, access, and destruction.
- Identification and authentication: This class specifies requirements for functions to establish and verify a user's identity, which is required to ensure that users are associated with the proper security attributes such as roles and groups.
- Security management: This class specifies the management of several aspects such as management of data, security attributes (such as Access Control Lists), and security roles.
- Privacy: This class describes the requirements used to satisfy the user's privacy needs and still provide the system flexibility for controls over the operation of the system. Users' privacy can span from complete anonymity to different degrees of accountability.
- Protection of security functions: This class addresses the functional requirements related to the integrity and management of security functions and integrity of system data. This is very similar to the user data-protection requirements. The main difference is that the protection of security functions is focused on the system data rather than the user's.
- Resource utilization: This class addresses the support for and the availability of required resources such as processing capability. For instance, fault tolerance and priority of services are addressed by this class.
- Security audit: This class involves recognizing, recording, storing, and analyzing information related to security-relevant activities.
- Trusted path or channel: This class addresses requirements for trusted communication paths between users and the system. These requirements include providing assurance that the user is communicating with the correct system. It also addresses requirements for trusted channel between the system and other systems such as third-party applications.
- User data protection: This class specifies the requirements for security functions and policies to protect user data such as access control policies, stored data integrity, and data authentication.
Assurance Requirements
Assurance requirements specify that each operation (or function) needs to meet a minimum level or metric. For example, logging in to a system may require a high level of security. The security assurance requirements are grouped into eight classes. The eight classes as described in the Common Criteria for Information Technology Security Evaluation documentation are as follows:
- Configuration management: This class addresses the means for establishing that the functional requirements and specifications are realized in the implementation of the system while controlling changes that occur during development.
- Delivery and operation: This class addresses the requirements for correct delivery, installation, generation, and start-up of the system.
- Development: This class defines requirements for the stepwise refinement down to the actual implementation of security functions and provides information to determine whether the functional requirements have been met.
- Guidance documents: This class addresses the requirements for user and administrator guidance documentation.
- Life-cycle support: This class addresses the discipline and control in the process of refinement of the system during development and maintenance phases. It includes development security and flaw remediation.
- Tests: This class addresses the need to guarantee and verify that the security functional requirements are met. It includes independent tests and functional tests.
- Vulnerability assessment: This class addresses the existence of exploitable covert channels, the possibility of misuse or incorrect configuration, and exploitable vulnerabilities (introduced during development and/or operation) of the system.
- Assurance maintenance: This class addresses requirements that are aimed to assure the system continues to meet the security requirements after the system or its environment changes. These changes include the discovery of new threats, changes in user requirements, and correction of bugs.
Each of these assurance classes contains families, which share objectives. Each family contains a hierarchy of one or more components. However, the degree of assurance may change for a set of functional requirements. For example, the development process and requirement gathering for a solution impact the degree of severity of potential security vulnerabilities.
Also, the degree to which the security objectives are satisfied can be measured by the level of confidence that the implementation of the security functions is correct and that it actually satisfies the security objectives.
Understanding Your Security Needs
There are many different reasons why you need security in your solution. The reasons typically include objective and subjective motivations for the selection or definition of the secure solution. Here are a few motivating questions:
- How do you manage authorization? Only those users with the correct credentials should access the system resources (data, network, and the like).
- What levels of user privacy are required?
- How do you manage availability? How do you keep the system resources reachable?
- How do you enforce accountability? You need to identify who did what and when.
- How do you control access consistent with roles, responsibilities, and policies? Can you deny access based on user identity, clearance level, membership in a role, and/or user integrity level?
- How do you protect messages and data integrity during transmission? How are you going to protect data integrity in the overall system? Do all resources have the same importance? Is there a priority of services based on the protection level of resources?
- How do you protect and react to attacks? Is it prudent to have separated security domains? What is to be done when an attack is discovered?
- How do you ensure the correct and reliable function of components and services?
- How do you deploy your solution securely?
- How do you manage recovery? Define what is meant by minimal recovery, and define the different types of failures. You need to understand whether there is an acceptable level of loss of data and information.
- What level of auditing is required? What type of logs and data are necessary? What type of audit functions are necessary? What type of response is necessary in the case of a violation? What is the basic threshold for potential violations of the system?
- Is non-repudiation necessary? Do you need non-repudiation of origin and/or receipt? What services are required?
- How do you protect user data and to what extent? Are you allowing revocation of security attributes? Are they going to expire? Are you going to establish a user session? For how long? Are you going to limit the number of concurrent user sessions?
Once you have addressed these questions (and all the others that are specific to your needs), you are ready to understand your security risks, specify your security objectives, and finally, state your security requirements. After your security requirements are clearly specified, you can start the selection of the technologies that best address your needs. This process is summarized in Figure 1.
Figure 1: Understanding your security needs
Asserting your security risks
Once you understand how your system is required to address the security needs of the solution, you are ready to assert the possible security risks by analyzing your security requirements (derived from answering your basic questions) and the security environment in which the solution will exist.
You need to define the risks to your solution and define the measures necessary to manage these risks to an acceptable level. To aid you in this definition you can analyze the possible threats and determine which ones apply to your solution. For instance, do you need to protect against loss of confidentiality? How about protecting against loss of integrity - damage through unauthorized access?
Once you define the risks to your solution, you may want to understand the likelihood that an attack will succeed and the consequences it would have on your system. After this assertion, you are ready to clearly state the security objectives of your application.
Stating your security objectives
Your organization probably has security policies and assumptions - if it does not, it should! You must be consistent with these policies when stating your solution's security objectives. The security objectives address the security concerns and requirements of the overall system.
The security objectives are generated based on the following:
• Experience. Have you seen a need or risk before?
• Engineering judgment. Does it make sense?
• Security policies. Is it required, for example, to have three levels of logins?
• Risks acceptance decisions. Is it acceptable to have certain data compromised?
• Economics. Is it affordable?
The objectives can be satisfied by the solution itself or by the environment in which the solution will reside. After you have clearly stated the security objectives, they are refined into security requirements. The system meets the security objectives if it correctly and effectively implements all the security requirements.
Fulfilling Your Security Requirements
Once you have clearly defined your security requirements, you need to choose how to satisfy them. For instance, you may select to use a third-party security application, or integrate some of the available technologies and implement custom security functionality.
You may want to consider the following suggestions:
- Use firewalls and De-Militarized Zones (DMZs) as appropriate. It is a bad idea to connect your system directly to the Internet and open it to attacks. In addition, be careful with what you download to your system.
- Since security is in constant evolution and security holes are often addressed in new releases, always use the most current versions of third-party applications and network software.
- Perform audits frequently and investigate all anomalies.
- Keep current on security issues by looking into sites such as CERT (www.cert.org) that announce potential and actual security breaches.
- Consider your vulnerability to security breaches, such as attacks that can:
- Alter system resources. This is a serious attack that compromises data, and probably affects your level of service. The good news is that this type of attack is - in practice - difficult to achieve in most Java environments.
- Compromise a user's privacy. This is another serious attack that annoys users and has the potential to modify messages and e-mail. Using a combination of technologies gives you strong protection against this type of attack.
- Cause denial of service. This is a moderate attack and one of the most common. A denial of service attack can be accomplished by exhausting system resources such as consuming CPU cycles and allocating all available memory. Java does not provide a way to defend against this type of attack. Recovery may be as simple as rebooting the system - although you may lose some customers in the process.
The following sections describe some of the available Java technologies that you may consider.
Considering communication and trusted path or channel
As part of the CC, communication is the functional requirement that guarantees the identity of the parties during data exchange. This is a concept called non-repudiation. The most popular technologies are digital signatures and message digests. The trusted path or channel functional requirements address the need for trusted communication between the system and its users (including third-party applications).
Message digests are used to verify that the contents of a message have not been altered. Message digests, however, do not verify the message came from the supposed sender. Since algorithms for the digests are public, anyone can create a message, generate the digest, and then say it came from anyone in the world. In order to authenticate a message and its sender, you need to use digital signatures.
Digital signatures are a way to label messages or objects so that the creator of that message or object can be positively identified to the recipient or user. They also verify that the contents have not been altered.
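Signing and verifying can be sketched with the JCA's Signature class: the private key signs, and the matching public key verifies both the sender and that the contents have not been altered. The SignatureDemo class is hypothetical, and SHA256withRSA is an assumed algorithm choice.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignatureDemo {
    public static boolean signAndVerify(String message) {
        try {
            KeyPair pair = KeyPairGenerator.getInstance("RSA").generateKeyPair();

            // The sender signs the message with the private key.
            Signature signer = Signature.getInstance("SHA256withRSA");
            signer.initSign(pair.getPrivate());
            signer.update(message.getBytes("UTF-8"));
            byte[] signature = signer.sign();

            // The recipient verifies the signature with the public key.
            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(pair.getPublic());
            verifier.update(message.getBytes("UTF-8"));
            return verifier.verify(signature);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

If the message were altered after signing, or signed by a different private key, `verify` would return false.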
If you would like a third-party product for public key cryptography, products such as JSAFE, J/Crypto, and Cryptix are available. JSAFE offers support for RSA, DES, Triple-DES, RC2, RC4, and RC5; you can find JSAFE information at the www.rsasecurity.com/products/bsafe/index.html site. J/Crypto includes X.509 certificates, RC4, Triple-DES, hashes, and key exchange algorithms; information can be found at www.baltimore.com/products/jcrypto/index.html. Cryptix, which also provides RSA public key cryptography among other algorithms, can be found at www.cryptix.org.
The Java Secure Socket Extension (JSSE) is now integrated with the J2SDK v 1.4. It enables secure Internet communications by providing a framework and an implementation for a Java version of the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. JSSE provides functionality for data encryption, message integrity, and server and client authentication. You can use JSSE to secure data transmission between client and server using any application protocol, such as HTTP or Telnet.
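The usual JSSE entry point is a socket factory. A minimal sketch, using only the standard javax.net.ssl API, lists the cipher suites the runtime can negotiate; an application would go on to call factory.createSocket(host, port) to open a secure connection (the class name is illustrative):

```java
import javax.net.ssl.SSLSocketFactory;

public class JsseDemo {
    public static void main(String[] args) {
        // Obtain the default SSL/TLS socket factory provided by JSSE.
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();

        // The suites listed here determine which combinations of key
        // exchange, encryption, and integrity algorithms are available.
        for (String suite : factory.getSupportedCipherSuites()) {
            System.out.println(suite);
        }
    }
}
```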
Considering component access
Component access is the functional requirement that controls a user's session, session locking, and access history. Some of the Java technologies that control component access include the class loader, the sandbox architecture, Java protected domains, the byte code verifier, and the different containers (such as the EJB container and the Web container).
When you design a component, you must think about the access rights necessary to access the component. You can specify security roles based on these access rights, and use these roles during deployment. Once the component is deployed, the administrator of the J2EE server maps the roles to the users or groups of the default realm; this can be accomplished by using the deploytool.
J2EE has roles and groups that represent a logical grouping of users. A J2EE role is a category of users specific to an application and a group is a category of users for the entire J2EE server.
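In an EJB deployment descriptor, such a role and its method permissions might be declared as follows. This is a sketch only; the role name, bean name, and method are hypothetical:

```xml
<assembly-descriptor>
  <security-role>
    <description>Users allowed to place orders</description>
    <role-name>customer</role-name>
  </security-role>
  <method-permission>
    <role-name>customer</role-name>
    <method>
      <ejb-name>OrderBean</ejb-name>
      <method-name>placeOrder</method-name>
    </method>
  </method-permission>
</assembly-descriptor>
```

The administrator then maps the logical "customer" role to real users or groups in the server's realm at deployment time.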
Java-protected domains allow multiple and unique permissions for applications by enabling the use of permissions or preconfigured settings.
Considering cryptographic support
The cryptographic support requirements address the life-cycle management of cryptographic keys and their access, distribution, and destruction. The Java 2 SDK v 1.4 provides cryptographic services in the java.security and javax.crypto packages.
The Java Cryptography Architecture (JCA) is a framework for cryptographic capabilities in Java programs. These capabilities include support for RSA, digital signatures, and message digests. JCA allows these security components to have implementation independence and, if possible, algorithm independence. Other security components in the Java 2 platform include the Java Cryptography Extension (JCE) that provides key generation and cipher support, and the JSSE API discussed earlier.
The JCE also provides support for encrypted streams. Because it supplies strong encryption, the JCE has historically been subject to U.S. export restrictions.
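The key generation and cipher support the JCE provides can be sketched with a Triple-DES round trip through the javax.crypto API (the message and class name are illustrative, and ECB mode is used only to keep the example short):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class CipherDemo {
    public static void main(String[] args) throws Exception {
        // Generate a Triple-DES secret key (JCE key generation support).
        KeyGenerator kg = KeyGenerator.getInstance("DESede");
        SecretKey key = kg.generateKey();

        // Encrypt, then decrypt, a short message (JCE cipher support).
        Cipher cipher = Cipher.getInstance("DESede/ECB/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] ciphertext = cipher.doFinal("confidential data".getBytes("UTF-8"));

        cipher.init(Cipher.DECRYPT_MODE, key);
        byte[] plaintext = cipher.doFinal(ciphertext);
        System.out.println(new String(plaintext, "UTF-8")); // prints: confidential data
    }
}
```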
Considering identification and authentication
The identification and authentication functional requirements address the need to establish and verify the identity of a user. The Java Authentication and Authorization Service (JAAS), the Java GSS-API, and Kerberos are some of the Java technologies that address this functional requirement.
Several other technologies address the need for identification and authentication, such as digital signatures and certificates for authentication; the security manager and Java protected domains are used for authorization.
Kerberos v 5 is used for authentication and secure communication of client and server applications, and it is the basis for many authentication systems. The purpose of the Kerberos system is to authenticate one principal to another.
The Java Authentication and Authorization Service (JAAS) is a set of packages that implements the standard Pluggable Authentication Module (PAM) framework and services to authenticate users (determine who is executing the code) and to authorize (enforce access controls to) users. JAAS is now integrated into the Java 2 SDK v 1.4.
The Java GSS-API is used to secure the exchange of messages between applications. It contains the Java bindings for the Generic Security Services Application Program Interface (GSS-API), which defines a uniform API to security services including Kerberos.
Considering security audits
This functional requirement addresses the need to recognize, store, and analyze information about security activities in the system. Auditing is important because it provides a history of events that helps identify what happened during an attack or security breach. Security breaches are usually discovered through audit trails, which flag unauthorized access, indicate variations from normal operations, and help detect violations of the security guidelines and policies.
Because audit information is very important to your system, protect audit records at the highest level. Auditing has limited support in Java provided by the SecurityManager. There are plans to define a set of standard auditing functionalities for the future.
The security manager component is part of the core Java security architecture. It is responsible for determining whether certain requests to access particular valued resources are to be allowed.
You can extend the SecurityManager by subclassing it to add functionality, and you can grant or restrict permissions with the policytool program. If you wish to monitor security access, you can set the java.security.debug system property.
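The permissions that policytool edits live in a policy file. A representative entry might look like the following sketch, in which the codeBase URL and file paths are hypothetical:

```
grant codeBase "file:/apps/store/-" {
    permission java.io.FilePermission "/apps/store/data/-", "read";
};
```

This entry grants code loaded from the store application read access to its own data directory and nothing more, following the principle of least privilege.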
There are several concepts related to auditing that you should consider in your system, such as:
Monitoring, for intrusions and violations.
Security auditing, such as internal and external auditors that look for backup controls, contingency plans, and standards.
Audit trails, which keep records of transactions, events, and logs.
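Since the platform offers little built-in auditing, a simple audit trail can be kept with the standard java.util.logging API. This is a minimal sketch; the logger name, event names, and helper method are made up for the example:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class AuditTrail {
    private static final Logger AUDIT = Logger.getLogger("security.audit");

    // Record a security-relevant event together with the acting principal.
    static void record(String principal, String event) {
        AUDIT.log(Level.INFO, "principal={0} event={1}",
                new Object[] { principal, event });
    }

    public static void main(String[] args) {
        record("alice", "LOGIN_SUCCESS");
        record("mallory", "ACCESS_DENIED");
    }
}
```

In a real system the audit logger would write to protected, append-only storage, since audit records must themselves be protected at the highest level.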
Considering user privacy and user data protection
The functional requirement for privacy addresses the need to satisfy the user's privacy needs and is usually satisfied via encryption. Sun's Simple Key Management Internet Protocol (SKIP) allows parties in a communication to agree on an encryption scheme to ensure privacy.
The functional requirement for user data protection specifies the requirements for security functions and policies to protect user data such as access control policies, stored data integrity, and data authentication. The java.security.acl package defines support for access control lists. These can be used to restrict access to resources in any manner desired. The package consists of interfaces (and exceptions). The actual implementations, within Sun's JDK, are provided in the sun.security.acl package.
Security is not static. You need to have specific and clear security goals and requirements, understand the different technologies available, and use the most adequate for your needs. The Common Criteria effort was developed by the international community to evaluate security IT solutions and can be used as a roadmap to understand and derive your security objectives as well as to evaluate third-party security solutions.
In addition, the Java Security Model gives a flexible security model that can be used to satisfy your security requirements. You can use Java technologies to create secure and trustworthy applications. These technologies include the Java Cryptography Architecture (JCA), the Java Cryptography Extension (JCE), the Java Secure Socket Extension (JSSE), the Java Authentication and Authorization Service (JAAS), and the Java GSS-API.
Security, however, is an ongoing process, and new refinements and capabilities are to be expected in the future.