Popular cloud computing services: SaaS (Software as a Service)

One reason the ‘Cloud’ has become such a ubiquitous term is the SaaS model. Some argue that SaaS should not be grouped with the other Cloud packages at all, because in some respects it provides a different type of service than PaaS or IaaS.

SaaS, or Software as a Service, is the most basic service: essentially a cloud offering that individuals or companies can use to standardize routine tasks or services.  An email client is a natural fit for this type of cloud packaging, because the basics of an email client need to work across platforms.  Essentially, SaaS is data storage paired with specific software that uploads to and downloads from the provider's servers.  The whole process is streamlined by the Cloud provider, which does all the development of the software; in PaaS the platform leaves some of that work to the customer, so SaaS goes beyond the scope of what PaaS provides.  SaaS is really just the data-storage aspect of the Cloud offering, where the data is limited in scope to the type of software the SaaS provides.

The SaaS model, like most Cloud-based services, relies on a software interface on the client side that uploads to and downloads from the Cloud.  The Cloud operator uses intelligent software to handle data from its clients.  Examples include Google Drive, iCloud, and application stores; all of these services hold data and software remotely in the cloud, which the client can upload to and download from.  It takes little know-how to operate many SaaS offerings, and the Cloud essentially streamlines aspects of business operations.

Interoperability and security are still issues with the SaaS model.  One problem with SaaS is that the provider's software limits or interferes with control over your data.  The software operator remains in control of how the software formats that data.  The data a customer puts into the Cloud is read by their own network through a pre-designed software client, so porting or moving a customer's data is a cumbersome process.  Because portability is a problem for all Cloud services, the choice of your particular Cloud host is an extremely important decision.

What a SaaS customer must keep in mind is that they are limited by the software they are using.  If, for instance, they wanted to port data or use their information in some particular way, they would need to design their own system for accessing that information.  Porting data away from a SaaS provider is therefore a significant concern for the customer.  On the other hand, the software service is already pre-packaged into the product, so the customer does not have to worry about setting up a system; a customer using SaaS will usually be looking to outsource a significant amount of their IT needs to the Cloud provider.
And because Cloud customers come from all walks of life, a provider's client base is not limited to companies.

Many individuals use SaaS in their day-to-day activities; the model is so ubiquitous that many people use it without knowing it has a particular designation.  Whether SaaS should be considered in step with the other services is still debated, but the basics of a Cloud service are there.  The Cloud host maintains a large server or servers to hold the data being sent through its operation, and the client accesses that data as a way of interfacing with it, often in the form of communications.  The Cloud is a nascent industry, with new issues cropping up routinely.


Popular cloud computing services: the PaaS (Platform as a Service)

One model suited to many growing companies, particularly those developing new software, is PaaS.  PaaS, or Platform as a Service, sits somewhere between IaaS and SaaS: it is not as restrictive as SaaS, but not as flexible as IaaS.  PaaS lets customers scale their operations as they grow, and helps with development by providing a consistent platform for a group of developers.

One use for PaaS is developing programs from multiple remote locations, because its services streamline the programming side.  By standardizing the available tools, multiple people can remotely contribute to the programs being designed without stepping on each other's toes.  PaaS is often used by companies that offer specialized services to other companies: such a company will rent a cloud platform that gives it the tools to design programs for other companies to use.  The service is the platform, much like a Windows or Mac platform, except not as simple, and customized by the Cloud providers themselves.

PaaS is a convenient tool for developers and others who wish to coordinate their projects on interoperable software.  Developers often need to work in the same language as their coworkers so that they can integrate their designs with each other, and for companies with developers working remotely this keeps their work on the same level.  PaaS works for other kinds of organizations as well: a company that provides a software service to other companies may use a PaaS system to supplement its own servers.  The cloud is often used by start-ups to avoid many of the costs associated with owning a server.  PaaS is often a ‘pay as you go’ platform, meaning you pay as your needs arise; if you need more space, you pay according to that need.  This helps young start-ups keep their initial investment costs down, though a PaaS provider is by no means limited to start-ups as a customer base: a more established company might use a PaaS system to streamline operations.

Though the customer base for PaaS is generally developers who offer applications for consumption, the advantage of using a platform-specific Cloud service is that the customer can write all the code without being tied down to building the entire infrastructure, as with IaaS.  SaaS services simply would not work for developers.  The platform provided by the cloud service frees developers up to work on more essential tasks.

PaaS customers can be varied, and are not limited to software developers alone.  A PaaS provider might simplify its platform to provide more basic services that give a company many built-in features.  While SaaS and IaaS have elements of this, PaaS is more variable in what it offers from provider to provider.  The PaaS model is not as simple as a SaaS or an IaaS system.  IaaS systems tend to be more hardware-oriented, with the platform and software supplied by the customer; in this way IaaS is generally used by companies that already have a product to push out.  PaaS, by contrast, is aimed at developing companies and lets a developer use the Cloud offering at every stage of the development process.

PaaS in this way really enables new ideas and new developments in the tech world, essentially keeping an application or web developer equipped from beginning to end. As the company grows, it may want to consider hosting its own servers or, depending on its needs, renting from an IaaS host down the road.  But the availability of PaaS lets young companies start building and selling their products by saving them from buying the hardware and the various software development tools needed to really make excellent products.

Popular cloud computing services: the IaaS (Infrastructure as a Service)

A popular type of Cloud service these days is IaaS. It is a means of keeping costs down in the flexible area of hardware needs.  IaaS, or Infrastructure as a Service, is designed around providing a user with the hardware needed to host whatever project needs hosting.

The best way to think about this is that you are paying for the use of a network the way you would pay a tax for public infrastructure.  One day you may need the subway, another day the roads, and some days you have five trucks and a subway car on the infrastructure at once.  Infrastructure as a Service gives you real or virtual hardware that you can upload your information to: your programs, or the users of your webpage, run through the infrastructure of the Cloud host.  The host gives you storage and memory that scale according to your needs, but you have to build the project from the ground up to make use of the available infrastructure.

Getting down to basics allows the customer renting the Cloud service to scale their operations according to their needs.  For some, the question may be why go through the extra effort of providing your own platform.  The answer is that the scale of your operation is more flexible when you rent virtual room in an IaaS system.  While PaaS offers more software services to the customer, the open nature of IaaS gives a more established company the flexibility to create its own services with the rented hardware.  The provider supplies whatever hardware services it offers, and the customer rents them to keep costs down.  Any large data center needs to be kept in a cool, dry environment, and this, among other things, drives costs up, especially for a project with variable capacity needs.

Typical IaaS services include virtual server space, network connections, bandwidth, IP addresses and load balancers.  Like SaaS and PaaS, IaaS is accessed by a client over the internet; the Cloud in general is essentially an application or a web page that reaches the server through the internet and makes storage available to the user. The provider keeps its own costs down by letting customers decide what type of platform or software to install on the hardware, and in turn everything works together. The hardware of an IaaS provider is often housed in many different facilities, allowing it to provide a product at scale: by renting its hardware out to many users from a large facility or facilities, it can keep costs down and pass the savings on to the customer.
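As a concrete illustration of renting such resources programmatically, here is a minimal sketch assuming an AWS-style IaaS API accessed through the boto3 SDK; the region, machine image, key pair and security group are placeholders, not values from this article.

```python
# Minimal sketch: provisioning IaaS resources (a virtual server) on demand.
# Assumes an AWS account and the boto3 SDK; the AMI ID, key pair and
# security group below are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

# Rent a virtual server (the "infrastructure") for as long as it is needed.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",             # placeholder machine image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                       # placeholder SSH key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],   # placeholder firewall rules
)
print("Launched:", instances[0].id)
```

The same idea applies to the other resources listed above (addresses, load balancers, storage): each is requested through the provider's API and billed for the time it is held.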

The customer, in turn, does not have to run its own facility.  By not having to maintain a facility, it can scale its operation according to peak and low traffic times.  For instance, a weight-loss website might rent from an IaaS or PaaS provider to keep costs down during lulls in business, but after New Year's it might acquire a surge of customers that would be a huge expense to provide for during the rest of the year.

Between IaaS and PaaS providers, the user has to decide what kind of operational needs they have.  For a developer, PaaS might be the way to go; but for a more established company, or a company that already has a product lined up for its users, an IaaS provider can supply the hardware they need at a scalable rate, meeting their needs as they arise.  Time, money, efficiency and ease of use are important factors in any business, and choosing the right provider matters.

VM Snapshots for Efficient Forensic Investigation

Cloud computing is a technology that allows users to access storage, software, infrastructure and deployment environments on a pay-for-what-you-use model. The cloud environment is multi-tenant and dynamic by nature, which creates a range of legal, technical and organizational challenges around cloud storage.

Despite the dynamic nature of the cloud environment, digital investigations can still be carried out in it. Digital forensics has to follow a number of steps, as is the case with traditional computer forensics: Identification, Collection, Examination and Reporting/Presentation. The first step involves identifying the source of evidence, while the collection phase involves identifying the actual evidence and gathering the necessary data. The examination stage involves analyzing the forensic data, and in the reporting phase the evidence found is presented, for example in a court of law.

Digital investigators face challenges arising from these legal, technical and organizational requirements. If the cloud service provider (CSP) is compromised, the evidence it provides may not be genuine: the data you are relying on as evidence may have been injected by a malicious individual.

A large number of digital devices now use the cloud, but investigators are given little opportunity to obtain evidence from it. The service agreement may not state the role of the CSP in carrying out an investigation, or its responsibilities at the time a crime occurs. The CSP may have failed to keep logs, which are an important part of establishing evidence that a crime took place. The investigator also has to rely on the CSP to collect the necessary log files, and this is not easy; many researchers have noted that investigators have difficulty collecting them.

A cloud service provider offers its clients a number of different services, and customers from many different organizations may be accessing the same underlying services. Malicious users are capable of stealing sensitive data from other users, and this can undermine trust in the CSP. The cloud therefore needs to protect against such activity by using intrusion detection mechanisms to monitor customer VMs and detect malicious behaviour.

A user can use a physical machine to create a VM. Without the user having to request it, cloud software such as OpenStack and Eucalyptus can create snapshots of a running VM and store them until the VM is terminated; once the maximum number of stored snapshots is reached, the older ones are deleted from the system. Snapshots from a cloud environment are a great source of digital evidence and can be used to reconstruct events. It is hard, however, to store large numbers of snapshots, and snapshots have also been found to slow the virtual machine down, to a degree determined by how much the VM has changed since the snapshot was taken and how long the snapshot is kept.
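The snapshot mechanism itself lives at the hypervisor level. As a rough illustration (not OpenStack or Eucalyptus internals), the sketch below takes a snapshot of a running VM with the libvirt Python bindings; the connection URI, domain name and snapshot name are placeholders.

```python
# Rough illustration: snapshot a running VM via the libvirt Python bindings.
# The connection URI, domain name and snapshot name are placeholders.
import libvirt

conn = libvirt.open("qemu:///system")        # connect to the local hypervisor
dom = conn.lookupByName("tenant-vm-01")      # the VM being monitored

snapshot_xml = """
<domainsnapshot>
  <name>forensic-snap-001</name>
  <description>Snapshot retained as a source of digital evidence</description>
</domainsnapshot>
"""
snap = dom.snapshotCreateXML(snapshot_xml, 0)
print("Created snapshot:", snap.getName())
conn.close()
```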

Malicious activity can be identified when VM users carry out actions such as uploading malware to systems in the cloud infrastructure, accessing the system excessively from one location, or performing numerous downloads or uploads within a short period of time. Other suspicious activities include password cracking, launching dynamic attack points, and deleting or corrupting sensitive organizational data.
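As a minimal sketch of one such rule, the snippet below flags a user who performs an unusually large number of downloads within a short window; the event format, window size and threshold are illustrative assumptions, not values from the article.

```python
# Minimal sketch: flag a user performing too many downloads in a short window.
# The event format, window size and threshold are illustrative assumptions.
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
MAX_DOWNLOADS_PER_WINDOW = 100

recent = defaultdict(deque)   # user id -> timestamps of recent downloads

def observe_download(user_id: str, ts: datetime) -> bool:
    """Record a download event; return True if the user looks suspicious."""
    q = recent[user_id]
    q.append(ts)
    # Drop events that have fallen outside the sliding window.
    while q and ts - q[0] > WINDOW:
        q.popleft()
    return len(q) > MAX_DOWNLOADS_PER_WINDOW
```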

The Way Forward in Heterogeneous Datacenter Architectures

The use of heterogeneous datacenter architectures has been on the rise, and developers face numerous challenges when trying to adapt applications and systems to them. The good thing about cloud computing is that it abstracts the hardware away from programmers and end users. This allows the underlying architecture to be improved, such as by installing new hardware, without any changes being made to the applications.

Heterogeneity in processor architectures can help solve a number of problems. Heterogeneous processing elements can improve system efficiency through specialization: matching computations to the elements that have been specialized for them. Graphics processing units (GPUs) are a familiar example; others include media-functional units such as the SSE4 instruction set, parallel coprocessors like Intel's Xeon Phi, and encryption units. Future architectures are expected to feature multiple processors, each with heterogeneous internal components, interconnects, accelerators, and storage units with good efficiency. Companies that rely on large-scale datacenters, such as PayPal and Microsoft, are investigating how to deploy heterogeneous processing elements to improve the performance of their products.

Cloud computing technology that embraces datacenter heterogeneity should look for ways to exploit these varied processing elements for special purposes without losing the advantages of abstraction.

With Infrastructure as a Service, physical and virtual resources are exposed to the end user, and virtual machines give the user direct control of the operating system (OS). In traditional architectures, virtualization introduced significant overhead for workloads that are highly sensitive to system performance. However, modern technologies such as PCI passthrough and single-root I/O virtualization (SR-IOV) have reduced this overhead by giving direct access to accelerators and networking devices; the incurred overhead can be on the order of 1%.

Also, as datacenters grow more heterogeneous, IaaS deployments should be expected to expose varied components. To extend the flexibility of the homogeneous cloud to heterogeneous IaaS deployments, further research is needed in the following fields:

– Schemes for sharing accelerators.
– Optimal trade-offs between virtualization functionality and performance.
– Optimization techniques for power and utilization.
– Scheduling techniques for assigning jobs so that resources are allocated more efficiently.
– Schemes for cost and prioritization.
– Mechanisms for migrating jobs whose state resides in accelerators and on the host.

Heterogeneous computing should also involve finding ways to exploit new interconnect technologies, such as parallel file systems and software-defined networking, in combination with the heterogeneous compute elements.

In the case of Platform as a Service, heterogeneity can be exposed to the framework, exposed to the programmer, hidden by back-ends that target heterogeneous hardware, or hidden by libraries. Future research should focus on the following:

– Software architectures for accelerated libraries.
– Scheduling mechanisms that are aware of heterogeneity at the platform level.
– Application programming frameworks capable of either exposing or hiding heterogeneity from the programmer.
– Resource allocation among multiple heterogeneous frameworks or platforms, including frameworks that share the same datacenter.

Microsoft's Catapult project is an example of research targeting heterogeneous hardware. It was created to improve the performance of the Bing search engine, and it provides a valuable use case for how to exploit heterogeneous hardware for applications in commercial datacenters.

The Need for Standards in Cloud Computing Security

For enterprises to view cloud computing as the best choice for storing their data, standards are essential. Most IT enterprises are working hard to find a cloud that will help them cut expenses while meeting their business needs.

Today, most organisations allow only a fraction of their daily operations to be supported by the cloud. Although IT experts expect cloud adoption to accelerate in the near future, many enterprises still wonder whether the cloud is the best solution for storing their data. The main source of fear is security: enterprises are not sure whether their data will be safe in the cloud.

They also need to create on-demand services while maintaining regulatory and industry compliance. Enterprises shy away from storing their data in the cloud for fear that it is not protected. The cloud is porous in nature, which makes it an attractive target for attackers, and securing it has become increasingly complex.

Currently, there is no agreed definition of what effective cloud security is. No standards define what effective cloud security should look like, or what is expected of both providers and users to ensure that cloud data is well secured. In the absence of such standards, enterprises and providers are left to rely on data center standards, auditing specifications, industry mandates and regulatory requirements for guidance on how cloud environments should be protected.

Although this approach can make cloud computing somewhat complex, it is a reasonable way to ensure that cloud data is well secured. Both enterprises and cloud providers need to focus on the core elements of a well-secured cloud, such as identity and access management, virtualisation security, content security, threat management and data privacy.

It is also worthwhile for the industry to consider the NIST (National Institute of Standards and Technology) specifications regarding cloud security as a foundation for protecting data and services running in the cloud. Although most of these principles were written for government organisations, they are very relevant and applicable in the private sector.

The guidelines provided by NIST address serious cloud security issues such as identity and access management, architecture, trust, data protection, software isolation, incident response, availability and compliance. The body also describes the factors organisations have to consider in relation to public cloud outsourcing. The CSA (Cloud Security Alliance) is another good source of knowledge about how to secure data running in on-demand environments; it documents best practices for securing such data and provides the guidelines you need to judge whether your cloud provider is doing what it can to secure your data.

Working through such organisations is valuable, as they help both customers and providers lay the groundwork for a secure cloud environment. Established security principles should be applied as far as possible when securing cloud environments. With good standards for cloud computing, enterprises will have much stronger assurance that their data is safe in the cloud. This will improve their trust in the cloud provider and make cloud computing a viable solution for their IT needs, and existing customers will be better assured of the security of their data.

The Federal Risk and Authorization Management Program

FedRAMP (the Federal Risk and Authorization Management Program) is an accreditation process through which cloud providers align their security policies with those set out by the U.S. government. Although the process is relatively new, it has already improved cloud security and is expected to bring further improvements. It provides standardisation for both cloud services and products.

It aims to accelerate the adoption of secure cloud solutions by government agencies and to improve the security of cloud products and services. FedRAMP also ensures that consistent security is achieved across all government agencies, that services are automated, and that there is continuous monitoring.

FedRAMP provides a framework with standardised processes for security assessments, covering both the initial provisional authority to operate (P-ATO) and the path for ongoing assessment and authorisation. With this unified approach to cloud computing, the time, cost and resources needed to architect a cloud solution decrease, security improves, and uniform standards are created across all government agencies. This makes it easier for agencies to update their IT infrastructure so they can provide services and protect their data efficiently.

Although FedRAMP provides the framework, each agency is tasked with finding a cloud service provider (CSP) that holds a P-ATO and meets all FedRAMP requirements. The agency is also tasked with keeping a good inventory of its cloud services, which helps in developing a sound cloud strategy, and with reporting on its cloud service infrastructure annually. This work can be tiresome, which is why agencies usually choose a CSP that not only satisfies FedRAMP requirements but also fully understands the FedRAMP process and has the resources to keep supporting the agency.

As government agencies continue to adopt cloud computing, quality CSPs are a necessity, as they can help agencies reduce the risks in their cloud adoption strategies. Each agency is unique and may have its own requirements, and CSPs are not all the same. The best approach is for the agency to look for a flexible CSP, which makes it possible for the agency's specific security controls to be layered on top of the base FedRAMP infrastructure. Each agency will want a CSP staffed by experienced professionals who are willing to listen to the agency, understand its specific needs, and help it achieve its unique objectives.
For some enterprises, FedRAMP will have two meanings: a mechanism for measuring the success of their security, and a way of selling cloud services to government agencies that are under a mandate to migrate to the cloud.

Some of the organisations that run clouds adhering to the FedRAMP standards include Akamai, Amazon Web Services, Lockheed Martin and the U.S. Department of Agriculture. Both private-industry representatives and government stakeholders took part in developing the FedRAMP standards in 2012, which were geared towards reducing costs, increasing efficiency and improving safety in the cloud. If you are not a CSP, there are still several avenues for getting involved: you can use a FedRAMP-accredited provider, which signals that you take security seriously, or you can apply to become a Third-Party Assessment Organisation.

Logging Framework for Forensic Environments in Cloud Computing

The field of cloud computing has attracted many researchers. Knowing the conditions under which data is stored or processed in data centres matters, and it is precisely this that makes the cloud interesting for forensics. The use of cloud computing in forensics has increased as a result of the emergence of new technologies. The architecture of a cloud logging system is layered, composed of five layers, each with its own task. Let us discuss these layers:

The management layer
The modules responsible for most operations in the cloud are found at this level, together with those targeted at forensics, such as the “Cloud Forensic Module”.

Virtualisation layer
This is the second layer in the architecture; it is where we find the servers and workstations that host the virtual machines. Although the virtual machines are the main building blocks of the environment, virtualisation should also be enabled in the hardware. A Local Logging Module should be installed, via the Cloud Forensic Interface, on each physical machine; it is tasked with gathering raw data from the virtual machines being monitored. The investigator can adjust the amount of data collected, select a particular virtual machine to monitor, or choose to monitor all the activity taking place in a virtual machine.
For data to be gathered reliably from a virtual machine, the Local Logging Module has to be fully integrated with the hypervisor running on the physical machine. We have to be careful about what kind of data we intercept from the virtual machine before sending it on for further processing: although any activity can be intercepted, heavier interception carries penalties in processing speed and time.
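The sketch below shows roughly what such a Local Logging Module might look like: it polls the hypervisor for raw events from the selected virtual machines and forwards them, unmodified, to the storage layer. The hypervisor interface (`read_events`) and the storage endpoint are hypothetical placeholders, not part of any specific framework.

```python
# Minimal sketch of a Local Logging Module: poll the hypervisor for raw
# events from selected VMs and forward them to the storage layer.
# `hypervisor.read_events` and the storage endpoint are hypothetical.
import json
import time
import urllib.request

MONITORED_VMS = {"tenant-vm-01"}                              # investigator-selected VMs
STORAGE_ENDPOINT = "http://storage-layer.internal/raw-logs"   # placeholder URL

def forward(record: dict) -> None:
    """Send one raw record, unmodified, to the storage layer."""
    req = urllib.request.Request(
        STORAGE_ENDPOINT,
        data=json.dumps(record).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

def run(hypervisor, poll_interval: float = 1.0) -> None:
    """Gather raw events from monitored VMs; heavier interception costs more."""
    while True:
        for vm in MONITORED_VMS:
            for event in hypervisor.read_events(vm):   # hypothetical hypervisor API
                forward({"vm": vm, "event": event, "collected_at": time.time()})
        time.sleep(poll_interval)
```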

Storage layer
This is the third layer in the logging architecture. It is where the raw data sent from the modules in the virtualisation layer is stored. The raw data is sent by the logging modules in the form in which it was gathered from the hypervisor, so this layer behaves much like a distributed storage system.
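For illustration, a minimal ingest point for this layer might look like the following: it accepts raw records exactly as the logging modules send them and appends them to an append-only store. The port and file path are illustrative assumptions.

```python
# Minimal sketch of the storage layer's ingest point: accept raw records
# and append them to disk unmodified. Port and file path are assumptions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

RAW_LOG_PATH = "raw_logs.jsonl"   # append-only store of unmodified records

class RawLogHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        record = json.loads(self.rfile.read(length))
        # Store the record in the form it was gathered from the hypervisor.
        with open(RAW_LOG_PATH, "a") as f:
            f.write(json.dumps(record) + "\n")
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), RawLogHandler).serve_forever()
```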

Analysing layer
This is the fourth layer in the logging architecture. It is responsible for ordering, analysing, aggregating and processing the data stored in the previous layer. These processes use computing resources intensively, which is why the analysis is done offline and made available to investigators as soon as the job is ready. Once the process is complete, investigators have all the relevant information about what happened on the remotely monitored machine and can navigate through the virtual machine's activities to reconstruct events. In most cases this layer is implemented as a distributed computing application, especially when great computing power is required.
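A minimal sketch of the offline aggregation step might look like this: raw records are grouped per virtual machine and ordered by collection time so an investigator can browse a timeline. The record format (vm, event, collected_at) follows the earlier sketches and is an assumption, not part of the framework described here.

```python
# Minimal sketch of the offline analysis step: group raw records per VM and
# order them by collection time into a browsable timeline.
# The record format (vm, event, collected_at) is an assumption.
from collections import defaultdict
from typing import Dict, Iterable, List

def build_timelines(raw_records: Iterable[dict]) -> Dict[str, List[dict]]:
    """Return {vm_name: [records sorted by collection time]}."""
    timelines = defaultdict(list)
    for record in raw_records:
        timelines[record["vm"]].append(record)
    for events in timelines.values():
        events.sort(key=lambda r: r["collected_at"])
    return dict(timelines)
```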

Results storage layer
This is the fifth layer in the architecture. It stores the results published by the other layers. This is the layer through which the forensic investigator interacts with the virtual machine snapshots they are monitoring, using the Cloud Forensic Module provided by the management layer.

Improving SOC Efficiencies to Bridge the Security Skills Gap

Security alert storms are on the rise. Most organisations have chosen to deploy more security products, and each product comes with its own security alerts, workflows and interfaces.

These enterprises have gone on to recruit more security analysts to deal with the growing volume of alerts. However, most IT professionals lack security skills, which is why enterprises cannot find enough security analysts. Research has shown that demand for security analysts is growing by 18% annually.

The question now is, how do enterprises solve this problem? Automation is the best answer: it reduces the amount of work an analyst is expected to perform, although on its own it will not teach a junior analyst the tricks of the trade.

The following are some of the measures that can be taken to alleviate the skill-set gap:

Sharing knowledge and collaboration
Most tools for sales and marketing are built around collaboration: a tool that has succeeded in sales and marketing gives everyone the necessary information about customer actions, and anyone who uses the system can share their experience with other users. Each SOC analyst likewise has to be ready to learn from peer analysts and take part in the SOC operations workflow. When collaboration is built into the SOC workflow, you can detect duplicate incidents that are already under investigation, and junior analysts can be educated by learning from the senior analysts.

Training and play-books
Creating play-books is valuable, because analysts can read the processes described there and adhere to them in their daily practice. Many sales and marketing tools keep individuals working steadily and correctly by constantly reminding them of their next step and of when they are expected to involve or collaborate with others on the team. In a SOC this has to be done carefully so that the analyst's work is not disrupted. A play-book should always promote best practices that have been developed over time rather than put together hastily. Play-books should not be seen as static files sitting in your documents, but as a repository representing events that have taken place over time. They improve analyst productivity and make it easier to track future events.

Automation
Automation works best for tasks that are repeated and do not require human intervention. There are numerous such tasks in security, and they consume unnecessary time; in some cases, incidents go uninvestigated because the number of alerts overwhelms the available security personnel. It is always worthwhile to automate the routine tasks that take time away from harder work.
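As a minimal sketch of automating one such repetitive step, the snippet below enriches an alert and closes it automatically when its source is already known to be benign; the allow-list, alert format and queue name are illustrative assumptions.

```python
# Minimal sketch of automating one repetitive triage step: enrich an alert
# and auto-close it when its source is known-benign.
# The allow-list, alert format and queue name are illustrative assumptions.
KNOWN_BENIGN_IPS = {"10.0.0.5", "10.0.0.6"}   # e.g. internal vulnerability scanners

def triage(alert: dict) -> dict:
    """Handle the boring, repeatable part so analysts see fewer raw alerts."""
    source = alert.get("source_ip", "")
    if source in KNOWN_BENIGN_IPS:
        return {**alert, "status": "closed", "reason": "known benign source"}
    return {**alert, "status": "open", "assigned_to": "analyst-queue"}

alerts = [
    {"id": 1, "source_ip": "10.0.0.5", "rule": "port-scan"},
    {"id": 2, "source_ip": "203.0.113.7", "rule": "brute-force"},
]
for a in alerts:
    print(triage(a))
```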

Searching and Learning Historically
An analyst can make decisions quickly and easily from historical data on past security incidents. That data should cover more than just logs, and it should be analysed thoroughly. When it comes to security, acting on alerts should not require complex work.

Tracking incidents using a closed loop
It is good to analyse metrics such as the response to an incident, the workload imposed on the analyst, and the skills required over time; this will help you improve the organisation's security posture.