The Way Forward in Heterogeneous Datacenter Architectures

The use of heterogeneous datacenter architectures is on the rise, and developers face numerous challenges when adapting applications and systems to such environments. A key benefit of cloud computing is that it abstracts the hardware away from programmers and end users. This abstraction allows the underlying architecture to be improved, for example by installing new hardware, without requiring any changes to the applications.

Heterogeneity in processor architectures can help solve a number of problems. Heterogeneous processing elements can improve system efficiency through specialization, that is, by matching computations to the elements specialized for them. Graphics processing units (GPUs) are one example that has become established in the computing industry; others include media-functional units such as the SSE4 instruction set, parallel coprocessors such as Intel’s Xeon Phi, and encryption units. Future architectures are expected to feature multiple processors, each with heterogeneous internal components, interconnects, accelerators, and storage units offering good efficiency. Companies that rely on large-scale datacenters, such as PayPal and Microsoft, are investigating how to deploy heterogeneous processing elements to improve the performance of their products.

Cloud technologies that can be integrated with datacenter heterogeneity will let us exploit the varied processing elements for special purposes without losing the advantages associated with abstraction.

With infrastructure as a service (IaaS), physical and virtual resources are exposed to the end user, and virtual machines give the user direct control over the operating system (OS). In traditional architectures, virtualization introduced significant overhead for workloads that are highly sensitive to system performance. However, modern technologies such as PCI passthrough and single-root I/O virtualization (SR-IOV) have reduced this overhead by giving guests direct access to accelerators and networking devices; the remaining overhead can be as low as roughly 1%.
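As a rough illustration of how SR-IOV capacity might be inspected on a Linux host, the short Python sketch below lists the virtual functions available on each PCI device. It is only a sketch: it assumes the standard sysfs layout, in which SR-IOV-capable devices expose sriov_numvfs and sriov_totalvfs attributes.

# List SR-IOV virtual-function counts for PCI devices on a Linux host.
# Minimal sketch; assumes the standard sysfs layout under /sys/bus/pci.
from pathlib import Path

def sriov_capabilities(pci_root: str = "/sys/bus/pci/devices"):
    """Yield (device, configured VFs, maximum VFs) for SR-IOV capable devices."""
    for dev in Path(pci_root).iterdir():
        total = dev / "sriov_totalvfs"
        if total.exists():
            configured = int((dev / "sriov_numvfs").read_text())
            yield dev.name, configured, int(total.read_text())

if __name__ == "__main__":
    for name, used, cap in sriov_capabilities():
        print(f"{name}: {used}/{cap} virtual functions configured")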

Also, as datacenter heterogeneity increases, IaaS deployments should be expected to expose varied components. To extend the flexibility of the homogeneous cloud to heterogeneous IaaS deployments, further research is needed in the following fields:

– Schemes for sharing accelerators.
– Optimal trade-offs between virtualization functionality and performance.
– Optimization techniques for power and utilization.
– Scheduling techniques for determining job assignments so that resources are allocated more efficiently (see the sketch after this list).
– Schemes for cost and prioritization.
– Mechanisms for migrating jobs that hold state in the accelerators and on the host.
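As a minimal sketch of the heterogeneity-aware scheduling mentioned in the list above, the following Python fragment greedily matches jobs to the kind of processing element they prefer. The resource kinds, job fields, and greedy rule are illustrative assumptions, not a prescribed algorithm.

# Toy heterogeneity-aware scheduler: assign each job to a free resource of a
# preferred kind. Purely illustrative; real schedulers weigh many more factors.
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str          # e.g. "gpu-node-1"
    kind: str          # e.g. "gpu", "fpga", "cpu"
    free_slots: int

@dataclass
class Job:
    name: str
    preferred_kinds: list = field(default_factory=lambda: ["cpu"])

def schedule(jobs, resources):
    """Greedy assignment: the first free resource of a preferred kind wins."""
    assignments = {}
    for job in jobs:
        for kind in job.preferred_kinds:
            match = next((r for r in resources
                          if r.kind == kind and r.free_slots > 0), None)
            if match:
                match.free_slots -= 1
                assignments[job.name] = match.name
                break
        else:
            assignments[job.name] = None   # no suitable resource is free
    return assignments

pool = [Resource("gpu-node-1", "gpu", 2), Resource("cpu-node-1", "cpu", 8)]
work = [Job("train-model", ["gpu", "cpu"]), Job("etl-batch", ["cpu"])]
print(schedule(work, pool))   # {'train-model': 'gpu-node-1', 'etl-batch': 'cpu-node-1'}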

Heterogeneous computing should also involve finding ways to exploit new interconnect technologies, such as parallel file systems and software-defined networking, in combination with the heterogeneous compute elements.

In the case of platform as a service (PaaS), heterogeneity is either exposed to the framework or the programmer, or hidden behind backends and libraries that target heterogeneous hardware. Future research should focus on the following:

– Software architecture for accelerated libraries.
– Scheduling mechanisms that are aware of heterogeneity at the platform level.
– Application programming frameworks capable of either exposing heterogeneity to the programmer or hiding it.
– Allocating resources in a heterogeneous manner among multiple frameworks or platforms, including frameworks that share the same datacenter.

Microsoft’s Catapult project is an example of research targeting heterogeneous hardware. It was created to improve the performance of the Bing search engine, and it provides a valuable use case for exploiting heterogeneous hardware in commercial datacenter applications.

The Need for Standards in Cloud Computing Security

For enterprises to view cloud computing as the best choice for storing their data, standards are essential. Most IT enterprises are working hard to find a cloud that will help them cut expenses while meeting their business needs.

Today, most organisations allow only a fraction of their daily operations to be supported by the cloud. Although IT experts expect cloud adoption to accelerate in the near future, many enterprises are still wondering whether the cloud is the best solution for storing their data. The main source of fear is security: enterprises are not sure whether their data will be safe in the cloud.

They also need to create on-demand services while maintaining regulatory and industry compliance. Enterprises shy away from storing their data in the cloud for fear that it is not protected. The cloud is porous by nature, which makes it an attractive target for attackers and makes securing it increasingly complex.

Currently, there is no agreed definition of effective cloud security. No standards define what effective cloud security should look like, or what is expected from both providers and users to ensure that cloud data is well secured. Instead, enterprises and providers are left to rely on datacenter standards, auditing specifications, industry mandates, and regulatory requirements for guidance on how cloud environments should be protected.

Although this approach can make cloud computing somewhat complex, it is a reasonable way to ensure that cloud data is well secured. Both enterprises and cloud providers need to focus on the core elements of a well-secured cloud, such as identity and access management, virtualisation security, content security, threat management and data privacy.

It is also worthwhile for the industry to consider the NIST (National Institute of Standards and Technology) specifications on cloud security, as they form a good foundation for protecting the data and services running in the cloud. Although most of these principles were written for government organisations, they are just as relevant and applicable in the private sector.

The NIST guidelines address serious cloud security issues such as identity and access management, architecture, trust, data protection, software isolation, incident response, availability and compliance. The body also states the factors organisations have to consider when outsourcing to a public cloud. The CSA (Cloud Security Alliance) is another good source of guidance on securing data in on-demand environments; it documents best practices for securing such data and provides the guidelines needed to judge whether your cloud provider is doing what it can to secure your data.

Working through such organisations is valuable because they help both customers and providers lay the groundwork for creating a secure cloud environment. Security principles should be applied as widely as possible when securing cloud environments. With good standards for cloud computing, enterprises will have far more assurance that their data is safe in the cloud. This will improve their trust in cloud providers and make cloud computing a natural solution to their IT needs, and existing customers will be more assured of the security of their data.

The Federal Risk and Authorization Management Program

FedRAMP (the Federal Risk and Authorization Management Program) is an accreditation process through which cloud providers align their security practices with those required by the U.S. government. Although the program is relatively new, it has already brought a number of improvements to cloud security and is expected to bring more. The approach provides standardisation for both cloud services and products.

It aims to accelerate the adoption of secure cloud solutions by government agencies and to improve the security of cloud products and services. FedRAMP also ensures that consistent security is achieved across all government agencies, that services are automated, and that continuous monitoring is in place.

FedRAMP provides a framework with standardised processes for security assessments, covering the path to the initial provisional authority to operate (P-ATO) as well as ongoing assessment and authorisation. With a unified approach to cloud computing, the time, cost and resources needed to architect a cloud solution decrease, security improves, and uniform standards are created across all government agencies. This makes it easier for agencies to update their IT infrastructure so that they can deliver services and protect their data more efficiently.

Although FedRAMP provides the framework, each agency is tasked with finding a cloud service provider (CSP) that holds a P-ATO and meets all FedRAMP requirements. The agency is also tasked with taking a thorough inventory of its cloud services, which helps in developing a sound cloud strategy, and with reporting on its cloud service infrastructure annually. This task can be tiresome, which is why agencies usually choose a CSP that not only satisfies FedRAMP requirements but also understands the whole FedRAMP process and has the resources to keep supporting the agency.

As government agencies continue to adopt cloud computing, quality CSPs are a necessity because they help agencies reduce the risk in their cloud adoption strategies. Each agency is unique and may have its own requirements, and CSPs are not all the same. The best approach is for the agency to look for a CSP that is flexible, so that the agency’s specific security controls can be layered on top of the base FedRAMP infrastructure. Each agency will want a CSP staffed by experienced professionals who are willing to listen to the agency, understand its specific needs, and help it achieve its unique objectives.
For some enterprises, FedRAMP has two meanings: a mechanism for measuring the success of their security, and a way to sell cloud services to government agencies that are under a mandate to migrate to the cloud.

Some of the organisations that run clouds adhering to the FedRAMP standards include Akamai, Amazon Web Services, Lockheed Martin and the U.S. Department of Agriculture. Both private-industry representatives and government stakeholders took part in developing the FedRAMP standards in 2012, which were geared towards reducing costs, increasing efficiency and increasing the level of safety in the cloud. If you are not a CSP, there are still several avenues for getting involved: you can use a FedRAMP-compliant provider, which signals that you take security seriously, or you can apply to become a Third-Party Assessment Organisation.

Logging Framework for Forensic Environments in Cloud Computing

The field of cloud computing has attracted many researchers. Knowing the conditions under which data is stored and processed in data centres is of particular interest to cloud computing forensics, and the use of cloud computing in forensics has increased as new technologies have emerged. The architecture of a cloud logging system is layered and composed of five layers, each with its own task. Let us discuss these layers:

The management layer
The modules responsible for most operations in the cloud are found at this level, together with those targeted at forensics, such as the “Cloud Forensic Module”.

Virtualisation layer
This is the second layer in the architecture, where we find the servers and workstations that host the virtual machines. Although the virtual machines are the main building blocks of the environment, virtualisation should also be enabled in the hardware. A Local Logging Module, reached through the Cloud Forensic Interface, should be installed on each physical machine; it is tasked with gathering raw data from the virtual machines being monitored. The investigator can adjust the amount of data collected, select a particular virtual machine to monitor, or choose to monitor all the activity taking place in a virtual machine.
For data to be gathered reliably from a virtual machine, the Local Logging Module has to be fully integrated with the hypervisor running on the physical machine. We also have to be selective about the kind of data we intercept from the virtual machine and send for further processing: although any activity can be intercepted, doing so incurs penalties in processing speed and time.
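As a minimal sketch of such a Local Logging Module, the Python fragment below filters the events intercepted from the hypervisor by virtual machine and forwards only the selected ones to the storage layer. The event structure, the hypervisor hook, and the forwarding callback are illustrative assumptions rather than part of any specific hypervisor API.

# Toy Local Logging Module: keep events from monitored VMs and forward them
# as raw JSON records to the storage layer. All names are illustrative.
import json
import time
from typing import Callable

class LocalLoggingModule:
    def __init__(self, forward: Callable[[str], None], monitored_vms: set):
        self.forward = forward            # e.g. a sender to the storage layer
        self.monitored_vms = monitored_vms

    def handle_event(self, vm_id: str, event_type: str, payload: dict):
        """Timestamp and forward events only for the VMs being monitored."""
        if vm_id not in self.monitored_vms:
            return
        record = {"ts": time.time(), "vm": vm_id,
                  "type": event_type, "data": payload}
        self.forward(json.dumps(record))

# Usage: pretend the hypervisor calls handle_event for every intercepted action.
module = LocalLoggingModule(forward=print, monitored_vms={"vm-42"})
module.handle_event("vm-42", "file_write", {"path": "/tmp/evidence.bin"})
module.handle_event("vm-07", "net_conn", {"dst": "10.0.0.5"})   # ignored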

Storage layer
This is the third layer in the logging architecture, where the raw data sent by the modules in the virtualisation layer is stored. The raw data is sent by the logging modules in the same form in which it was gathered from the hypervisor, so this layer functions much like a distributed storage system.

Analysing layer
This is the fourth layer in the logging architecture. It is responsible for ordering, analysing, aggregating and processing the data stored in the previous layer. These processes use computing resources intensively, so the analysis is done offline and made available to the investigators as soon as the job is ready. Once the process is complete, the investigators have all the relevant information about what happened on the remotely monitored machine, and they can navigate through the virtual machine’s activities to establish what happened. In most cases, this layer is implemented as a distributed computing application, especially when great computing power is needed.
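A minimal sketch of the kind of offline aggregation this layer performs is shown below: it groups the raw records per virtual machine and orders them by timestamp so that an investigator can replay the activity. The record format matches the toy logging module sketched earlier and is purely illustrative.

# Toy offline analysis step: order and group raw log records per VM.
import json
from collections import defaultdict

def build_timelines(raw_lines):
    """Return {vm_id: [records sorted by timestamp]} from raw JSON lines."""
    timelines = defaultdict(list)
    for line in raw_lines:
        record = json.loads(line)
        timelines[record["vm"]].append(record)
    for records in timelines.values():
        records.sort(key=lambda r: r["ts"])
    return dict(timelines)

raw = [
    '{"ts": 2.0, "vm": "vm-42", "type": "net_conn", "data": {"dst": "10.0.0.5"}}',
    '{"ts": 1.0, "vm": "vm-42", "type": "file_write", "data": {"path": "/tmp/x"}}',
]
print(build_timelines(raw))   # the file_write now precedes the net_conn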

Results storage layer
This is the fifth layer in the architecture, where the results produced by the other layers are stored. It is also the layer through which the forensic investigator interacts with the snapshots of the virtual machines being monitored, using the Cloud Forensic Module provided by the management layer.

Improving SOC Efficiency to Bridge the Security Skills Gap

Security alert storms are on the rise. Most organisations have chosen to deploy more security products, and each product comes with its own security alerts, workflows and interfaces.

These enterprises have also recruited more security analysts to deal with the growing volume of security alerts. However, most IT professionals lack security skills, which is why enterprises have not been able to find enough security analysts. Research has shown that demand for security analysts is increasing by 18% annually.

The question now is, how do enterprises solve this problem? Automation is the best answer: it reduces the amount of work an analyst is expected to perform, although automation alone will not teach a junior analyst the tricks of the trade.

The following are some of the measures that have been taken to alleviate the skills gap:

Sharing knowledge and collaboration
Most tools for sales and marketing are focused on collaboration. A tool that has succeeded in sales and marketing will give you the necessary information about customers’ actions, and anyone who uses the system can share their experience with other users. Each SOC analyst has to be ready to learn from peer analysts and to take part in the SOC operations workflow. When collaboration is built into the SOC workflow, you will be able to detect duplicate incidents under investigation, and junior analysts can be coached so that they learn from the senior analysts.

Training and play-books
Creating play-books is valuable because analysts can read the processes described in them and adhere to those processes in their daily practice. Most sales and marketing tools keep individuals working hard and in the proper way by constantly reminding them of their next step and of when they are expected to involve or collaborate with others in the team. In a SOC, this has to be done carefully so that the analyst’s work is not interfered with. A play-book should always promote best practices that have been developed over a period of time rather than put together hastily. Play-books should not be seen as static files sitting among your documents, but as a repository representing events that have taken place over time. They improve the productivity of the analyst and, at the same time, make it easier to track future events.
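As a minimal sketch of treating a play-book as a structured, evolving repository rather than a static file, each play-book could be stored as ordered steps with notes accumulated from past incidents. The step fields and the example incident type are assumptions made for illustration.

# Toy play-book repository: ordered steps per incident type, with lessons
# accumulated from past runs. Field names are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    action: str
    notes: List[str] = field(default_factory=list)   # lessons from past runs

@dataclass
class Playbook:
    incident_type: str
    steps: List[Step]

    def record_lesson(self, step_index: int, lesson: str):
        """Capture what an analyst learned while running this step."""
        self.steps[step_index].notes.append(lesson)

phishing = Playbook("phishing", [
    Step("Verify the reported email headers"),
    Step("Search mail gateway logs for other recipients"),
    Step("Reset credentials of affected users"),
])
phishing.record_lesson(1, "Also check OAuth app grants, not just inbox rules")
for number, step in enumerate(phishing.steps, 1):
    print(number, step.action, step.notes)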

Automation
Automation is best applied to tasks that are repetitive and require no human intervention. There are numerous such tasks in security, and they consume unnecessary time; in some cases, incidents go uninvestigated because the number of alerts overwhelms the available security personnel. It is also worth automating the tasks that are too complex for us to perform consistently by hand.
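As a minimal sketch of this kind of automation, the fragment below deduplicates alerts and routes only the high-severity ones to an analyst. The alert fields and the severity threshold are illustrative assumptions.

# Toy alert triage automation: deduplicate alerts and route by severity.
from collections import Counter

def triage(alerts):
    """Group identical alerts and decide which ones need an analyst."""
    seen = Counter((a["rule"], a["host"]) for a in alerts)
    queue, auto_closed = [], []
    for (rule, host), count in seen.items():
        severity = next(a["severity"] for a in alerts
                        if a["rule"] == rule and a["host"] == host)
        target = queue if severity >= 7 else auto_closed
        target.append({"rule": rule, "host": host, "count": count})
    return queue, auto_closed

alerts = [
    {"rule": "brute-force-ssh", "host": "web-1", "severity": 8},
    {"rule": "brute-force-ssh", "host": "web-1", "severity": 8},
    {"rule": "port-scan", "host": "db-1", "severity": 3},
]
needs_analyst, closed = triage(alerts)
print("escalated:", needs_analyst)
print("auto-closed:", closed)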

Searching and learning from history
Analysts can make decisions quickly and easily when they have historical data from past security incidents. That data should go beyond log data and should be analysed thoroughly. When it comes to security, generating useful alerts should not require complex manual work.
Tracking incidents using a closed loop
It is good to analyse metrics such as the response to an incident, the workload imposed on the analysts and the skills required over time, as this will help you improve the organisation’s security posture.
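A minimal sketch of one such closed-loop metric is shown below: mean time to respond per analyst, computed from past incidents so that workload differences show up in the numbers. The incident fields are illustrative assumptions.

# Toy closed-loop metric: mean time to respond (MTTR) per analyst.
from collections import defaultdict

incidents = [
    {"analyst": "alice", "opened": 100.0, "resolved": 160.0},
    {"analyst": "alice", "opened": 300.0, "resolved": 330.0},
    {"analyst": "bob",   "opened": 120.0, "resolved": 400.0},
]

def mean_time_to_respond(records):
    """Average (resolved - opened) per analyst, in the same time units."""
    durations = defaultdict(list)
    for incident in records:
        durations[incident["analyst"]].append(incident["resolved"] - incident["opened"])
    return {analyst: sum(times) / len(times) for analyst, times in durations.items()}

print(mean_time_to_respond(incidents))   # {'alice': 45.0, 'bob': 280.0}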

Encryption of Data in the Cloud

Many organisations are now looking at how to take advantage of cloud computing, but the security of their data remains a serious concern. Fortunately, there are several mechanisms that can help you encrypt your data in the cloud and ensure effective data protection.

As organisations grow, they experience security challenges that they lack the knowledge and experience to handle. Although most IT experts conclude that encrypting cloud data is the key to security, the approach can be daunting, especially for small to mid-sized businesses. Managing encryption keys in a cloud environment is not easy: the encryption key should be kept separate from the encrypted data, which is a challenge, particularly in a cloud environment that grows asymmetrically.

Encryption keys should be stored on a separate storage block or server. To stay protected against disasters, the encryption keys should be backed up in offline storage. The backup needs to be audited regularly, probably each month, to ensure that it is free from corruption. Although some keys expire automatically, others need to be refreshed, which calls for a refresh schedule. For improved security, the keys themselves should be encrypted, and the master and recovery keys should require multi-factor authentication.
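A minimal sketch of the key-separation idea described above is shown below, using the third-party Python cryptography package. Each piece of data gets its own data key, and only that data key, encrypted under a master key kept elsewhere, travels with the ciphertext. The storage locations are placeholders; a real deployment would keep the master key in an HSM or a key management service.

# Envelope encryption sketch: data is encrypted with a per-object data key,
# and the data key is itself encrypted with a master key stored separately.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

master_key = Fernet.generate_key()   # in practice: held in an HSM or KMS,
master = Fernet(master_key)          # well away from the encrypted data

def encrypt_object(plaintext: bytes):
    data_key = Fernet.generate_key()
    ciphertext = Fernet(data_key).encrypt(plaintext)
    wrapped_key = master.encrypt(data_key)    # only the wrapped key is stored
    return ciphertext, wrapped_key

def decrypt_object(ciphertext: bytes, wrapped_key: bytes) -> bytes:
    data_key = master.decrypt(wrapped_key)
    return Fernet(data_key).decrypt(ciphertext)

blob, wrapped = encrypt_object(b"customer record 42")
print(decrypt_object(blob, wrapped))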

It is often better for an organisation to let a third party manage the encryption keys rather than the organisation’s own IT department. If you encrypt data before uploading it to your cloud storage provider, and that data is later needed on a remote or mobile device that has no decryption keys, the downloaded data will be useless. If the company needs to share the data with a business partner without giving the partner direct access to the decryption keys, things become even more complex.

The following are some of the criteria which can be used for encrypting data in the cloud:

Exercise discretion

You have to determine what percentage of your organisation’s data is actually sensitive; you will find that the majority of it does not need to be encrypted. Ubiquitous encryption can interrupt application functionality, most notably search and reporting, which are very important in today’s cloud model. A discretionary approach to encryption secures the sensitive data without interfering with the advantages of emerging technologies.

Adherence to corporate security policy
Your organisation’s security policy can help you determine which information in your environment is sensitive and can then be used to shape your encryption strategy. Both the internal and external regulations relevant to the business have to be considered.

Automation-ready encryption
Once you have agreed on what needs to be encrypted, action should be taken. Security technologies should be leveraged to identify sensitive corporate information, and encryption should be used as a remediation tool for risky situations. Once this process has been automated, inappropriate exposure of data is mitigated in a content-aware manner.
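As a minimal sketch of this automated, content-aware remediation, the fragment below encrypts a payload only when a very crude sensitivity check fires. Both the looks_sensitive check and the key handling are simplified stand-ins for a real classifier and a real key management workflow.

# Toy content-aware remediation: encrypt a payload only when it appears to
# contain sensitive content. Requires: pip install cryptography
import re
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # placeholder; keep real keys in a KMS
cipher = Fernet(key)
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def looks_sensitive(text: str) -> bool:
    return bool(SSN.search(text))

def remediate(payload: str) -> bytes:
    """Encrypt sensitive payloads; pass others through unchanged."""
    if looks_sensitive(payload):
        return cipher.encrypt(payload.encode())
    return payload.encode()

print(remediate("quarterly roadmap"))            # plain bytes
print(remediate("employee SSN 123-45-6789"))     # encrypted token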

Consider the human element
Any data security mechanism must consider the needs of end users. If the corporate security program interferes with users’ normal workflow, they will look for alternatives that bypass the corporate network entirely.

Cloud providers and their potential SaaS partners should be asked which protocol they use when transmitting data. The SSL (Secure Sockets Layer) protocol is no longer the best choice, following the man-in-the-middle attack against it discovered in 2014. The remedy is to implement TLS rather than SSL, but the problem is that systems running older operating systems such as Windows XP cannot use the newer TLS versions. This has led some businesses to continue using SSL despite the risk of exposing confidential data. The main solution is to disable SSL completely, either on the server or the client side, although this makes the service inaccessible to systems that rely only on SSL.
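As a minimal sketch of enforcing this on the client side in Python (the host name is a placeholder), the standard ssl module can be told to refuse anything older than TLS 1.2:

# Client-side sketch: require TLS 1.2 or newer and refuse legacy SSL.
import socket
import ssl

context = ssl.create_default_context()            # verifies certificates by default
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3, TLS 1.0 and 1.1

def negotiated_version(host: str, port: int = 443) -> str:
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()                  # e.g. "TLSv1.3"

# Example with a placeholder host: print(negotiated_version("example.com"))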

Scaling and Economics of Scale for the Cloud

For some, moving computing needs into the Cloud is an obvious move; for others it is an important question to consider. The simple explanation is that the market becomes more efficient, in this case by moving separate databases to a central location. New technologies create a market for unused storage, and economies of scale allow a centralized provider to free up that capacity and eliminate waste. Pricing is not the only reason one might use the Cloud: whether you are outsourcing a whole IT department, a few simple tasks, the hardware side of your operation, or some combination, taking waste into account becomes a vital part of any entrepreneur’s job.

Most new entrepreneurs are growing their operations and want to keep costs as lean and as scalable as possible so the business can grow according to needs that may not be predictable. So you may be shopping for a PaaS offering to meet your particular needs, and one thing to consider is what you want to keep in-house and which services you want to pay for. Your PaaS provider can supply a number of services that, when you started out, would largely have been wasted resources, and in the future you have the capacity to move to a different system as your needs change. This flexibility is the essence of the scalability of the information economy in general; it gives a whole new model for information that wasn’t available in the past.

Economies are driven by many factors, and one of them is scale, which is what a dedicated pool of servers allows for. By hosting one large server farm and adjusting how data is distributed between machines, the Cloud provider can eliminate the waste that would accumulate if each company hosted its own dedicated server. The advantages of this model are that it saves start-up money for the client, gives clients greater flexibility for their needs, and gives a third party a new way to profit. This process is one of the factors driving our economy; in fact, Adam Smith described the underlying phenomenon. Smith gave an example of dividing tasks among different individuals and found that, by doing so, they were able to produce more; this division of labour is closely related to economies of scale. In a more industrial world we see the same process in factories and throughout the economy. The scale of the Cloud provider’s servers allows it to make more profit than each client loses individually.

By some accounts, there are two ways to scale your operation using Cloud resources: horizontal scaling and vertical scaling. Vertical scaling is the ability to add more hardware resources to a machine, while horizontal scaling is the ability to add more machines and have the code make use of those increased resources. On the one hand you may need a more robust pool of hardware; on the other, your software has to be able to exploit what you add, since using greater quantities of RAM demands a more agile program that can move between sources of data. An operation that is scalable in both of these ways can use the Cloud to its full potential. Different Cloud providers support these scaling effects differently: for instance, a PaaS Cloud service will handle both horizontal and vertical scaling, while an IaaS Cloud provider may only help you scale your operation vertically.
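A minimal sketch of the distinction in configuration terms is shown below; the field names and numbers are purely illustrative. Vertical scaling changes the size of each instance, horizontal scaling changes how many instances there are, and the application must be written so that the extra instances actually share the work.

# Illustrative only: vertical scaling grows one instance, horizontal scaling
# adds instances, and the workload must be partitionable to benefit from it.
import math
from dataclasses import dataclass

@dataclass
class Deployment:
    instances: int
    ram_gb_per_instance: int

    def scale_vertically(self, extra_ram_gb: int):
        self.ram_gb_per_instance += extra_ram_gb   # bigger machines

    def scale_horizontally(self, extra_instances: int):
        self.instances += extra_instances          # more machines

def partition(items, deployment: Deployment):
    """Split work across instances; only helps if the code can partition it."""
    chunk = math.ceil(len(items) / deployment.instances)
    return [items[i:i + chunk] for i in range(0, len(items), chunk)]

d = Deployment(instances=2, ram_gb_per_instance=8)
d.scale_horizontally(2)          # now 4 instances share the work
print(partition(list(range(10)), d))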

DLP (Data Loss Prevention) in the Cloud

Most organizations have moved their sensitive data to the cloud, but they lack policy controls for that data. Research has shown that 21% of the documents uploaded to the cloud contain sensitive data such as protected health information (PHI), personally identifiable information (PII), intellectual property or payment card data, which creates cloud compliance concerns. In 2014, breaches of cloud data rose.
Most organizations have invested in data loss prevention tools to protect against the loss or theft of their on-premises information and to adhere to data compliance laws. The problem is that most of these tools were built to protect data contained in emails and file servers, meaning that they fail to address mobile security and cloud governance, since data is constantly passed to unsanctioned cloud services that are regularly accessed from unsecured devices. It has been found that the average organization uploads 3.1GB of data each day, and it is expected that a third of organizational data will be in the cloud by 2016. Migrating unprotected data to the cloud is risky, so organizations need to extend their data loss prevention policies to cover data in the cloud and protect it from exposure.
Whenever you are addressing DLP, consider the following requirements:
1. Know the activity-level usage in your apps, and then use DLP to identify the activities dealing with sensitive data, anomalies and non-compliant behavior (a minimal detection sketch follows this list).
2. The cloud DLP software to be used should know the context which surrounds all the activity whenever you are dealing with sensitive data.
3. Restrictions and controls should be formulated in the organization to ensure that sensitive data is used safely.
4. Cloud activities should be tracked at app, user and activity level for compliance and auditing purposes.
5. Ensure that sensitive content residing in the cloud or moving to cloud apps is encrypted.
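A minimal detection sketch for the first requirement is shown below. The regular expressions are simplified illustrations, far cruder than the thousands of data identifiers a commercial DLP product ships with.

# Toy DLP content scanner: flag documents that appear to contain PII or
# payment card numbers before they are uploaded. Patterns are simplified.
import re

PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan(text: str) -> dict:
    """Return {pattern_name: match_count} for every pattern that fires."""
    hits = {}
    for name, pattern in PATTERNS.items():
        found = pattern.findall(text)
        if found:
            hits[name] = len(found)
    return hits

doc = "Contact jane@example.com, SSN 123-45-6789, card 4111 1111 1111 1111"
print(scan(doc))   # {'us_ssn': 1, 'card_number': 1, 'email': 1}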


A number of tools for preventing data loss in the cloud have been developed. With Netskope Active Cloud, an organization's sensitive data can be protected from breaches and leaks. The tool provides advanced data loss prevention mechanisms such as custom regular expressions, over 3,000 data identifiers, support for over 500 file types, double-byte character support for international content, proximity analysis, exact match and fingerprinting. Once the tool detects sensitive data, it uses context to narrow the content down, increasing detection accuracy and reducing false positives.
Skyhigh is another DLP tool; it extends an organization's protection against data loss to the data stored in the cloud. With Skyhigh, DLP policies are enforced in real time, and we are given the capability to run an on-demand scan of the data already stored in the cloud to find anything that falls outside the cloud policy. When configuring DLP policies, you can choose from a number of policy actions such as quarantine, alert, tombstone, or blocking the sensitive data from being uploaded to the cloud service at all. With Skyhigh, you are also free to leverage the policies you have created in other DLP solutions such as EMC, Symantec, Websense and Intel McAfee, using closed-loop remediation.
Symantec also provides data loss prevention mechanisms for the cloud. It has partnered with Box, an online file-sharing tool, which extends the reach of its DLP functionality, and the tool is also expected to extend data loss prevention to sensitive data stored on mobile devices.

Cloud Mobile Media

The increasing popularity of mobile devices such as tablets and smartphones, coupled with the growing availability of wireless internet, has led to rising demand for rich media experiences. The trend has also increased mobile traffic, the majority of it video. However, mobile video comes with challenges. First, mobile devices have limited on-board resources for media-intensive coding and processing tasks. Second, the wireless channel is unreliable and time-varying, which limits the bandwidth between mobile devices and the content delivery systems working in the backend. There is a great need to balance the increasing demand for mobile media applications against the weaknesses of the available media delivery networks.

The design of a mobile media network is determined by the trade-off between quality of service (QoS) and cost. The initial cost of setting up the mobile media network and the subsequent maintenance costs should be kept as low as possible; if the cost is too high, it feeds into the price of the service, which hurts how well mobile media penetrates or is adopted by end users. At the same time, the quality of service has to be kept high, since that is what end users need in order to be happy with what they pay for. For this balance to be struck, new ideas and emerging trends have to be embraced.

Recently, cloud computing technology has introduced mechanisms for reducing the cost of deploying a mobile media network. In a cloud computing environment, system resources can be allocated dynamically so that the elastic demand of an application is met in real time. The cloud computing paradigm has begun to enhance the mobile media experience, and this is where cloud mobile media comes from.

However, the concept of cloud mobile media introduces new challenges. One example is that cloud computing platforms are usually built on off-the-shelf hardware whose performance and reliability cannot be guaranteed. The cloud also raises concerns about data security and privacy. Most of these challenges, though, stem from dealing with data on mobile devices.

The following are some of the technical challenges facing the cloud mobile media network:

  1. Scalability - the system should be able to handle many users, many devices and large volumes of content.
  2. Heterogeneity - the content supported should come in diverse formats, users have diverse preferences, and devices take diverse forms.
  3. Reliability - failures do occur, so the system should be designed with redundancy in order to keep offering services after failures; even when the unreliable wireless channels develop issues, the system should continue serving the end users.
  4. Usability - the system should support a variety of users on a variety of technologies. The user interface should be easy to learn, intuitive, and suited to mobile devices with limited interaction options.
  5. Security - privacy and digital rights management are a serious challenge in cloud mobile media solutions.

The cloud mobile media system should be designed to meet the increasing demands of users, but the resources available for doing so are limited. Mechanisms have been developed to ease this problem: a mobile cloud edge can connect resource-constrained mobile devices to the resource-rich cloud infrastructure. Examples of such edge elements include Wi-Fi access points, base stations and other wireless edge devices.