Logging Framework for Forensic Environments in Cloud Computing

Cloud computing has attracted many researchers. Knowing the conditions under which data is stored or processed in data centres is what makes the field interesting for cloud computing forensics. The use of cloud computing in forensics has increased as a result of the emergence of new technologies. The architecture of a cloud logging system is layered, composed of five layers, each with its own task. Let us discuss these layers:

Management layer
The modules responsible for most operations in the cloud are found at this level, together with those targeted at forensics, such as the “Cloud Forensic Module”.

Virtualisation layer
This is the second layer in the architecture, and it is where we find the servers and workstations that host our virtual machines. Although the virtual machines are the main building blocks of the environment, virtualisation should also be enabled in the hardware. A Local Logging Module should be installed, through the Cloud Forensic Interface, on each physical machine. This module is tasked with gathering the raw data from the virtual machines being monitored. The investigator can adjust the amount of data collected: they can select a particular virtual machine to monitor, or choose to monitor all of the activity taking place in it.
For the data to be gathered reliably from a virtual machine, the Local Logging Module has to be fully integrated with the hypervisor running on the physical machine. We have to be careful about which data we intercept from the virtual machine before sending it on for further processing: any activity can be intercepted, but interception carries penalties in processing speed and time.
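
As an illustration, a minimal sketch of such a Local Logging Module is shown below. The class and method names are hypothetical, not part of any real hypervisor API; the point is the per-VM filtering that keeps the interception penalty down.

```python
from dataclasses import dataclass, field

@dataclass
class LocalLoggingModule:
    """Hypothetical sketch of the module gathering raw VM data."""
    # VM IDs selected by the investigator; None means monitor all activity.
    monitored_vms: set | None = None
    buffer: list = field(default_factory=list)

    def intercept(self, vm_id: str, raw_event: bytes) -> None:
        # Drop events from VMs the investigator did not select,
        # limiting the processing-speed penalty of interception.
        if self.monitored_vms is not None and vm_id not in self.monitored_vms:
            return
        self.buffer.append((vm_id, raw_event))

    def flush_to_storage_layer(self, send) -> None:
        # Forward the raw data unmodified; the storage layer expects
        # it in the form in which it was gathered from the hypervisor.
        for record in self.buffer:
            send(record)
        self.buffer.clear()
```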

Storage layer
This is the third layer in the logging architecture. It is where the raw data sent by the modules in the virtualisation layer is stored. The logging modules send the raw data in the form in which it was gathered from the hypervisor. Because of this, the layer has functionality similar to that of a distributed storage system.
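
A minimal sketch of this layer's behaviour, assuming a simple append-only store keyed by virtual machine (a plain dict stands in for the distributed back end; the names are illustrative):

```python
import time
from collections import defaultdict

class RawLogStore:
    """Illustrative append-only store for raw hypervisor data."""

    def __init__(self):
        self._shards = defaultdict(list)

    def append(self, vm_id: str, raw_event: bytes) -> None:
        # Store the event exactly as received from the logging module,
        # tagged with an arrival timestamp for later ordering.
        self._shards[vm_id].append((time.time(), raw_event))

    def read_all(self, vm_id: str):
        # The analysing layer reads the untouched raw records back.
        return list(self._shards[vm_id])
```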

Analysing layer
This is the fourth layer in the logging architecture. It is responsible for ordering, analysing, aggregating and processing the data stored in the previous layer. These processes use computing resources intensively, which is why the analysis is done offline and made available to the investigators as soon as the job is ready. Once the process is completed, the investigators have all the relevant information about what happened on the remotely monitored machine, and they can navigate through the activities of the virtual machine to establish what took place. In most cases the layer is implemented in the form of a distributed computing application, especially when great computing power is needed.
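
A sketch of the offline ordering and aggregation step. Records are assumed to arrive as (timestamp, raw_event) pairs from the storage layer, and `parse` is a hypervisor-specific function, left abstract here, that turns a raw event into a (category, detail) tuple:

```python
from itertools import groupby

def analyse(records, parse):
    """Order, aggregate and summarise raw records offline."""
    # Ordering: reconstruct the timeline of the VM's activity.
    ordered = sorted(records, key=lambda r: r[0])
    timeline = [(ts, *parse(ev)) for ts, ev in ordered]
    # Aggregation: count events per category so the investigator
    # can navigate the machine's activity at a glance.
    by_category = {
        cat: len(list(group))
        for cat, group in groupby(sorted(entry[1] for entry in timeline))
    }
    return timeline, by_category
```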

Results storage layer
This is the fifth layer in the architecture. It is where the results produced by the rest of the layers are stored. It is the layer at which the forensic investigator interacts with the snapshots of the virtual machines they are monitoring, by use of the Cloud Forensic Module provided by the management layer.
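
For illustration, the investigator-facing interaction might look like the following. CloudForensicModule and its methods are hypothetical stand-ins for the management-layer component, and the results store is assumed to be a per-VM dictionary of analysis outputs:

```python
class CloudForensicModule:
    """Hypothetical investigator-facing entry point."""

    def __init__(self, results_store: dict):
        self._results = results_store  # fifth-layer results storage

    def list_snapshots(self, vm_id: str):
        # Snapshots the investigator may browse for this VM.
        return list(self._results.get(vm_id, {}).get("snapshots", []))

    def activity_between(self, vm_id: str, start: float, end: float):
        # Navigate the reconstructed timeline within a time window.
        timeline = self._results.get(vm_id, {}).get("timeline", [])
        return [entry for entry in timeline if start <= entry[0] <= end]
```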

Improving SOC Efficiency to Bridge the Security Skills Gap

Security alert storms are on the rise. Most organisations have chosen to deploy more security products, and each product comes with its own security alerts, workflows and interfaces.

These enterprises have gone ahead and recruited more security analysts to deal with the increasing security alerts. However, most IT professionals lack security skills, which is why enterprises have not found enough security analysts. Research has shown that the need for security analysts is increasing by 18% annually.

The question now is: how do enterprises solve this problem? Automation is the best solution. It works by reducing the amount of work that an analyst is expected to perform, though it remains hard for a junior analyst to pick up the tricks of the trade.

The following are some of the measures that can be taken to alleviate the skill-set gap:

Sharing knowledge and collaboration
Most sales and marketing tools are focused on collaboration. A tool that has succeeded in sales and marketing gives you the necessary information about the actions of customers, and anyone who uses the system can share their experience with other users. Each SOC has to be ready to learn from peer analysts in the same way, and to make that learning part of the SOC operations workflow. When you build collaboration into the SOC workflow, you will be able to detect duplicate incidents under investigation, and junior analysts can learn from the senior analysts.
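
As a rough illustration, duplicate incidents under investigation could be flagged by comparing a few identifying fields. The Incident fields below are assumptions for the sketch, not a specific product's schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Incident:
    incident_id: str
    source_ip: str
    alert_type: str
    target_asset: str

def find_duplicates(open_incidents):
    """Pair up open incidents that share identifying fields, so two
    analysts do not investigate the same event independently."""
    seen = {}
    duplicates = []
    for incident in open_incidents:
        key = (incident.source_ip, incident.alert_type, incident.target_asset)
        if key in seen:
            duplicates.append((seen[key], incident))
        else:
            seen[key] = incident
    return duplicates
```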

Training and play-books
Creating play-books is valuable, as they let analysts read the processes described in them and adhere to those processes in their daily practice. Most sales and marketing tools keep individuals working hard, and in the proper way, by constantly reminding them of their next step and of when they are expected to involve or collaborate with others on the team. In a SOC this has to be done carefully so that the analyst's work is not interfered with in any way. A play-book should always promote best practices that have been developed over a period of time rather than put together in haste. Play-books should not be seen as static files sitting in your documents, but as a repository representing events that have taken place over time. They improve the productivity of analysts, and at the same time make it easier for them to track future events.
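
A play-book treated as a living repository rather than a static file might be modelled as below. The step structure and field names are assumptions for illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class PlaybookStep:
    description: str
    # Notes accumulated from past incidents keep the play-book a
    # repository of events rather than a static document.
    lessons: list = field(default_factory=list)

@dataclass
class Playbook:
    name: str
    steps: list = field(default_factory=list)

    def record_lesson(self, step_index: int, note: str) -> None:
        # Best practices accrete over time, one incident at a time.
        stamped = f"{datetime.now():%Y-%m-%d}: {note}"
        self.steps[step_index].lessons.append(stamped)
```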

Automation
Automation is best applied when tasks are repetitive and do not require any human intervention. There are numerous such tasks in security, and they take up unnecessary time. In some cases, incidents will go uninvestigated because the number of alerts overwhelms the available security personnel. It is always good to automate such tasks so that analysts can focus on the work that genuinely needs them.
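
A minimal sketch of automating one such repetitive task, here auto-closing alerts that match a known-benign pattern. The alert fields and the benign list are illustrative assumptions:

```python
KNOWN_BENIGN = {"scheduled_backup", "patch_window_reboot"}

def triage(alerts):
    """Split alerts into those an analyst must see and those that
    can be closed without human intervention."""
    needs_analyst, auto_closed = [], []
    for alert in alerts:
        # Repetitive, well-understood alerts are closed automatically,
        # freeing analyst time for cases that need judgement.
        if alert["type"] in KNOWN_BENIGN and alert["severity"] == "low":
            auto_closed.append(alert)
        else:
            needs_analyst.append(alert)
    return needs_analyst, auto_closed
```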

Searching and learning from history
The analyst can make decisions easily and quickly from the historical data they hold on past security incidents. That data should be more than log data, and it should be analysed well. Done right, surfacing the relevant history for an alert should not require complex work from the analyst.
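
For example, a simple lookup over past incidents can tell an analyst at once whether an indicator has been seen before. The record format here is an assumption:

```python
def seen_before(indicator: str, history):
    """Return past incidents mentioning the given indicator.

    `history` is assumed to be a list of dicts, each with an
    'indicators' set and a 'resolution' note from the original
    investigation."""
    return [past for past in history if indicator in past["indicators"]]

# Usage: matches = seen_before("203.0.113.7", incident_history)
# A non-empty result hands the analyst the earlier resolution notes.
```
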
Tracking incidents using a closed loop
It is good to analyse metrics such as the response to an incident, the workload imposed on the analyst and the skills required over time; this will help you improve the security posture of the organisation.
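
As a sketch, closing the loop could mean recording a few metrics per closed incident and watching their trend over time. The field names are illustrative, and the incident list is assumed to be non-empty:

```python
from statistics import mean

def posture_trend(closed_incidents):
    """Aggregate per-incident metrics; comparing the result across
    reporting periods shows whether the security posture improves."""
    return {
        "avg_response_minutes": mean(i["response_minutes"] for i in closed_incidents),
        "avg_analyst_hours": mean(i["analyst_hours"] for i in closed_incidents),
        "skills_exercised": set().union(*(i["skills_used"] for i in closed_incidents)),
    }
```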