Saturday, 26 December 2009

Forensic Cop Journal 2(3): Standard Operating Procedure of Acquisition on Ubuntu

Introduction

When dealing with storage media evidence, a digital forensic analyst must handle the acquisition process carefully. A single mistake casts doubt on every subsequent step, and the results may even be rejected by the court. Because acquisition is so important in digital forensics, it must be handled properly. To ensure that the output of the acquisition process is reliable, this journal discusses how to perform it properly on an Ubuntu Linux machine.

The acquisition process is usually performed with forensic applications such as FTK Imager from AccessData or EnCase from Guidance Software, running under the Ms Windows operating system. This journal gives digital forensic analysts a different perspective: how to do it on an Ubuntu analysis workstation. The output produced on an Ubuntu machine is the same as the output yielded by the applications above, so the analyst has more than one way to perform the acquisition.
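As an illustration, the most basic form of acquisition on Ubuntu is a raw (dd-style) image followed by hash verification. This is only a sketch: the device name /dev/sdb and the file names are assumptions, and the real device must be identified first (e.g. with fdisk -l). dcfldd, discussed elsewhere on this blog, can additionally hash the data on the fly.

```shell
# Acquire a raw image of the evidence drive (assumed to be /dev/sdb).
dd if=/dev/sdb of=evidence.dd bs=4096 conv=noerror,sync

# Verify the acquisition: the hash of the image must match the source.
md5sum /dev/sdb evidence.dd
```

If the two hashes match, the image is a bit-for-bit copy of the evidence and can be analysed instead of the original drive.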

One philosophy of digital forensics that the analyst must understand is never to rely on a single application for the analysis. The analyst should have as many forensic applications as possible for any given forensic job; with such a toolset, the analyst has many choices and can select the one or several that give the best results. To use these applications properly, the analyst should also understand digital forensic procedure well.

Step 1: Preparing the machine to be forensically sound write protected

After the boot process finishes, open a command console (terminal) and type the following command to become the superuser, which grants the privilege to modify any file on the machine.

sudo -s


After that, type the command below:

gedit /etc/fstab

This command edits the file fstab stored in the folder /etc, in order to configure the "write protect" condition. Opening this file also lets you verify whether the "write protect" configuration has already been applied. Under "write protect", any storage medium (hard disk, flash disk and so on) attached to the analysis machine through a USB port is protected from accidental or deliberate changes. No action applied to the evidence media will affect its content; the contents remain unchanged during the acquisition process.

If the file has not yet been configured for "write protect", add the lines below to /etc/fstab, for example at the end of the file.

# Read Only Configuration
/dev/sdb     /media/sdbro     auto   noauto,user,ro,nosuid,nodev,uhelper=hal   0   0
/dev/sdb1   /media/sdb1ro   auto   noauto,user,ro,nosuid,nodev,uhelper=hal   0   0
/dev/sdb2   /media/sdb2ro   auto   noauto,user,ro,nosuid,nodev,uhelper=hal   0   0
/dev/sdb3   /media/sdb3ro   auto   noauto,user,ro,nosuid,nodev,uhelper=hal   0   0
/dev/sdb4   /media/sdb4ro   auto   noauto,user,ro,nosuid,nodev,uhelper=hal   0   0
/dev/sdb5   /media/sdb5ro   auto   noauto,user,ro,nosuid,nodev,uhelper=hal   0   0


/media/sdbro is the mount point for the whole evidence device, which is usually recognised as /dev/sdb, while /media/sdb1ro to /media/sdb5ro are the mount points for the individual partitions /dev/sdb1 to /dev/sdb5. Five partition entries are listed to anticipate the possibility that the storage medium has up to five partitions. To create these mount points, type the following commands.

mkdir /media/sdbro
mkdir /media/sdb1ro
mkdir /media/sdb2ro
mkdir /media/sdb3ro
mkdir /media/sdb4ro
mkdir /media/sdb5ro


After the configuration above has been added to /etc/fstab, save the file. The machine is now ready for forensically sound write protection. For further information, please access the forensic journal related to this topic at http://forensiccop.blogspot.com.
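The configuration can then be checked by attaching a test device and mounting it through one of the entries above. This is a sketch only, assuming the device was detected as /dev/sdb1; it requires root privileges.

```shell
# Check that the write protection works (device name is an assumption).
mount /media/sdb1ro          # fstab supplies the ro option for this mount point
mount | grep sdb1            # the options shown should include "ro"

# Any write attempt must now fail with "Read-only file system":
touch /media/sdb1ro/canary   # expected to fail

umount /media/sdb1ro         # detach cleanly when finished
```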

For further information on this journal, please access http://www.scribd.com/doc/24519235/Forensic-Cop-Journal-2-3-2009-Standard-Operating-Procedure-of-Acquisition-on-Ubuntu, where you will find the full version. I hope this journal is useful for those who would like to experience the world of digital forensics.

Saturday, 19 December 2009

Forensic Cop Journal 2(2): Standard Operating Procedure of Audio Forensic

Introduction


There are many types of digital evidence that a digital forensic analyst may encounter when dealing with computer crime or computer-related crime. Besides files, videos, digital images, encrypted items, unallocated clusters, slack space and so forth, digital audio files may also have to be analysed. In certain cases, audio files become significant evidence of the perpetrators' involvement in a criminal case. Such a file usually contains recorded speech between two or more people discussing a plan to commit a crime, so the analyst should be able to reveal this conversation to the criminal investigators. With this evidence, the investigators have a strong basis to prove that the perpetrators planned the crime.

Revealing the conversation contained in audio files is not an easy job. The analyst should follow strict audio forensic guidelines so that the output of the analysis is acceptable to the court; a single careless step may cause the results to be rejected. To achieve the best possible results, this journal discusses a Standard Operating Procedure (SOP) of Audio Forensics. With this SOP, analysts have good step-by-step guidance for performing audio forensic analysis, so that the results are reliable.

The SOP of Audio Forensics comprises five steps, namely acquisition, authentication, audio enhancement, decoding and voice recognition. Each step is explained below.

To obtain the complete explanation of each step, please access this link http://www.scribd.com/doc/24292835/Forensic-Cop-Journal-2-2-2009-Standard-Operating-Procedure-of-Audio-Forensic to download the PDF version. Hopefully this journal helps digital forensic analysts perform audio forensic analysis properly.

Tuesday, 1 December 2009

Forensic Cop Journal 2(1): Ubuntu Forensic

Background

Ubuntu Forensics is the use of Ubuntu for digital forensic purposes. Because it provides a wide range of forensic tools, as well as anti-forensic and cracking tools, it is a reliable platform for investigating computer crime and analysing digital evidence. The significant difference between forensic applications on Ubuntu and on Ms Windows is that the Ubuntu applications are free, while those running under Ms Windows are commercial; the results obtained from them are essentially the same. A digital forensic analyst should therefore understand the use of Ubuntu forensic applications as well as Ms Windows applications. Analysts who do will have many forensic tools to apply in an investigation or analysis: when one tool does not give satisfactory results, they can use other tools, under either Ubuntu or Ms Windows, to obtain the best results.

This journal is written to broaden the forensic view among forensic professionals. It is expected that they will explore the packages provided on Ubuntu for forensic purposes. They should know that it is not only Ms Windows forensic applications that can be used for digital forensics; many tools on Ubuntu can do the same thing with the same results, and in some respects Ubuntu gives stronger results. For instance, dcfldd can be used for forensic imaging in several modes: it can image certain ranges of blocks as desired, as well as the whole drive, a feature not provided by imaging applications running under Ms Windows. Another instance is image metadata analysis through Exif data. On Ubuntu, several tools can analyse image Exif data, such as exif, exiftool and metacam, and there are also tools that can manipulate Exif values, such as exiv2 and libjpeg-progs. All these tools are free.
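As an illustration of the block-range imaging mentioned above, a dcfldd run might look like the following sketch. The device name and the skip/count values are purely illustrative, and plain dd accepts the same operands (without the built-in hashing).

```shell
# Image only a range of blocks and hash it on the fly with dcfldd
# (device name and block range are illustrative assumptions):
dcfldd if=/dev/sdb of=range.dd bs=512 skip=2048 count=4096 \
       hash=md5 hashlog=range.md5

# Plain dd accepts the same skip/count operands, without the hashing:
dd if=/dev/sdb of=range.dd bs=512 skip=2048 count=4096
```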

One essential reason why the author frequently uses Ubuntu for digital forensic purposes such as forensic imaging is its forensically sound write protection. Every digital forensic analyst must apply write protection when dealing with storage drive evidence, so that the contents of the drive are not changed either accidentally or deliberately. Once the contents are changed, the subsequent digital forensic steps become doubtful or may even be refused by the court, unless the analyst can explain comprehensively why the change occurred (i.e. its relevance) and what its implications are; this is usually the case in live analysis under strict procedures. In dead (i.e. post mortem) analysis, the analyst is still required to keep the contents of the hard drive unchanged. To achieve this, Ubuntu can be modified to give forensically sound write protection by editing the file /etc/fstab with the read-only mount option, so that whatever is done on the evidence drive does not change its contents. When a text file is accessed, this action does not change the MAC (i.e. Modified, Accessed and Created) times at all; they remain unchanged even though the file has been read, because the modified /etc/fstab write-protects the drive against any action committed by the analyst.
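The claim about MAC times can be checked with stat. The path below is illustrative, assuming a partition mounted read-only as described.

```shell
# Reading a file on the read-only mount must not alter its MAC times
# (the mount point and file name are illustrative assumptions).
stat /media/sdb1ro/letter.txt        # note the Access/Modify/Change times
cat /media/sdb1ro/letter.txt > /dev/null
stat /media/sdb1ro/letter.txt        # the times are unchanged
```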

With this feature, the analyst can do many things, such as live analysis directly on the drive in order to speed up the investigation. This is frequently done when many drives are involved as evidence: following the regular digital forensic procedure, forensic imaging of each drive would take a long time. The shortcut is to apply forensically sound write protection and then read and analyse the drives directly, so that the analyst can identify which of the drives has a strong relationship with the case. Once that drive is identified, the analyst can carry out further analysis on it.

Below are the tools that can be used for digital forensic analysis, anti-forensics and cracking: twenty-five tools for forensic purposes, fifteen for anti-forensics and ten for cracking. There are other tools whose descriptions relate to these purposes, but they are not mentioned in this journal. One of the powerful tools often used by the author is Autopsy, the GUI version of The Sleuth Kit created by Brian Carrier. What commercial applications running under Ms Windows, such as EnCase and FTK, discover when analysing digital evidence is the same as what Autopsy finds.

The description of each tool below is quoted directly from Synaptic Package Manager, created by Conectiva S/A and Michael Vogt (April 2009). This application makes it easy for Ubuntu users to install or uninstall Ubuntu packages. Users who are in doubt about the use of a certain package should read the description given for it.

The full version of this journal can be downloaded at http://www.scribd.com/doc/23406648/Forensic-Cop-Journal-2-1-2009-Ubuntu-Forensic. I hope this journal is useful for anybody who would like to explore Ubuntu for digital forensic purposes.

Sunday, 22 November 2009

Audio Forensic with Cedar

A few weeks ago, I joined the Audio Forensic Training jointly conducted by the Forensic Laboratory Centre of the Indonesian National Police Headquarters and Cedar Cambridge. In this training, we practised the latest techniques in noise filtering using the Cedar instrument installed in the Audio Laboratory at my office. According to Dr. David Robinson, who was also the instructor of this training, the audio lab we have is the best Cedar lab in South East Asia.

In this training, we practised removing a wide range of noise. In some cases, the recorded voice is not clear because of noise; sometimes it cannot be heard at all, because the noise is much louder than the human voice. With the assistance of Cedar, which provides many powerful filtering modules, we succeeded in removing the noise and making the human voice clear to listen to. Besides this, Cedar also provides a feature to recognise the editing line when a voice recording has been edited. Cedar can detect the time at which the edit occurred by displaying a vertical line; through this line, we can see that there is a change between what comes before and after it. If this happens, the voice recording is no longer original. The editing could have been done to remove unwanted parts or to add some parts. In that case, the recording could be rejected and not accepted for forensic analysis, because its content has been changed.

Cedar also provides a spectrogram for each word said. This is useful for voice identification or verification: with this feature, we can apply forensic phonetics to compare the spectrogram of a questioned voice against a known voice, so that we can determine whose voice it is. In this case, we follow a technique based on the FBI procedure for forensic phonetics, described by Bruce E. Koenig in the 1986 journal article "Spectrographic Voice Identification: A Forensic Survey". In that article he also explained that, for meaningful results, the comparison should be performed on at least 20 different words that are pronounced similarly. Below is the complete quotation from his article about the comparison procedure:


(1) Only original recordings of voice samples were accepted for examination, unless the original recording had been erased and a high-quality copy was still available.
(2) The recordings were played back on appropriate professional tape recorders and recorded on a professional full-track tape recorder at 7 1/2 ips. When possible, playback speed was adjusted to correct for original recording speed errors by analyzing the recorded telephone and AC line tones on spectrum analysis equipment. When necessary, special recorders were used to allow proper playback of original recordings that had incorrect track placement or azimuth misalignment.

(3) Spectrograms were produced on Voice Identification, Inc., Sound Spectrographs, model 700, in the linear expand frequency range (0-4000 Hz), wideband filter (300 Hz) and bar display mode. All spectrograms for each separate comparison were prepared on the same spectrograph. The spectrograms were phonetically marked below each voice sound.
(4) When necessary, enhanced tape copies were also prepared from the original recordings using equalizers, notch filters, and digital adaptive predictive deconvolution programs [13,14] to reduce extraneous noise and correct telephone and recording channel effects. A second set of spectrograms was then prepared from the enhanced copies and was used together with the unprocessed spectrograms for comparison.
(5) Similarly pronounced words were compared between two voice samples, with most known voice samples being verbatim with the unknown voice recording. Normally, 20 or more different words were needed for a meaningful comparison. Less than 20 words usually resulted in a less conclusive opinion, such as possibly instead of probably.
(6) The examiners made a spectral pattern comparison between the two voice samples by comparing beginning, mean and end formant frequency, formant shaping, pitch, timing, etc., of each individual word. When available, similarly pronounced words within each sample were compared to insure voice sample consistency. Words with spectral patterns that were distorted, masked by extraneous sounds, too faint, or lacked adequate identifying characteristics were not used.
(7) An aural examination was made of each voice sample to determine if pattern similarities or dissimilarities noted were the product of pronunciation differences, voice disguise, obvious drug or alcohol use, altered psychological state, electronic manipulation, etc.
(8) An aural comparison was then made by repeatedly playing two voice samples simultaneously on separate tape recorders, and electronically switching back and forth between the samples while listening on high-quality headphones. When one sample had a wider frequency response than the other, bandpass filters were used to compensate during at least some of the aural listening tests.
(9) The examiner then had to resolve any differences found between the aural and spectral results, usually by repeating all or some of the comparison steps.
(10) If the examiner found the samples to be very similar (identification) or very dissimilar (elimination), an independent evaluation was always conducted by at least one, but usually two other examiners to confirm the results. If differences of opinions occurred between the examiners, they were then resolved through additional comparisons and discussions by all the examiners involved. No or low confidence decisions were usually not reviewed by another examiner.



According to his survey, only 1 false identification (i.e. 0.31%) was found among 318 forensic phonetics cases, while only 2 false eliminations (i.e. 0.53%) were found among 378 cases. These data indicate that the FBI technique is reliable for voice identification and verification.

For running this forensic phonetics procedure, Cedar is as reliable as it is for noise filtering and editing-line recognition.


Good luck...!



Thursday, 19 November 2009

Face Sketching

This material is actually my presentation slides from when I was asked to be an instructor on the Frontline Forensic Course in Indonesia, which is being conducted from 16 November to 4 December 2009. In this course, I deliver teaching materials on Digital Forensics, Face Sketching, Forensic Photography, Fire Investigation and GPS. In this post, I describe only my materials on Face Sketching. The full version of this material can be downloaded at this link http://www.scribd.com/doc/22742609/Face-Sketching.

Face sketching is required when criminal investigators would like to obtain a description of somebody based on witness testimony. It is performed to produce the suspect's face so that the investigators can recognise him easily, and even the public can know him when seeing him anytime and anywhere. To this end, the results of face sketching are distributed not only to law enforcement agencies but also to the public. When people see the suspect, they contact a law enforcement agency to report his whereabouts, and the investigators can then arrest him quickly.

However, there are also problems that can hamper the process of face sketching, such as:
1. The witness may have seen the suspect only at a glance, which is not enough to identify him perfectly, as the witness cannot give the facial components in detail.
2. The witness may have seen the suspect from behind and/or from the right or left side, so that the description of the suspect's face is insufficient.
3. The limits of the witness's memory. Even when a witness saw the suspect, there is no guarantee that he can recall the suspect's face in detail; the description may be distorted because the witness cannot remember each facial component.
4. The lack of light. When a witness saw the suspect in a dark situation (i.e. with little light), he cannot describe the suspect's face perfectly because he could not see it well.

In the process of face sketching, there are two components which should be considered, namely:
1. General characteristics
2. Class characteristics

General characteristics show the general pattern of facial components such as:
1. Eyes
2. Noses
3. Eyebrows
4. Hair
5. Lips
6. Head shapes
7. Jaw shapes
8. Mustaches
9. Beards
10. Eye lines
11. Smile lines

Meanwhile, class characteristics show a specific pattern of a component and/or a specific mark on the face, such as:
1. Mole or beauty spot
2. Cockeye or a cast in the eye
3. Harelip
4. Scar or wound print

The techniques developed in the process of face sketching are:
1. Manual, which requires a good painter.
2. Automatic, which requires a reliable computer application.

To obtain a detailed description of face sketching, please click the link above. Hopefully it is useful for anybody who would like to explore face recognition. Good luck...!
 

Monday, 2 November 2009

Digital Forensic: State of the art

It has been a long time since I posted a new topic on this blog. I apologise for this; I have been so busy with crime scene processing and digital forensic analysis.

In this post, I would like to describe digital forensics in more detail, from the investigation flowchart and digital forensic procedure to a case study. It takes the form of a presentation to be delivered at the British Council, Jakarta on 7 November 2009, on the occasion of the 25th anniversary of the British Chevening Scholarship Scheme. I am invited to deliver this topic because I was awarded a Chevening scholarship for the MSc in Forensic Informatics at the University of Strathclyde, UK in 2008/2009. The presentation can be downloaded at http://www.scribd.com/doc/22000028/Digital-Forensic-State-of-the-Art-BC071109.

On slide 3, I explain that in the investigation of computer crime and computer-related crime, digital forensics gives full technical support to criminal investigators in solving the case. The digital evidence found by the digital forensic analyst is the basis on which the investigator decides further investigative steps. When the case is brought to court, the forensic analyst will be requested to give expert testimony regarding the digital evidence found; if the analyst can explain it properly, the court can accept it as strong evidence, beyond doubt.

On slide 4, it is described that digital forensics applies not only to computer crime but also to computer-related crime; that is, it covers the wide area of investigation wherever a computer is used. In such crimes, the computer plays one of three roles, namely the tool used to commit the crime, the target of the crime, or a medium for storing data related to the crime.

On slide 6, the definition of digital forensics is given: the application of computer science and IT in order to solve crime for the purposes of justice. Based on this definition, digital forensics plays several key roles, namely:
  1. To support and perform scientific crime investigation
  2. To perform forensic analysis on digital evidence
  3. To be able to describe a crime connection between suspect and evidence
  4. To deliver expert testimony at court.
On slide 8, I emphasise the digital forensic principles which must be applied whenever digital forensics is performed. These principles, derived from the guidelines of ACPO (Association of Chief Police Officers) in the UK, are as follows:
  1. Principle 1: No action taken by law enforcement agencies should change data held on a computer or storage media.
  2. Principle 2: The person accessing the data must be competent to do so and be able to explain the relevance and implications of the actions taken.
  3. Principle 3: An audit trail or record of all processes applied should be created and preserved.
  4. Principle 4: The person in charge has overall responsibility to ensure that these principles are adhered to.
The principles above must be applied by the digital forensic analyst when investigating computer crime or computer-related crime. Once one principle is missed, the results of the digital forensic analysis become weak and doubtful, and may even be refused by the court. These principles must be strictly implemented throughout the analysis.


For the other slides, please download the presentation material from the link above. I hope it is useful for anyone who would like to apply and develop digital forensics. Good luck....!


Monday, 5 October 2009

Forensic Cop Journal 1(3) 2009: Forensically Sound Write Protect on Ubuntu

Actually this journal is derived from my previous post concerning forensically sound write protection on Ubuntu, which was successfully experimented with before. Considering that this topic is so significant, I have turned it into an official journal. For this post I include only the Introduction and Experiments Preparation; the full PDF version of this journal can be downloaded at http://www.scribd.com/doc/20616188/Forensic-Cop-Journal-13-2009Forensically-Sound-Write-Protect-on-Ubuntu.



Introduction
 
The first principle according to ACPO (Association of Chief Police Officers) in the UK is "No action taken by law enforcement agencies or their agents should change data held on a computer or storage media which may subsequently be relied upon in court" (ACPO, p4). This principle, applied by forensic investigators around the world, requires the investigators to pay close attention when dealing with data stored on computer storage media. Once it is changed, the subsequent phases of examination will be considered weak and doubtful, and the results of the examination could even be rejected by the court. However, changes are still allowed when the investigators know exactly what their actions are and what their implications are, such as when performing live imaging.


To accommodate this principle, the investigators apply write protection during the examination process, particularly when making the first forensic image. Write protection can take the form of either software or hardware. On the Ms Windows OS, many forensically sound write protect tools are offered to users, most of them commercial. Write protection is also available on Ubuntu, but for free: a small modification of the fstab file configures an Ubuntu machine to be forensically sound write protected. This journal discusses this, including the experiments performed and the results obtained.


Experiments Preparation


A 4 GB flash disk is used as the object of these experiments. It is set up using GParted to configure the partitions, so that it has 4 partitions with different file systems. Below is the specification of each partition, with the operating system installed on it using UNetbootin.


Partition 1: size=996.19 MB and file system of ntfs.
Partition 2: size=996.22 MB and file system of fat16 with BartPE as operating system.
Partition 3: size=996.19 MB and file system of ext2 with Helix 3.0 as operating system.
Partition 4: size=847.15 MB and file system of ext3 with Ubuntu 8.10 as operating system.


For partition 1 in particular, there is no OS installed because it is designed for storing files. This configuration is intended to make the flash disk closely resemble a real hard disk having several partitions with different file systems.





Friday, 2 October 2009

Forensic Cop Journal 1 (2) 2009: Similarities and Differences between Ubuntu and Windows on Forensic Applications


This post is a development of a previous post on the same topic: the similarities and differences between Ubuntu and Windows on forensic applications. The previous post only discussed it in general, as a brief summary of experiments performed before; therefore, to give a comprehensive view of the topic, this post is issued in journal form. I include only the Introduction and Research Preparation sections below. If you wish, the full PDF version of this journal can be downloaded at http://www.scribd.com/doc/20514332/Forensic-Cop-Journal-12-2009Similarities-and-Differences-Between-Ubuntu-and-Windows-on-Forensic-Applications



Introduction
 
In dealing with computer crime, forensic investigators face volatile digital evidence which must be discovered as soon as possible: the sooner it can be recovered, the better the criminal investigators can handle the case, and it can even make it easier for the investigators to locate and catch the perpetrators. There are many ways to carry out a forensic investigation of computer crime; although there is a bunch of different techniques for this purpose, they essentially have the same goal, namely to recover the digital evidence and then present it to the court.


There are two conditions in which forensic investigators often work: forensic analysis under Microsoft Windows and under a Linux OS such as Ubuntu. Ms Windows and Ubuntu each have their own advantages and disadvantages with regard to computer forensic examination. To some extent they have similarities, but in other cases they also have differences. This journal describes the similarities and differences between Ubuntu and Ms Windows on forensic applications, including practical samples of forensic tools to support the opinions given.

Research Preparation


To keep this research on track, I performed some experiments based on my experience in investigating computer crime, setting up a 4 GB flash disk as the experimental object. I configured it into 3 partitions using the Partition Editor application on Ubuntu. The first partition is FAT32 with a size of 1000 MB, on which I installed Helix Forensics using the USB Startup Creator from Intrepid, so that it becomes a bootable flash disk able to run Helix Forensics live. I also put some files with different extensions (pdf, doc, odt, ppt, jpg, odp and so on) into different folders, and some of these files were then deleted. This first partition becomes one of the objects of the experiments. To keep the analysis focused, I limit the similarities to 5 points of view and the differences to 3.




Wednesday, 16 September 2009

Forensic Cop Journal 1(1) 2009: Symmetric and Asymmetric Cryptography in Brief Practice


Since cryptography offers tight security for people to encode their messages so that they are unreadable by a third party, most people are interested in using it to keep their privacy. It is expected that unauthorised people cannot read a message even if they gain access to it: as long as they do not have the encryption key, they will not be able to open it unless they use decryption tools. However, such tools have limited ability, depending on the type of cryptography and the key size; they cannot decode every encrypted message, because they only work for certain encryption types.

Based on this fact, criminals use cryptography to conceal essential information related to their crimes, so that the police or forensic investigators cannot open and read it. The perpetrators can use various types of cryptography and/or strong key sizes to encode their messages more securely; therefore cryptography is an important concern for forensic investigators, who must deal with it appropriately in order to solve the crime. It is a common fact that an encrypted message usually contains valuable information, so the forensic investigators are required to extract it. For this task, they have two duties: the first is to find the files, partitions and emails which have been encrypted, and the second is to try to decrypt them into readable form. The decrypted information might be useful for the police in investigating the crime.

Types of Cryptography
Simply put, cryptography converts a message from plain text to cipher text using a cryptographic algorithm such as DES (Data Encryption Standard), IDEA (International Data Encryption Algorithm), Blowfish or AES (Advanced Encryption Standard). Generally there are two types of cryptographic algorithm, namely symmetric and asymmetric. The clear and significant difference between them is the encryption key: only one key (a private key) in symmetric cryptography, and two keys (a public and a private key) in asymmetric. Both are described below.


Symmetric Cryptography
 

Initially, people used cryptography with a single key, meaning the key used for encryption is the same as the key used for decryption. This can be compared to a door key in the real world, since people use the same key for locking and unlocking a door. The key in cryptography drives a mathematical function designed to convert plaintext into ciphertext for encryption; with the same algorithm and key, the ciphertext can be converted back into the original text.

To control access to decryption, a passphrase governs the operation of the cipher, so that only authorised users who know the passphrase can decrypt. This means that even when the type of cryptographic algorithm has been identified, the ciphertext cannot be decrypted while the passphrase is still missing. The key, generated during the encryption process, comes in various sizes depending on the type of cryptography applied.

One well-known pioneering symmetric cryptographic algorithm is DES, which uses a 64-bit block and a 56-bit key. It is considered weak today because it can be broken by attacks such as brute force and cryptanalysis; therefore in 2002 it was superseded as the cryptographic standard in the US by AES, a 128-bit block cipher with key sizes of 128, 192 and 256 bits. Other frequently used symmetric algorithms are Twofish, with a 128-bit block and key sizes up to 256 bits; Blowfish, with a 64-bit block and key sizes between 32 and 448 bits; Serpent, with a 128-bit block and 128, 192 or 256-bit keys; CAST5, with a 64-bit block and 40 to 128-bit keys; and Triple DES, which applies DES three times with 56-bit keys. These algorithms can also be cascaded in combinations of two or three in order to increase security, such as AES-Twofish, Serpent-AES and AES-Twofish-Serpent. Such combinations have been implemented by certain applications such as TrueCrypt.
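The single-key principle can be tried directly on the command line. The sketch below uses OpenSSL's AES-256-CBC mode with a key derived from a passphrase; the file names and passphrase are illustrative, and the `-pbkdf2` option assumes OpenSSL 1.1.1 or later.

```shell
# Encrypt a plaintext file with AES-256 in CBC mode; the key is
# derived from the passphrase with PBKDF2.
echo 'meet at the usual place' > message.txt
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -in message.txt -out message.enc -pass pass:s3cret

# Decryption requires exactly the same passphrase -- the defining
# property of symmetric cryptography.
openssl enc -d -aes-256-cbc -pbkdf2 \
    -in message.enc -out recovered.txt -pass pass:s3cret

cmp message.txt recovered.txt && echo 'round trip OK'
```

Running the decryption step with any other passphrase fails outright, which is exactly the barrier an investigator faces when the passphrase is missing.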

Nowadays symmetric cryptography comes in varied and more appealing forms with nice graphical user interfaces (GUIs), such as Remora USB Disk Guard in figure 1 and PixelCryptor, which uses a picture as the bridge for encryption and decryption; this attracts people to use these tools for their current information security needs.




Figure 1
Remora USB Disk Guard, protected by two types of password (one for logon and one for encryption/decryption), is designed for mobile encryption on USB storage devices.



These applications are easy to use and introduce extra hurdles for an attacker; PixelCryptor, for example, cannot decrypt an encrypted package if the linked picture acting as the image key is missing or modified. Decrypting an encrypted package in PixelCryptor requires both the image key and the passphrase, as shown in figure 2. Besides these, there are symmetric cryptography applications with more ordinary GUIs, such as Kruptos, which uses Blowfish with a 128/256-bit key, and Blowfish Advanced CS, which offers various types of algorithm.





Figure 2
PixelCryptor uses an image file, as well as a passphrase, as the link to an encrypted package.



Another feature is the encrypted volume, which stores any files or folders to be encrypted by placing them inside the volume. The volume is actually a file that can be mounted as a virtual drive. Files and folders moved into the volume are encrypted automatically, which makes it easy for users to modify the encrypted objects instantly, as they wish. Once the volume is unmounted, all files and folders inside it remain encrypted and the virtual drive disappears; when it is mounted, the files and folders inside become accessible in decrypted form. This feature is delivered by TrueCrypt and LockDisk.


Asymmetric Cryptography
 

Since symmetric cryptography is considered inflexible and insecure for sharing the encryption key among users, asymmetric cryptography was developed. Asymmetric cryptography provides two different keys, namely a public key and a private key. The public key is designed to be shared with other people for encrypting plaintext into ciphertext, whereas the private key, which must be kept securely by its owner, is used to decrypt ciphertext back into plaintext.

This technique is considered secure because a user can distribute his public key to anybody he wants without worrying about interception by a third party. Even if the encrypted data is captured by other people, they cannot decrypt it without the private key. The key can even be revoked if the owner believes it has been stolen.
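The public/private split can be demonstrated with OpenSSL. This is a sketch with illustrative file names, using RSA as the sample asymmetric algorithm:

```shell
# Generate a 2048-bit RSA key pair and export the public half.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 \
    -out private.pem 2>/dev/null
openssl pkey -in private.pem -pubout -out public.pem

# Anyone holding public.pem can encrypt...
echo 'for your eyes only' > note.txt
openssl pkeyutl -encrypt -pubin -inkey public.pem \
    -in note.txt -out note.enc

# ...but only the owner of private.pem can decrypt.
openssl pkeyutl -decrypt -inkey private.pem \
    -in note.enc -out note.dec
cmp note.txt note.dec && echo 'decrypted with the private key'
```

Here public.pem can be handed to anyone, while note.enc is useless to an interceptor who lacks private.pem, which mirrors the interception scenario described above.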

One common use of asymmetric cryptography is email. Since people need to secure their email communication against interception by third parties, asymmetric cryptography is frequently used, as it is more flexible and secure than symmetric cryptography in distributing a public key to be shared with other people.



The asymmetric cryptography scheme most often used for encrypted email is PGP (Pretty Good Privacy), which provides privacy, authentication and integrity checking through the generation of public and private keys and digital signatures. For email encryption, the Enigmail plug-in, which provides OpenPGP, can be used as a security extension with mail clients such as Mozilla Thunderbird and SeaMonkey.





Figure 3
OpenPGP Enigmail within Mozilla Thunderbird generates a key pair (public key and private key) with the RSA algorithm and a 4096-bit key size.



OpenPGP Enigmail offers key-pair generation, as shown in figure 3, with 4096-bit RSA, an algorithm widely used in e-commerce protocols because it is accepted as providing strong security. Enigmail also provides key expiry, from as little as one day to no expiry at all, and revocation to terminate the key in case it is stolen or lost. Besides RSA, there are other algorithms such as DSA (Digital Signature Algorithm) and ElGamal. DSA is used for digital signatures, while ElGamal is an asymmetric scheme with three components, namely a key generator, an encryption algorithm and a decryption algorithm.





Figure 4
The encrypted message created by OpenPGP Enigmail within Mozilla Thunderbird uses the RSA algorithm with a 4096-bit key size.


A short experiment shows that encryption and decryption using OpenPGP Enigmail within Mozilla Thunderbird are quite easy to carry out and reliable for strong security. In this experiment, a sender sends his public key to a recipient. After obtaining the public key, the recipient sends back a message encrypted with it. The encrypted email is then decrypted by the sender using his private key. If the message is intercepted in transit over the network, the interceptor gains only ciphertext, as shown in figure 4; to decrypt it, he would need both the sender's private key and its passphrase.



Conclusion



Both symmetric and asymmetric cryptographic algorithms are frequently used nowadays, even by criminals; therefore forensic investigators should know about them and how to deal with them properly. Although it is almost impossible to break an encrypted message with a large key and block size, there is still a possibility of obtaining its contents. For instance, when a suspected computer is found still running, it should not be turned off immediately, because there may still be an encrypted message or volume in its decrypted state. Indeed, the only time an encrypted drive can be accessed is while it is mounted and running, which means it is currently decrypted.



Sunday, 13 September 2009

Brief Description of Similarities and Differences in Forensic Applications between Ubuntu and Windows


Investigators can perform forensic analysis either under Ubuntu 8.10 or under Windows XP when dealing with a computer crime case. To a certain extent, the two operating systems have many similarities, so forensic investigators need not be confused when deciding which operating system is suitable for carrying out a particular analysis.

Based on the explanations, supported by experience and some experiments, there are at least five points of similarity between Ubuntu 8.10 and Windows XP with regard to forensic analysis. They are:
 1.    Forensic Imaging, explained in the post on Experiment 11
 2.    Registry Analysis, described in Experiment 12
 3.    File Metadata Analysis, consisting of
        a)    Magic Number Analysis and
        b)    EXIF Information Analysis, discussed in Experiment 10
 4.    Internet Explorer Analysis, explained in Experiment 13
 5.    Unallocated Clusters Recovery, discussed in Experiment 14
 
Besides similarities, there are also differences between Ubuntu 8.10 and Windows XP related to forensic analysis. To some extent these differences make Ubuntu 8.10 more flexible, while in other respects they make Windows XP more familiar and easier to operate.

Based on the descriptions, experiments and experience, there are at least three differences between Ubuntu 8.10 and Windows XP for forensic analysis, namely:
 1.    Commercial versus Freeware
         a)    Cost of Applications

The biggest difference between Ubuntu 8.10 and Windows XP for forensic analysis is the cost of the applications: under Windows XP they are mostly commercial, while under Ubuntu 8.10 they are mostly free. Carrying out forensic analysis under Windows XP therefore requires a great deal of money to buy forensic tools, whereas investigators performing forensic analysis under Ubuntu 8.10 do not need to purchase tools, because they are open source with community support.

For instance, according to http://www.digitalintelligence.com and http://www.x-ways.net on 17 December 2008, below is the price list for some well-known forensic tools under Microsoft Windows:
•    The price of EnCase Forensic Version 6 from Guidance Software is US$ 3,600 for corporate standard and US$ 2,850 for government / law enforcement
•    The price of Forensic Toolkit (FTK) 2.0 from AccessData is US$ 3,835
•    The price of X-Ways Forensics from X-Ways Software Technology AG is EUR 685.90 for 1 license with 1 year update maintenance
 
On the other hand, most forensic tools under Ubuntu 8.10 or Linux cost nothing at all, for example Autopsy with The Sleuth Kit, dcfldd, exiftool, pasco, regviewer, GHex, foremost, PyFlag, AIR, md5deep, ntfsprogs and so on.

         b)    User Interface

All forensic tools under Ms Windows XP use a graphical user interface (GUI), which makes it much easier for forensic investigators, as the users, to operate the applications and obtain the best examination results. The expensive price buys this ease of use through the GUI.
 
On the other side, most forensic tools under Ubuntu 8.10 or Linux are console based, so forensic investigators have to understand the command line to run tools such as dcfldd, exiftool, foremost and md5sum. There are also GUI-based forensic tools such as Autopsy, regviewer, PyFlag and AIR, but these actually originate from command-line tools too: AIR is a front end for dcfldd for forensic imaging, Autopsy is built on The Sleuth Kit commands, and so on.

 2.    Blocks Imaging, explained in Experiment 9
 3.    The Bridge of Wine, discussed in Experiment 15

Experiment 15 on Wine as Ubuntu Super Bridge

I like the Wine application on Ubuntu a lot. It makes a significant difference between Ubuntu and Ms Windows, even though not all Windows applications can be installed under it. In some cases it is very helpful, and I suggest anybody install and use it to make the machine more flexible.

One of the amazing tools under Ubuntu 8.10 is Wine. Through this application, forensic investigators can run some Windows XP applications properly on an Ubuntu 8.10 machine; there is no equivalent mechanism under Windows XP.
 
Through Wine, Ms Office Password Recovery from Elcomsoft can be installed on an Ubuntu 8.10 machine. This application is often used by forensic investigators to recover passwords from Ms Office files. Normally this password recovery application can only run under Windows XP, not under an Ubuntu machine, but Wine makes it possible.
 
 Figure 1
The Ms Office Password Recovery application for Windows XP running under Ubuntu 8.10 through Wine
 
For this experiment, a Ms Word file was set up with password protection on opening. Through Wine, the password recovery tool was run under Ubuntu 8.10 to recover the password. The result was excellent: the password was recovered successfully.

Figure 2
The Ms Office password recovery application shows the recovered password after running under Ubuntu.
 
To some extent, Wine shows the advantage of Ubuntu 8.10 for forensic analysis by running some Windows XP forensic tools on an Ubuntu 8.10 machine.
 

Experiment 14 on Deleted Files Recovery under Ubuntu

This experiment was performed in December 2008 to support my statement on the similarities of forensic applications running under Ubuntu and Windows. From all the experiments I carried out under Ubuntu, I can say that Ubuntu is an excellent operating system, particularly when used for forensic purposes.


One request often made of forensic investigators is deleted file recovery, in order to obtain more evidence related to a case. When a file is deleted, the clusters it occupied are marked by the OS as 'unallocated' in the file allocation table. This means those clusters can be used by the OS to store new files, which will then overwrite the deleted file. As long as the unallocated clusters have not been occupied by other files, the deleted file can be recovered perfectly; otherwise it cannot be fully recovered, but there is still a possibility of gaining partial data from the deleted file as 'slack', which runs from the end of the file to the end of the cluster.
For this reason, an experiment using Autopsy running under Ubuntu was performed to carry out unallocated cluster recovery. The object of this experiment was the deleted files in the image file partition1.dd from a previous experiment I performed on forensic imaging.

After running the 'sudo autopsy' command, typing 'http://localhost:9999/autopsy' into the Firefox browser and entering input data such as case name, host name and image location, the Autopsy window is displayed, offering the forensic investigator choices such as file analysis, keyword search, file type, image details, metadata and data unit. In my view, Autopsy is one of the most powerful forensic tools I know.

Through file analysis, in the 'c:\ExperimentMaterials\Documents' directory, some deleted files were found, together with their written, accessed and created dates, size and metadata. The deleted files included 'Additional Papers for Strathclyde.doc', 'Alien Song.mpg', 'Analisa EnCase Cloned 1.ppt', 'CHFA v3 Module 01 Computer Forensics in Todays World.pdf' and so on. Deleted picture files were also found in the directory 'c:\ExperimentMaterials\Pictures'. These deleted files, which can be displayed as ASCII, hex or ASCII strings, can be extracted and saved elsewhere for further analysis.

Figure 1
Through Autopsy, the deleted files can be recovered including time stamps and metadata

The experimental results above are the same as the results I obtain when I run Forensic Toolkit (FTK), one of my favourite forensic tools for deleted file recovery. Although the results are the same, there is a big difference between the tools: Autopsy, which is based on The Sleuth Kit (TSK), is free, while FTK is commercial.
 

Experiment 13 on Internet Explorer Analysis under Ubuntu


This experiment was part of the class assignments performed in the computer laboratory of CIS Strathclyde. Surprisingly, all machines in this laboratory run Ubuntu as the operating system, so all forensic activities were carried out under Ubuntu. All applications used during the activities are free and flexible, and some are even more powerful than commercial applications running under Ms Windows.

Most computer users in the world use Microsoft Windows as their operating system, especially Windows XP, because most applications, whether commercial or freeware, are compatible with it. Forensic investigators have to take this into account, because evidence most frequently comes from Windows XP machines, including traces of Internet Explorer, which is installed by default by Microsoft. Internet Explorer is often used for browsing the internet, accessing email and so on.
 
In this experiment, the traces of Internet Explorer were analysed under Ubuntu 8.10 in order to obtain the browser's activity history. The tool used was the pasco command under Ubuntu 8.10.
For this experiment, the 'Local Settings' directory containing temporary internet files such as index.dat was copied from the experimental machine as the object of examination. After that, the command 'pasco index.dat > IEAnalysis.txt' was run, producing the file IEAnalysis.txt. If investigators open this file with vi, its contents are displayed irregularly, so they should instead use a spreadsheet application such as OpenOffice Spreadsheet or Gnumeric Spreadsheet, with which they can analyse the use of Internet Explorer easily and in more detail.
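Pasco writes its records as tab-separated fields, which is why a spreadsheet handles them better than vi. As a small sketch, the tab-separated output can also be converted to CSV on the command line before importing it; the sample records below are hypothetical stand-ins for a real run of 'pasco index.dat > IEAnalysis.txt', and the field layout is assumed.

```shell
# Hypothetical sample of pasco-style tab-separated output.
printf 'URL\thttp://www.forensicfocus.com\t12/17/2008 07:35:12\n'  > IEAnalysis.txt
printf 'URL\thttp://www.utica.edu\t12/17/2008 07:48:03\n'         >> IEAnalysis.txt

# Turn the tab-separated records into quoted CSV for the spreadsheet.
awk -F'\t' '{
    line = ""
    for (i = 1; i <= NF; i++)
        line = line (i > 1 ? "," : "") "\"" $i "\""
    print line
}' IEAnalysis.txt > IEAnalysis.csv

head IEAnalysis.csv
```

The resulting IEAnalysis.csv opens cleanly in OpenOffice or Gnumeric, with one column per field.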

 
 Figure 1
The output of the pasco command displayed in orderly form in a spreadsheet application

From the pasco output, a list of Internet Explorer activities was found, with time stamps (modified and accessed), file names and HTTP headers of websites the user had visited. Some of the websites were:
http://www.liputan6.com, http://www.forensicfocus.com, http://www.jsfce.com, http://certified-computer-examiner.com, http://www.utica.edu, http://en.wikipedia.org and so on, which were visited by the user on 17 December 2008 from 7.35am till 8am.

Experiment 12 on Windows Registry Analysis under Ubuntu

This experiment, like experiments 9, 10 and 11, is part of a set of experiments related to the class assignments performed in December 2008. In my view, an assignment report is more reliable if it is supported by a number of experiments as well as literature study; therefore, for most of my assignments during my course at Strathclyde, I usually performed some experiments to prove my statements.



The registry under a Ms Windows OS stores much important information, such as the users, the applications installed on a machine, the USB drives that have ever been attached to it and so on; it is therefore one of the prime targets for forensic investigators to search.

In this experiment, registry viewer applications running under Ubuntu 8.10 were used; the object was the registry from my experimental dual-boot machine.

Under Ubuntu 8.10, the cp command was run to copy five registry files from an experimental forensic image taken from a Windows machine:

/WINDOWS/system32/config/SAM
/WINDOWS/system32/config/SECURITY
/WINDOWS/system32/config/software
/WINDOWS/system32/config/system
/Documents\and\Settings/UserXP/NTUSER.DAT

After that, the regviewer application was used to analyse these files.

From /HKEY_LOCAL_MACHINE/SAM/Domains/Account/Users/Names, the list of users was obtained, namely Administrator, Guest, HelpAssistant, SUPPORT_388945a0 and UserXP.

Figure 1
/HKEY_LOCAL_MACHINE/SAM/Domains/Account/Users/Names shows the list of user accounts.

From /HKEY_LOCAL_MACHINE/ntuser.dat/Software and /HKEY_LOCAL_MACHINE/SOFTWARE, the list of vendors and their software installed on the target machine was obtained, such as AccessData with FTK and FTK Imager, Adobe with Acrobat Reader, America Online, BitComet and so on.

Figure 2
/HKEY_LOCAL_MACHINE/ntuser.dat/Software shows the list of software installed within the machine.

From /HKEY_LOCAL_MACHINE/SYSTEM/ControlSet002/Enum/USBSTOR, the list of storage devices that had ever been attached to the machine's USB ports was found, each with its unique entry, such as SanDisk-Cruzer, Fujitsu, Generic and so on.

Figure 3
/HKEY_LOCAL_MACHINE/SYSTEM/ControlSet002/Enum/USBSTOR shows the list of storage media ever attached to the machine

Experiment 11 on Similarity in Forensic Imaging between Ubuntu and Windows

This experiment was performed in order to find similarities in forensic imaging between applications running under Ubuntu and Windows XP. It was part of a larger set of experiments related to class assignments at Strathclyde in December 2008.


Imaging is the first thing to do when performing forensic analysis on hard drive evidence. If it is not handled appropriately, the subsequent phases of forensic examination will be weak, and the evidence may even be refused by the court; paying close attention to this phase is therefore compulsory for forensic investigators. Because it is so crucial, there is a strict rule for forensic imaging, namely 'make an image with a bit-stream copy'. It can be a physical image from hard drive to hard drive, or from hard drive to image file.

During the imaging process, forensic investigators have to be able to ensure that nothing has changed in either the hard drive or the image file. To do this, they can use hash-value checking such as MD5, comparing the MD5 value of the hard drive evidence with that of the image file or cloned drive. If they match, the forensic imaging has worked well; otherwise it has failed and cannot be accepted for the next examination phases.

Windows XP and Ubuntu 8.10 are similar on this point. Under Ubuntu 8.10, forensic investigators can select which device or partition they would like to image using the 'fdisk -l' command, then image the selected device or partition using the 'dcfldd' command. After the imaging process finishes, they verify the MD5 hash values of the source and the target to ensure that nothing changed during imaging.
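A minimal sketch of that workflow is shown below, using a small file in place of a real partition so it can be tried safely; plain dd stands in for dcfldd, which takes the same if=/of= arguments and can additionally hash on the fly.

```shell
# Stand-in for the evidence: a small file instead of e.g. /dev/sdb1.
dd if=/dev/urandom of=evidence.bin bs=1024 count=64 2>/dev/null

# Bit-stream copy, as dcfldd would do with if=/dev/sdb1 of=partition1.dd.
dd if=evidence.bin of=partition1.dd bs=512 2>/dev/null

# Verify: the two MD5 values must match or the image is rejected.
md5sum evidence.bin partition1.dd
```

The two md5sum lines must show the identical hash; any difference means the image cannot be accepted for the next examination phases.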


Figure 1
The use of the 'fdisk -l' command to confirm the devices and partitions attached to the machine

Figure 2
The use of the 'dcfldd' command to perform imaging and 'md5sum' to obtain the MD5 hash value

From the experiment described in the figures above, the MD5 hash value of partition 1 was 0171fbb2536ccd6c5fe6607743c9de17, the same as the MD5 value of partition1.dd. This means the imaging process is acceptable for forensic purposes.

Under Windows XP, FTK Imager from AccessData was run to image the same partition1. FTK Imager offers forensic investigators three image formats: Raw (dd), SMART and E01. In this case, Raw (dd) was the most appropriate for imaging partition1. FTK Imager also provides a window for entering case details such as case number, evidence number and investigator name; these data do not interfere with the imaging process or the MD5 hash value.

Figure 3
FTK Imager showing the partitions on the experimental flash disk

After the imaging process finishes, FTK Imager runs a verification pass to compute the MD5 hash of the image and compare it to the MD5 hash of the source. In this experiment, the MD5 hash of the source drive of partition1 was 0171fbb2536ccd6c5fe6607743c9de17, the same as the MD5 hash of the image.



Figure 4
FTK Imager verifies hash value between drive and image by using MD5 and SHA1

The MD5 hash values obtained from dcfldd under Ubuntu 8.10 and from FTK Imager under Windows XP are the same. This shows the similarity in the forensic imaging process between Ubuntu 8.10 and Windows XP; it is therefore up to forensic investigators which way they prefer.

Friday, 11 September 2009

Experiment 10 on Analysing a Fake Image under Ubuntu

This experiment, performed in December 2008, was part of a set of experiments related to the class assignments seeking the similarities of forensic analysis between Ubuntu and Windows XP.


EXIF, which stands for Exchangeable Image File Format, is an image file format specification that adds metadata tags to JPEG, TIFF Rev. 6.0 and RIFF WAV files. The metadata tags cover date and time information, camera settings, a picture thumbnail, and description and copyright information.

This EXIF metadata is important and is often used to check the originality of an image. A jpg file can be manipulated with a picture editor such as Adobe Photoshop, but this also changes the EXIF metadata, such as the X and Y resolution, time stamps and the picture editor software; therefore the technique of recovering EXIF information from a jpg file is often used by forensic investigators dealing with a fake picture case.


For this experiment, two jpg files were analysed to obtain their EXIF metadata using the exiftool command under Ubuntu 8.10: an original jpg file and a fake jpg file, the latter manipulated from the original.


Under Ubuntu 8.10, exiftool was run from the command console on the first jpg file and gave the following EXIF information (as shown in figure 1):

File Modification Date/Time: 2008:02:16 08:46:38
X Resolution: 72
Y Resolution: 72
Resolution Unit: inches
Exif Version: 0210
Thumbnail Offset: 274
Thumbnail Length: 2185
Encoding Process: Baseline DCT, Huffman coding
Image Size: 640 x 480
 


Figure 1
The exiftool gives the EXIF information such as File Modification Date/Time, X Resolution, Y Resolution, Exif Version and so on.


This EXIF information is then analysed and compared to the EXIF information of the second jpg file in order to judge the originality of the picture. For the second jpg file, exiftool displays the following EXIF information (see figure 2):

File Modification Date/Time: 2008:02:16 09:36:46
X Resolution: 524
Y Resolution: 524
Resolution Unit: inches
Software: Adobe Photoshop 7.0
Exif Version: 0210
Thumbnail Offset: 372
Thumbnail Length: 3825
Encoding Process: Baseline DCT, Huffman coding
Image Size: 320 x 238

 




Figure 2
The exiftool displays the EXIF information of the fake jpg file, containing the Software tag, the RGB Tone Reproduction Curve and so on


By analysing the EXIF information of both files above, forensic investigators can conclude that the second jpg picture is fake, because its EXIF information reveals the software, Adobe Photoshop, that was used to manipulate the picture, along with RGB Tone Reproduction Curve information and so on. There are also differences in File Modification Date/Time, X Resolution, Y Resolution, Thumbnail Offset, Thumbnail Length and Image Size between the original and the fake.
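The comparison above can be automated. The sketch below stores the two exiftool listings (the values reproduced from the figures above) as text files and lets diff point out the fields that betray the manipulation; in a real case the files would come from redirections such as 'exiftool original.jpg > original_exif.txt', and the file names here are illustrative.

```shell
# EXIF listing of the original image (values as reported in figure 1).
cat > original_exif.txt <<'EOF'
File Modification Date/Time: 2008:02:16 08:46:38
X Resolution: 72
Y Resolution: 72
Resolution Unit: inches
Exif Version: 0210
Thumbnail Offset: 274
Thumbnail Length: 2185
Encoding Process: Baseline DCT, Huffman coding
Image Size: 640 x 480
EOF

# EXIF listing of the suspect image (values as reported in figure 2).
cat > suspect_exif.txt <<'EOF'
File Modification Date/Time: 2008:02:16 09:36:46
X Resolution: 524
Y Resolution: 524
Resolution Unit: inches
Software: Adobe Photoshop 7.0
Exif Version: 0210
Thumbnail Offset: 372
Thumbnail Length: 3825
Encoding Process: Baseline DCT, Huffman coding
Image Size: 320 x 238
EOF

# Lines prefixed '>' exist only in the suspect file; the Software tag
# alone is enough to flag the manipulation.
diff original_exif.txt suspect_exif.txt || true
```

The '|| true' simply absorbs diff's non-zero exit status when the files differ, so the command can be used in scripts that stop on error.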


Another way to check the originality of an image is pixel analysis. This technique zooms into the image, up to thousands of times, to examine the arrangement of the pixels: if it is normal, the image is original; otherwise it is fake. This technique will be discussed in a later post.