Sunday, 22 November 2009
A few weeks ago, I joined the Audio Forensic Training jointly conducted by the Forensic Laboratory Centre of the Indonesian National Police Headquarters and Cedar Cambridge. In this training, we practised the latest noise-filtering techniques using the Cedar instrument installed in the Audio Laboratory at my office. According to Dr. David Robinson, who was also the instructor of this training, the audio lab we have is the best Cedar lab in South East Asia.
In this training, we practised removing a wide range of noise. In some cases, the recorded voice is not clear because of noise, or even cannot be heard at all: the noise is much louder than the human voice. With the assistance of Cedar's many powerful filtering modules, we succeeded in removing the noise and making the human voice clear to listen to.
Besides this, Cedar also provides a feature to recognise edit points when a voice recording has been edited. Cedar can detect the moment at which the editing occurred by displaying a vertical line; through this line, we can see that something changed before and after that point. If this happens, it means the voice recording is no longer original. The editing could have been done to remove unwanted parts or to add new ones. In such a case, the recording could be rejected and not accepted for forensic analysis, because its content has been changed.
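Cedar's filtering modules are proprietary, so I cannot show them here; but as a rough illustration of the same basic idea of profile-based noise reduction, here is a minimal sketch using the open-source SoX tool (the file names are hypothetical, and this is a generic technique, not Cedar's):

    # Build a noise profile from ~0.5 s of noise-only audio at the start of the file
    sox noisy.wav -n trim 0 0.5 noiseprof noise.prof
    # Subtract the noise profile; 0.21 is the amount of reduction (0-1)
    sox noisy.wav cleaned.wav noisered noise.prof 0.21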
Cedar also provides a spectrogram for each word spoken. This is useful for the purposes of voice identification or verification: with this feature, we can apply forensic phonetics to compare the spectrograms of a questioned voice and a known voice, so that we can determine whose voice it is. In this case, we follow a technique based on the FBI procedure for forensic phonetics, described by Bruce E. Koenig in his 1986 paper "Spectrographic Voice Identification: A Forensic Survey". In that paper he also explains that, for meaningful results, the comparison should be performed on at least 20 different words that are pronounced similarly. Below is the complete quotation from his paper describing the comparison procedure (a small spectrogram sketch of my own follows it):
(1) Only original recordings of voice samples were accepted for examination, unless the original recording had been erased and a high-quality copy was still available.
(2) The recordings were played back on appropriate professional tape recorders and recorded on a professional full-track tape recorder at 7 1/2 ips. When possible, playback speed was adjusted to correct for original recording speed errors by analyzing the recorded telephone and AC line tones on spectrum analysis equipment. When necessary, special recorders were used to allow proper playback of original recordings that had incorrect track placement or azimuth misalignment.
(3) Spectrograms were produced on Voice Identification, Inc., Sound Spectrographs, model 700, in the linear expand frequency range (0-4000 Hz), wideband filter (300 Hz) and bar display mode. All spectrograms for each separate comparison were prepared on the same spectrograph. The spectrograms were phonetically marked below each voice sound.
(4) When necessary, enhanced tape copies were also prepared from the original recordings using equalizers, notch filters, and digital adaptive predictive deconvolution programs [13,14] to reduce extraneous noise and correct telephone and recording channel effects. A second set of spectrograms was then prepared from the enhanced copies and was used together with the unprocessed spectrograms for comparison.
(5) Similarly pronounced words were compared between two voice samples, with most known voice samples being verbatim with the unknown voice recording. Normally, 20 or more different words were needed for a meaningful comparison. Less than 20 words usually resulted in a less conclusive opinion, such as possibly instead of probably.
(6) The examiners made a spectral pattern comparison between the two voice samples by comparing beginning, mean and end formant frequency, formant shaping, pitch, timing, etc., of each individual word. When available, similarly pronounced words within each sample were compared to insure voice sample consistency. Words with spectral patterns that were distorted, masked by extraneous sounds, too faint, or lacked adequate identifying characteristics were not used.
(7) An aural examination was made of each voice sample to determine if pattern similarities or dissimilarities noted were the product of pronunciation differences, voice disguise, obvious drug or alcohol use, altered psychological state, electronic manipulation, etc.
(8) An aural comparison was then made by repeatedly playing two voice samples simultaneously on separate tape recorders, and electronically switching back and forth between the samples while listening on high-quality headphones. When one sample had a wider frequency response than the other, bandpass filters were used to compensate during at least some of the aural listening tests.
(9) The examiner then had to resolve any differences found between the aural and spectral results, usually by repeating all or some of the comparison steps.
(10) If the examiner found the samples to be very similar (identification) or very dissimilar (elimination), an independent evaluation was always conducted by at least one, but usually two other examiners to confirm the results. If differences of opinions occurred between the examiners, they were then resolved through additional comparisons and discussions by all the examiners involved. No or low confidence decisions were usually not reviewed by another examiner.
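Outside the quotation, a quick practical note on step (3): the FBI used dedicated sound spectrographs, but the general idea of a wideband spectrogram can be sketched with the open-source SoX tool (file names are hypothetical; this is only a generic illustration, not the FBI's or Cedar's tooling):

    # Render one spectrogram image per word recording; resampling to 8 kHz
    # limits the display to 0-4000 Hz, matching the range in step (3)
    sox questioned_word.wav -n rate 8k spectrogram -o questioned.png
    sox known_word.wav -n rate 8k spectrogram -o known.png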
According to his survey, only 1 false identification (0.31%) was found among 318 forensic-phonetics identification cases, while only 2 false eliminations (0.53%) were found among 378 elimination cases. These figures indicate that the FBI technique is reliable for voice identification and verification.
Cedar is reliable for running this forensic-phonetics procedure, just as it is for noise filtering and edit-point detection.
Good luck...!
Thursday, 19 November 2009
Face Sketching
This material is actually my presentation slides from when I was requested to be an instructor on the Frontline Forensic Course in Indonesia. The course runs from 16 November to 4 December 2009. On this course, I deliver teaching materials on Digital Forensics, Face Sketching, Forensic Photography, Fire Investigation and GPS. In this post, I describe only my materials on Face Sketching. The full version of this material can be downloaded at this link http://www.scribd.com/doc/22742609/Face-Sketching.
Face sketching is required when criminal investigators would like to obtain a description of somebody based on witness testimony. It is performed to produce an image of the suspect's face, so that the suspect is easy for investigators to recognise, and even the public can identify him whenever and wherever they see him. To achieve this, the results of face sketching are distributed not only to law enforcement agencies but also to the public. When people see the suspect, they can contact a law enforcement agency to report his whereabouts, and the investigators can then arrest him quickly.
However, there are also problems encountered in the process of face sketching, such as:
1. Witnesses might have seen the suspect only at a glance, which is not enough to identify him fully, as the witness cannot describe the facial components in detail.
2. Witnesses might have seen the suspect from behind and/or from the right or left side, so that the description of the suspect's face is insufficient.
3. The limits of the witnesses' memory. Even when a witness has seen the suspect, there is no guarantee that he can recall the suspect's face in detail; the description may be distorted because the witness cannot remember each facial component.
4. The lack of light. When a witness saw the suspect in a dark situation (i.e. with little light), he cannot describe the suspect's face fully because he could not see it well.
In the process of face sketching, there are two components which should be considered, namely:
1. General characteristics
2. Class characteristics
General characteristics describe the general pattern of each facial component, such as:
1. Eyes
2. Noses
3. Eyebrows
4. Hair
5. Lips
6. Head shapes
7. Jaw shapes
8. Mustaches
9. Beards
10. Eye lines
11. Smile lines
Meanwhile, class characteristics describe a specific pattern of a component and/or a specific mark on the face, such as:
1. Mole or beauty spot
2. Cockeye or a cast in the eye
3. Harelip
4. Scar or wound print
The techniques developed for the process of face sketching are:
1. Manual. It requires a skilled sketch artist.
2. Automatic. It requires a reliable computer application.
For a detailed description of face sketching, please click the link above. Hopefully it will be useful for anybody who would like to explore face recognition. Good luck...!
Monday, 2 November 2009
Digital Forensic: State of the art
It has been a long time since I posted a new topic on this blog, and I apologise for that: I have been very busy with crime scene processing and digital forensic analysis.
In this post, I would like to describe digital forensics in more detail, from the investigation flowchart and the digital forensic procedure to a case study. It takes the form of a presentation which will be delivered at the British Council, Jakarta on 7 November 2009, on the occasion of the 25th anniversary of the British Chevening Scholarship Scheme. I am invited to deliver this topic because I was awarded a Chevening scholarship for my MSc in Forensic Informatics at the University of Strathclyde, UK in 2008/2009. The presentation can be downloaded at http://www.scribd.com/doc/22000028/Digital-Forensic-State-of-the-Art-BC071109.
On slide 3, I explain that in the investigation of computer crime and computer-related crime, digital forensics gives full technical support to criminal investigators in order to solve the case. The digital evidence found by the digital forensic analyst becomes the basis on which the investigator decides further investigative steps. When the case is brought to court, the forensic analyst will be requested to give expert testimony regarding the digital evidence found. If the analyst can explain it properly, it can be accepted by the court as strong evidence, beyond doubt.
On slide 4, it is described that digital forensics applies not only to computer crime but also to computer-related crime; that is, digital forensics covers the wide area of investigations in which a computer is used. In these crimes, the computer plays one of three roles: as the tool used to commit the crime, as the target of the crime, or as a medium for storing data related to the crime.
On slide 6, the definition of digital forensics is given: the application of computer science and information technology in order to solve a crime for the purposes of justice. Based on this definition, digital forensics plays some key roles, namely:
- To support and perform scientific crime investigation
- To perform forensic analysis of digital evidence
- To describe the connection between the suspect and the evidence in a crime
- To deliver expert testimony in court.
The presentation also sets out the four ACPO principles for handling digital evidence:
- Principle 1: No action taken by law enforcement agencies should change data held on a computer or storage media.
- Principle 2: The person accessing the data must be competent to do so and be able to explain the relevance and implications of the actions taken.
- Principle 3: An audit trail or record of all processes applied should be created and preserved.
- Principle 4: The person in charge has overall responsibility for ensuring that these principles are adhered to.
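As a minimal sketch of how Principles 1 and 3 can be honoured in practice (the device and file names here are hypothetical, and the evidence disk is assumed to sit behind a write blocker):

    # Image the evidence disk, tolerating read errors
    sudo dd if=/dev/sdb of=evidence.dd bs=4096 conv=noerror,sync
    # Record hashes of source and image as part of the audit trail;
    # the two hashes should match if imaging was error-free
    sudo sha256sum /dev/sdb | tee -a audit_log.txt
    sha256sum evidence.dd | tee -a audit_log.txt
    date >> audit_log.txt    # timestamp the actions taken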
For the other slides, please download the presentation material from the link above. I hope it proves useful for anyone who would like to apply and develop digital forensics. Good luck....!
Monday, 5 October 2009
Forensic Cop Journal 1(3) 2009: Forensically Sound Write Protect on Ubuntu
Actually, this journal article is derived from my previous post concerning forensically sound write protection on Ubuntu, which I had already experimented with successfully. Considering how significant this topic is, I have turned it into an official journal article. Here I include only the Introduction and Experiments Preparation; the full PDF version of the article can be downloaded at http://www.scribd.com/doc/20616188/Forensic-Cop-Journal-13-2009Forensically-Sound-Write-Protect-on-Ubuntu.
Introduction
The first principle according to ACPO (Association of Chief Police Officers) in the UK is “No action taken by law enforcement agencies or their agents should change data held on a computer or storage media which may subsequently be relied upon in court” (ACPO, p4). This principle, which is applied by forensic investigators around the world, requires investigators to pay close attention when dealing with data stored on computer storage media. Once the data is changed, the subsequent phases of examination will be considered weak and doubtful, and the results of the examination could even be rejected by the court. However, changes are still permissible when the investigators know exactly what actions they are taking and what the implications are, such as when performing live imaging.
To accommodate this principle, investigators apply write protection during the examination process, particularly when making the initial forensic image. Write protection can take the form of either software or hardware. On Ms Windows, many forensically sound write-protect tools are offered to users, most of them commercial. Write protection is also available on Ubuntu, and for free: a small modification of the fstab file configures an Ubuntu machine as a forensically sound write blocker. This article discusses this approach, including the experiments performed and the results obtained.
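The full configuration is in the PDF; as a minimal sketch of the fstab idea (the device name and mount point are hypothetical), a line such as the following mounts a suspect partition read-only and prevents Ubuntu from touching it automatically:

    # /etc/fstab entry: 'ro' forces read-only, 'noauto' disables automounting,
    # 'noexec,nosuid' add further caution
    /dev/sdb1  /mnt/evidence  auto  ro,noauto,noexec,nosuid  0  0

The examiner then mounts the partition explicitly when needed:

    sudo mkdir -p /mnt/evidence
    sudo mount /mnt/evidence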
Experiments Preparation
A 4 GB flash disk is used as the object of these experiments. It is set up using GParted to configure the partitioning, so that it has 4 partitions with different file systems. Below is the specification of each partition, with the operating system installed on it using Unetbootin.
Partition 1: size=996.19 MB and file system of ntfs.
Partition 2: size=996.22 MB and file system of fat16 with BartPE as operating system.
Partition 3: size=996.19 MB and file system of ext2 with Helix 3.0 as operating system.
Partition 4: size=847.15 MB and file system of ext3 with Ubuntu 8.10 as operating system.
For partition 1 in particular, no OS is installed because it is designed for storing files. This configuration is intended to make the flash disk closely resemble a real hard disk that has several partitions with different file systems.
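For readers who prefer the command line to GParted, the same layout could be sketched with parted and the mkfs tools (the device name /dev/sdb is hypothetical; double-check it first, as these commands destroy existing data):

    sudo parted /dev/sdb mklabel msdos
    sudo parted /dev/sdb mkpart primary ntfs  1MiB    997MiB
    sudo parted /dev/sdb mkpart primary fat16 997MiB  1993MiB
    sudo parted /dev/sdb mkpart primary ext2  1993MiB 2989MiB
    sudo parted /dev/sdb mkpart primary ext3  2989MiB 100%
    sudo mkfs.ntfs -Q /dev/sdb1      # quick-format NTFS
    sudo mkfs.vfat -F 16 /dev/sdb2   # FAT16
    sudo mkfs.ext2 /dev/sdb3
    sudo mkfs.ext3 /dev/sdb4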
Friday, 2 October 2009
Forensic Cop Journal 1 (2) 2009: Similarities and Differences between Ubuntu and Windows on Forensic Applications
This post is a development of a previous post on the same topic: the similarities and differences between Ubuntu and Windows with respect to forensic applications. The previous post discussed it only in general terms, as a brief summary of experiments performed earlier; therefore, to give a comprehensive view of the topic, this post is issued in journal form. I include only the Introduction and Research Preparation sections below. If you wish, the full PDF version of this article can be downloaded at http://www.scribd.com/doc/20514332/Forensic-Cop-Journal-12-2009Similarities-and-Differences-Between-Ubuntu-and-Windows-on-Forensic-Applications
Introduction
In dealing with computer crime, forensic investigators are faced with volatile digital evidence which must be recovered as soon as possible: the sooner it is recovered, the better the criminal investigators can handle the case, and it can even make it easier for the investigators to locate and catch the perpetrators. There are many ways to carry out a forensic investigation of a computer crime. Although there is a variety of different techniques for this purpose, they essentially have the same goal, namely to recover the digital evidence and then present it in court.
There are two environments that forensic investigators often deal with: forensic analysis under Microsoft Windows and under a Linux OS such as Ubuntu. Ms Windows and Ubuntu each have their own advantages and disadvantages with regard to computer forensic examination. To some extent they have similarities, but in other respects they also differ. This article describes the similarities and differences between Ubuntu and Ms Windows with respect to forensic applications, including practical samples of forensic tools to support the discussion.
Research Preparation
To keep this research on track, I performed some experiments, based on my experience investigating computer crime cases, by setting up a 4 GB flash disk as the experimental object. I configured it into 3 partitions using the Partition Editor application on Ubuntu. The first partition is FAT32 with a size of 1000 MB, on which I installed Helix Forensics using the USB Startup Creator from Intrepid, so that it becomes a bootable flash disk running Helix Forensics live. I then put in some files with different extensions, such as pdf, doc, odt, ppt, jpg and odp, in different folders; some of these files were then deleted. This first partition becomes one of the objects of the experiments. To keep the analysis focused, I limit the similarities to 5 points of view and the differences to 3.
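As a minimal sketch of how such test data might be laid down (the mount point and file names are hypothetical):

    # Mount the FAT32 partition and populate it with files of various types
    sudo mount /dev/sdb1 /mnt/experiment
    mkdir -p /mnt/experiment/docs /mnt/experiment/images
    cp report.pdf notes.doc draft.odt /mnt/experiment/docs/
    cp slides.ppt photo.jpg /mnt/experiment/images/
    # Delete a few files so the experiments have something to recover
    rm /mnt/experiment/docs/draft.odt /mnt/experiment/images/photo.jpg
    sudo umount /mnt/experiment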