Study on Use of Force and Police Body Cameras

TASER International’s AXON flex™ On-Officer Police Camera system with Controller, wearable Camera on Oakley® Flak Jacket Glasses

A new study examined whether the use of police-worn body cameras has any effect on police use of force against suspects. The study also explored whether the cameras affect the rate of assaults against police. The results are fascinating.

In short, the study found that officers who kept their body cameras running all day showed a decrease in use-of-force incidents, while officers who turned their cameras on and off at their discretion during a shift showed an increase in use-of-force incidents. Perhaps most surprisingly, the study found that police who wore body cameras, whether switched on or off, were 15% more likely to be assaulted in the line of duty. Read the entire article here.


FBI Uses Audio Surveillance Devices Outside Courthouse

What happens when the FBI plants audio surveillance devices without a warrant?

The use of an audio surveillance device, such as a hidden microphone or wiretap (a "bug"), is typically limited to law enforcement officers who have procured a warrant. For an agent of the government to secretly record a conversation without a warrant is generally considered a violation of the subject's 4th Amendment protection. Recently, several FBI agents found themselves in hot water for planting audio surveillance devices in at least three locations around the entrance of the San Mateo County courthouse without any warrant or judicial approval. In a case such as this, it is crucial to understand several things: 1) what the laws are regarding the use of audio surveillance devices by police, 2) whether these recordings can be used in court (since they were essentially obtained unlawfully), and 3) what techniques an audio forensic expert might use to prepare audio evidence created by such a device for court or other proceedings.

What Did the FBI Do that Was So Wrong?

In 2009 and 2010, the FBI planted several audio surveillance devices: one in a metal sprinkler box attached to a wall near the courthouse entrance, a second in a large planter box to the right of the entrance, and a third near the vehicles parked on the street in front of the entrance. The FBI's purported goal was to record several real estate investors whom it believed were rigging bids and colluding to deflate prices at the public foreclosure auctions held on the courthouse steps. Agents activated the listening devices an hour before the auctions on at least 31 occasions between December 22, 2009 and September 15, 2010, and switched them off some time after the auctions concluded. The defense lawyers in this case asked the U.S. District judge to throw out more than 200 hours of conversations recorded by FBI agents.

During the proceedings, Senior District Judge Charles R. Breyer expressed what he described as a gut level discomfort with the notion of government agents listening at the courthouse door. The judge said, “Let’s say I was out of that courthouse that day, I used the staff entrance and I turned my [sic] law clerk. I wouldn’t know [about that recording], would I, unless the government turned it over?”

Judge Breyer said that the targets of the investigation, the real estate investors, likely believed that their side conversations at the public auctions were private. Whether or not that expectation of privacy was reasonable, he added, would determine whether the 200-plus hours of recordings, and all evidence arising from them, would be suppressed.

The real estate investors' defense attorneys at Latham & Watkins argued that the FBI does not have the right to use audio surveillance devices without appropriate warrants and that their clients had a reasonable expectation of privacy, even though the conversations took place in public. To support this position, they cited precedent from several well-known cases on the issue.

Case Law on the 4th Amendment and Audio Surveillance

In Katz v. United States, 389 U.S. 347, 351-52 (1967), the seminal case on modern Fourth Amendment interpretation, the Supreme Court affirmed the right of individuals to be free from warrantless government eavesdropping in places accessible to the public. Speaking in a public place does not mean that the individual has no reasonable expectation of privacy (e.g., in a public telephone booth). In Wesley v. WISN Division-Hearst Corp., 806 F. Supp. 812, 814 (E.D. Wis. 1992), the court stated, "[W]e do not have to assume that as soon as we leave our homes we enter an Orwellian world of ubiquitous hidden microphones." A private communication in a public place therefore qualifies as a protected "oral communication" under Title III of the Omnibus Crime Control and Safe Streets Act of 1968, 18 U.S.C. § 2510 et seq. ("Title III"), and may not be intercepted without judicial authorization (e.g., a warrant).

Did the FBI Need a Warrant?

David J. War, the attorney representing the U.S. Department of Justice, argued that the defendants participated in a conspiracy to rig bids and commit mail fraud at public real estate foreclosure auctions in the San Francisco Bay Area. He argued that little about these auctions was private: they were attended by dozens of people. They were held in an open and public area outside the rear, employee-only entrance to a county building that housed a courthouse and sheriff's offices. They were conducted near a closed-circuit surveillance camera that was conspicuously placed above that entrance. And they were adjacent to a curbside marked as designated for law-enforcement vehicles. County employees, uniformed law enforcement, and any and all bidders who wished to attend the auctions regularly used the space where the auctions were held and where the defendants' conversations and activities were recorded. Under these circumstances, War argued, the defendants cannot establish that they had a reasonable expectation of privacy; their conversations and activities were exposed for the public to hear and see and are therefore not protected by the Fourth Amendment or Title III.

Little Girls and Their iPhones

Arguments over whether the audio evidence may be used in court were heard in April 2016. The judge has yet to make a ruling, so it is unclear whether the evidence will be allowed. Historically, there are times when unlawfully obtained audio recordings have been admitted as evidence. In a recent custody dispute out of a Los Angeles family court, the children in question secretly recorded conversations with their father using an iPhone as an audio surveillance device. Despite such a recording being considered wiretapping, and therefore unlawfully obtained, the judge still admitted the recordings and considered them as evidence in making a custody decision. The major difference, however, is that the children were not acting as agents of the government (i.e., the police). Therefore, even though the recordings were unlawfully obtained, there was no 4th Amendment violation.

A case such as this family law example illustrates that even unlawfully obtained evidence may, in some circumstances, be admissible in court. But even though these iPhone recordings could be used in court, the people who made them may have opened themselves up to civil litigation, fines, penalties, or even imprisonment.

If the FBI is successful in getting its audio recordings admitted into evidence, it may be necessary for an audio forensic expert to enhance aspects of the recordings to better understand what is being said. For instance, since the recordings were made outside, noise such as passing cars or wind may interfere. Various enhancement techniques can be used to reduce such interference. Presumably there were also other people talking who had nothing to do with the case, so it may be necessary to isolate and, where possible, eliminate unrelated conversations. Likewise, the defense may want to enhance aspects of the audio in order to create a forensic transcript, which may become vital in refuting claims made by the prosecution.
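As a rough illustration of one such enhancement step, the sketch below applies a high-pass filter to attenuate low-frequency wind and traffic rumble in a speech recording. This is only a minimal example, assuming a WAV file and a simple Butterworth filter; the file name is hypothetical, and an actual forensic workflow would be documented, repeatable, and far more involved.

```python
# Minimal sketch: attenuate low-frequency wind/traffic rumble in a speech
# recording with a high-pass filter. Illustrative only; real forensic
# enhancement follows documented, repeatable workflows with many more steps.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

rate, audio = wavfile.read("courthouse_recording.wav")  # hypothetical file
audio = audio.astype(np.float64)

# 4th-order Butterworth high-pass at 150 Hz (speech energy sits mostly above this)
sos = butter(4, 150, btype="highpass", fs=rate, output="sos")
filtered = sosfiltfilt(sos, audio, axis=0)  # zero-phase filtering

wavfile.write("courthouse_filtered.wav", rate, filtered.astype(np.int16))
```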


The 5th Amendment and Digital Evidence

Can the government compel someone to decrypt their own hard drive if the digital evidence may be self-incriminating?

A former Pennsylvania police officer has spent the last seven months in jail for refusing to decrypt his own hard drives. The prosecution believes the hard drives may have unlawful images stored on them, and the court therefore ordered the former officer to decrypt them, which he refused to do. The defendant claims that by decrypting the hard drives he would essentially be testifying against himself, which the 5th Amendment protects him from being compelled to do.

The outcome of this case may become case law on whether a court may compel someone to decrypt or unlock their own encrypted digital evidence, such as a hard drive or an iPhone. In the case mentioned above, the Electronic Frontier Foundation (EFF) claims, "Compelled decryption is inherently testimonial because it compels a suspect to use the contents of their mind to translate unintelligible evidence into a form that can be used against them." Therefore, the EFF argues, one should not be compelled to decrypt one's own digital evidence.


Cars – The New Mobile Surveillance Units

Data from the technology within a car could be collected (or even hacked) and used in court, with audio forensic services employed to process, evaluate, or enhance the evidence.

One may not think it, but modern cars have found a practical, albeit unexpected, place within the field of audio forensic services. Numerous technologies power the telematics and infotainment services of automobiles: on-board vehicle electronics, global positioning system (GPS) satellite communications, short-range and wide-range wireless telecommunications, data analytics, cloud computing, and the internet are just a few of the technologies combined in various ways to power the highly automated vehicles we use today. Some of these communication systems, such as GM's "OnStar", even have the potential to be accessed by the government. What's more, every movement the vehicle makes, and every location it visits, could be tracked and catalogued by the government.

In this post we will discuss the components and technologies that power the telematics and entertainment systems of our cars. This will help us better understand the privacy implications of connected cars, the sources from which important data and evidence can be gathered for audio forensic services, and how that evidence may be used in court. First we'll explore some of the technology behind vehicle telematics systems; then we'll discuss how these components could potentially be used in court, used against the vehicle operator, or utilized by audio forensic services.

Cars and Audio Forensic Services

There are many components in a car that use modern computing power. Sensors measure various activities and components within the vehicle, and the computers in a modern vehicle process many millions of lines of code to make the vehicle operate exactly as the carmaker intended. The communication network that allows these sensors and components to work together is referred to as a bus; it carries data on functions such as engine temperature, engine RPM, throttle position, vehicle speed and orientation, distance travelled, fuel levels and consumption, door open/close status, tire pressure, ignition, headlights/tail-lights, battery status, cumulative idling, odometer, trip distance, braking activity, and much more. The data these sensors compile is available directly via the OBD port, and with wireless connectivity it can be transferred to servers where it can be further processed and used. It is possible, therefore, for an outside entity to access (or hack) a car's functions, compile this data, or even hijack some of the vehicle's functions.
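As a small illustration of the kind of data exposed through the diagnostic port, the sketch below decodes two standard OBD-II parameters (engine RPM and vehicle speed) from raw mode 01 responses. The request/response layout and scaling follow the published SAE J1979 conventions; the byte values shown are made up for the example.

```python
# Minimal sketch: decoding standard OBD-II "mode 01" responses read from the
# diagnostic port. The response format and scaling below follow SAE J1979;
# the raw bytes here are made up for illustration.

def decode_rpm(response: bytes) -> float:
    """Decode engine RPM from a mode 01, PID 0x0C response (41 0C A B)."""
    assert response[0] == 0x41 and response[1] == 0x0C, "not a PID 0x0C reply"
    a, b = response[2], response[3]
    return ((a * 256) + b) / 4.0          # standard scaling: ((A*256)+B)/4

def decode_speed(response: bytes) -> int:
    """Decode vehicle speed (km/h) from a mode 01, PID 0x0D response."""
    assert response[0] == 0x41 and response[1] == 0x0D, "not a PID 0x0D reply"
    return response[2]                     # PID 0x0D is a single byte in km/h

# Example: a hypothetical reply of 41 0C 1A F8 corresponds to 1726 RPM.
print(decode_rpm(bytes([0x41, 0x0C, 0x1A, 0xF8])))   # 1726.0
print(decode_speed(bytes([0x41, 0x0D, 0x3C])))       # 60 km/h
```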

Telematics Control Units

Telematics Control Units (TCUs) use GPS receivers and wireless interface modules that allow two-way communication outside the vehicle. They control functions such as remote vehicle diagnostics; remote operations like start, stop, door lock, door unlock, and alerts; automatic crash notification; emergency calling; vehicle locating and monitoring; and geo-fencing. Cars like the new Hyundai Elantra can even be started via a voice command to a smartphone or smartwatch. This also means there is the potential for an outside entity, such as the government or a hacker, to access these functions without the owner's knowledge or control.

Telematics Operations Centre

The Telematics Operations Centre is the network hub that communicates with Telematics Control Units and other information sources and delivers telematics services back to the vehicle, the end user, or the application provider. It performs functions such as network security, fault management, configuration, and customer relationship management.

Call Center

The call center provides real-time voice assistance to vehicle occupants. Customer service representatives can access customer and vehicle information from the Telematics Operations Centre. An example of this is GM's OnStar, a service that utilizes microphones installed within the vehicle. The government, therefore, could activate those microphones without the knowledge or consent of the vehicle's owner or operator and secretly record conversations. These recorded conversations could then be used as evidence in a case and, if necessary, enhanced using audio forensic services for use in court.

Service and Content Providers

Most technology manufacturing companies use third-party sources such as Pandora and Spotify to provide content or entertainment services. Such entertainment services use the Telematics Operations Centre to reach the dashboard.

Aftermarket Telematic Systems

Aftermarket systems generally use devices attached to the OBD-II port, rather than units built into the vehicle dashboard, to connect to the internet. They rely on mobile devices or a separate interface mounted on the dashboard.

Wireless Communication Technologies

Various wireless technologies, such as personal area network systems using Bluetooth or Wireless USB, Dedicated Short Range Communication, and cellular communications, are used to turn cars into moving internet hotspots. They link products such as Mojio, Dash, public safety-related Intelligent Transportation Systems, the Operations Centre, and internet-streamed radio and video applications. Because such devices connect to the internet, they also have the potential to be accessed or hacked by a third party, such as the government. Once in, that party may be able to gain access to the vehicle operator's phone and phone history, driving history, internet usage, and more.

Cloud Computing and Data Analytics

With an ever-growing number of vehicles on the road, large quantities of data are being created by all of these vehicles' computers and components. Cloud computing is used to store this data and provide sophisticated processing abilities; the cloud also stores the various apps we use in our cars. Data analytics is then applied to the vast database collected in order to draw meaningful patterns and conclusions.

Intelligent Transportation Systems

Intelligent Transportation Systems use large amounts of representative data coming in from various sources, such as vehicles, roadside infrastructure, pedestrians, cyclists, and other physical things on the roadway, connected through wireless communication. Mobile sensors like GPS, radar, laser, video, lights, and thermal sensors, along with fixed sensors like traffic counters, cameras, and weather instruments, are used together with Dedicated Short Range Communication. All this data is processed and made available in real time to the parties or components that need it (such as traffic-regulating systems).

Our Cars May Be Used Against Us

The Hyundai Elantra mentioned above is a very sophisticated piece of technology. It shows that cars are no longer simply a way to get us from point A to point B; they are now complex computers processing millions of pieces of information every minute they are on the road. Many of these processes are vulnerable, however. They can be hacked, or simply accessed, by the government, the manufacturer, or someone else with nefarious intentions. Although some of the data is harmless and benign, it is concerning to imagine that the government could access a car's microphone and listen in on a private conversation (not unlike what can be done with smartphones). If this data or these conversations are recorded, the resulting audio evidence could be used in court. Once the evidence has been collected, audio forensic services may be invaluable for processing, evaluating, or enhancing it so that it can be used effectively in your case.


How a Camera Lens Affects Forensic Video Enhancement

The type, style, and quality of the lens used by a digital camera will affect the quality of the still images or videos it records, as well as the forensic video enhancement techniques utilized.

A camera lens, like the one pictured above, actually consists of many lens elements. The lens affects the quality of a still image by controlling the amount of light that reaches the image sensor. Other characteristics of a camera that affect image quality are field of view, iris, f-number (or f-stop), focusing, mount, and lens quality. Knowledge of the types and characteristics of camera lenses is of great importance during forensic video analysis and forensic still image analysis. Using forensic video enhancement, different techniques can be applied to low-quality images or videos to highlight important elements of the video or still image evidence that were not captured well due to factors such as inferior lens quality, mis-calibrated equipment, or poor lighting conditions. In this post we will discuss the important characteristics of a camera lens that affect the quality of still images and videos.

The three main types of lenses are:

  1. Fixed lens,
  2. Varifocal lens and
  3. Zoom lens.

Fixed lens: In a fixed lens, the focal length is fixed and only one field of view is available. In other words, a fixed lens is not capable of zooming in or out. Most surveillance cameras today have fixed lenses, which makes forensic video enhancement crucial for clarifying the facts and details of a case.

Varifocal lens: A varifocal lens offers a range of focal lengths and thus different fields of view (i.e., it can zoom in and out, albeit typically minimally). However, the field of view must be changed manually. Focal lengths in the range of 3 to 8 millimeters are typical for varifocal lenses on network or surveillance cameras.

Zoom lens: A zoom lens also offers different fields of view. With a zoom lens, however, refocusing is not required when the field of view is changed within a specified range. If lens adjustments are required, they can be made either manually or by motor.

Field of View (Focal Length)

Focal length is the distance between the entry lens and the image sensor, the point where all light rays converge. Focal length and the size of the image sensor together determine a camera's field of view: the shorter the focal length, the wider the field of view. Typical image sensor sizes are 1/4, 1/3, 1/2, and 2/3 inch. Telephoto lenses have a focal length of more than 70 mm and are used to capture objects that are very far away or small in size. They work well at long distances and produce less geometric distortion. The disadvantage of this type of lens, however, is that it gathers less light; if less light gets through the lens, the image may not be as bright or clear.

Wide-angle lenses have a focal length of less than 35 mm. They are used when a wide field of view is required. Their typical advantages are good depth of field and better low-light performance; their disadvantages are increased geometric distortion and poor long-distance viewing.
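To make the focal-length and field-of-view relationship above concrete, here is a minimal sketch using the standard approximation FOV = 2 * atan(sensor_width / (2 * focal_length)). The sensor width and focal lengths below are illustrative values, not measurements from any particular camera.

```python
# Minimal sketch of the relationship described above: field of view computed
# from sensor size and focal length. The values are illustrative only.
import math

def field_of_view_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Horizontal field of view in degrees."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

sensor_width = 4.8                      # approx. width of a 1/3-inch sensor, in mm
for f in (3, 8, 35, 70):                # wide-angle through telephoto focal lengths
    print(f"{f} mm lens -> {field_of_view_deg(sensor_width, f):.1f} degrees")
```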

Aperture or Iris Diameter

The aperture diameter, or iris, determines the amount of light that passes to the image sensor: a bigger iris results in more light reaching the sensor. Another factor that determines the amount of light reaching the image sensor is exposure time. Low-light conditions call for a bigger iris and a longer exposure time, whereas a brighter environment calls for a smaller iris and a shorter exposure time. To capture quality images, an optimal amount of light should reach the image sensor. Getting the amount of light right is key: too much light results in overexposure, and too little light results in dark images. Shutters and iris adjusters are therefore used to control the exposure time and aperture, respectively.

Depth of Field

Depth of field is the distance in front of and beyond the point of focus within which objects simultaneously appear sharp. Focal length, iris diameter, and the distance from the camera to the subject together determine the depth of field: a longer focal length means a shallower depth of field, and a wider iris likewise results in a shallower depth of field. Understanding depth of field is often very important when evaluating surveillance video in order to determine the best forensic video enhancement techniques to use.
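The sketch below is a rough illustration of those relationships, using the common hyperfocal-distance approximation for the near and far limits of sharpness. The circle-of-confusion value is an assumed constant for a small sensor, and the focal lengths and distances are illustrative.

```python
# Rough sketch of how focal length, f-number, and subject distance interact to
# set depth of field, using the standard hyperfocal-distance approximation.
# The circle-of-confusion value is an assumed constant for a small sensor.

def depth_of_field_m(focal_mm: float, f_number: float, subject_m: float,
                     coc_mm: float = 0.005) -> tuple[float, float]:
    """Approximate near and far limits of acceptable sharpness, in meters."""
    f = focal_mm / 1000.0
    coc = coc_mm / 1000.0
    hyperfocal = (f * f) / (f_number * coc) + f
    near = (hyperfocal * subject_m) / (hyperfocal + (subject_m - f))
    far = (hyperfocal * subject_m) / (hyperfocal - (subject_m - f)) \
        if subject_m < hyperfocal else float("inf")
    return near, far

# Longer focal length or wider iris (smaller f-number) -> shallower depth of field.
print(depth_of_field_m(focal_mm=4, f_number=2.0, subject_m=5))    # wide lens, deep DOF
print(depth_of_field_m(focal_mm=70, f_number=2.0, subject_m=5))   # telephoto, shallow DOF
```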

Determining the optimal settings to achieve an ideal depth of field is complicated, but all modern digital cameras do this automatically. Depending on the lighting conditions, however, the following settings are recommended to achieve an ideal depth of field and higher-quality images:

Sunshine or lots of light: In such situations, use a small iris opening and a shorter exposure time to achieve better-quality images and a greater depth of field.

Overcast or cloudy sky: To obtain better-quality images of moving objects in overcast lighting conditions, reduce the exposure time; this will also reduce the depth of field. If you increase the exposure time instead, blurring of moving objects will increase. However, if capturing moving objects is not the goal, increasing the exposure time will improve the depth of field and the overall quality of the images.

Evening and night: In low-light situations, increase the light sensitivity of the image sensor. However, this will increase the noise in the images.

Different recording conditions, as mentioned above, will result in different types and qualities of video. A forensic video expert will take all of this into account when evaluating and analyzing a video to determine the best forensic video enhancement techniques to use on the evidence.


How Image Sensors Impact Video Forensics

The type, quality, and technology behind a digital camera's image sensor play a major role in video forensics and in how digital image and video enhancements are completed.

The technology of video forensics is greatly affected by the technology used to record a video or capture still images. The image sensor is a main component of every digital camera, and knowledge of image sensor technology and image scanning techniques is essential in the field of video forensics. An image sensor consists of many photosites, which correspond to the pixels in an image. Each photosite registers the light it receives and converts that light into a corresponding number of electrons, which are interpreted as shades of gray: bright light produces more electrons, and dim light produces fewer. Different methods (the RGB, CMY, and CMYG color systems) are used to register colors in digital cameras. A CMY system produces better light sensitivity than RGB. CCD (charge-coupled device) image sensors use the CMYG color system, whereas progressive scan image sensors use the RGB color system. There are two main technologies used to build image sensors:

  1. CCD (Charge Coupled device)
  2. CMOS (Complementary Metal Oxide Semiconductor)

CCD technology: In CCD technology, charges from the pixels are converted to voltage levels, buffered, and then sent off the CCD chip as an analog signal. Because every pixel's charge passes through a limited number of output nodes, image quality is very high. These sensors have been used for more than 35 years and are more sensitive to light than CMOS sensors. However, CCD sensors are expensive and consume much more power than CMOS sensors. Think of the video forensics involved in the Rodney King video.

CMOS technology: In CMOS technology, amplifiers and analog-to-digital converters are integrated into the image sensor itself. This allows better integration and more advanced functions. Recent advancements in CMOS technology have improved the quality of images produced with these sensors, and those improvements have helped the field of video forensics advance tremendously, especially in the last few years.

There are, however, many other characteristics of an image sensor that affect image quality. For example, the size of the image sensor and the size of its pixels both matter: a larger image sensor with more pixels will produce higher-resolution images with greater detail, and a larger pixel will store more electrons from light exposure and thus be more sensitive to light. Another factor is the dynamic range of the sensor: an image sensor with high dynamic range will capture both dark and bright objects without introducing "noise" into the image.

Image Scanning Techniques

The information produced by image sensors is read and displayed using image scanning techniques. There are two prevalent techniques:

  1. Interlaced scanning and
  2. Progressive scanning

Interlaced scanning: Interlaced scanning was invented in the 1930s and is used today in CCD-based image sensors and older analog cameras. An interlaced image from a CCD camera is captured as two fields, one containing the odd lines and one containing the even lines. At any given time only half the image lines are transmitted (first the odd lines, then the even lines), and the two fields are combined to form a complete image. To allow human visual and cognitive functions to interpret these fields as a complete image rather than as separate sets of lines, the fields are refreshed at a certain frequency, or number of fields per second. Interlacing has been used extensively in analog cameras, television, and VHS video for a very long time.

Deinterlacing techniques: There are many deinterlacing techniques used to show interlaced video on television or computer screens. An advanced technique, called motion-adaptive deinterlacing, produces sharp, full-resolution images by blending fields and estimating motion. Another method uses line doubling, or interpolation, to first discard one of the fields and then double the lines of the remaining field. This method removes the comb effect but typically has a negative impact on image quality.
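The sketch below shows the basic line-doubling idea described above: keep one field and rebuild the missing lines by interpolating between neighbors. It is only an illustration, assuming a single grayscale frame as a NumPy array; motion-adaptive deinterlacers are far more sophisticated.

```python
# Minimal sketch of line-doubling deinterlacing: keep the even field and fill
# the missing lines by averaging the neighboring even lines. Real deinterlacers
# (e.g., motion-adaptive ones) are far more involved.
import numpy as np

def deinterlace_line_double(frame: np.ndarray) -> np.ndarray:
    """Discard the odd field and rebuild it by averaging adjacent even lines."""
    out = frame.astype(np.float32).copy()
    height = frame.shape[0]
    for row in range(1, height, 2):                 # odd (discarded) lines
        upper = out[row - 1]
        lower = out[row + 1] if row + 1 < height else out[row - 1]
        out[row] = (upper + lower) / 2.0            # simple vertical interpolation
    return out.astype(frame.dtype)

# Example with a synthetic grayscale frame (480 lines x 640 pixels).
interlaced = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
progressive = deinterlace_line_double(interlaced)
```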

Progressive scanning: Progressive scanning can be used with both CCD and CMOS image sensors. In progressive scanning, values are read from every pixel of the image sensor and the data is scanned sequentially, producing a full-frame image that is then sent over a network or stored. Because progressive scanning produces no interlacing flicker, it captures moving objects much better and is preferred in video surveillance applications.


How Light Affects Video Recording

Different lighting conditions affect a still image or video recording in dramatic and important ways.

Light has a tremendous effect on the quality of a video recording or still image. For instance, details and colors of an image get lost when there is too much backlight, whereas with correct exposure and additional frontal light, details not visible before will emerge in the same image. A video forensic expert should understand all aspects of lighting and how light affects a still image or video recording; this understanding allows for better forensic video analysis and for applying the right video enhancement techniques. In this post we will briefly explain the different forms and directions of light and how they affect image quality. We will also explain color temperature and invisible light.

Forms and Directions of Light

To capture a good surveillance video recording, it is important to ensure that the target area has a proper source and direction of light. If lighting conditions are not ideal and the resulting video may be a critical piece of evidence, then the video enhancement services of an experienced forensic video expert may be necessary to better see what happened in the recording.

The most common sources or forms of light in a scene are:

  • Direct Light: Direct light comes from a point source such as sunlight or from a small bright object such as a spotlight. Whatever the source, direct light creates sharp contrast, with strong highlights and shadows.
  • Diffuse Light: Diffuse light can come from sources such as a gray sky, an illuminated screen, a diffuser, or light bouncing off a ceiling or other surface. With diffuse light, the object from which the light comes is typically much larger than the subject. Diffuse light therefore decreases contrast; it also reduces brightness, color, and the level of detail captured.
  • Specular Reflection: In specular reflection, light comes from one direction and bounces in another after reflecting off a smooth surface. For example, specular light is generated when light reflects off water, glass, metal, or a similarly reflective material. This type of light also reduces visibility, but it can be reduced by using a polarizing filter.

Lighting Direction

The direction from which light comes can significantly affect the level of detail obtained. The main directions from which light can fall on the subject are:

  • Front light: Light coming from behind the camera is ideal and will result in proper illumination of the scene.
  • Side light: Light falling from the side is good for capturing architectural detail but will create shadows.
  • Back light: Backlight occurs when light falls directly on the camera lens. Details and colors of the subject are often lost in images with a high amount of backlight.

Color Temperatures in Light

Different types of light have different color temperatures and therefore affect the colors of an image differently. Color temperature is measured in kelvin (K) and is based on the fact that heated objects radiate light. Red has a lower color temperature and is the first visible light that radiates from a heated object; the color turns blue as the temperature increases. Standard light bulbs, for example, have a color temperature of about 3,000 K, which is lower than daylight (the sun is approximately 6,500 K), so they will often create a brown, yellowish, or reddish tone in an image. Similarly, in an industrial warehouse setting where long fluorescent tubes provide unobtrusive light, images will appear more green and dull in tone.

The human brain easily copes with and adjusts our perception of images under different kinds of light. Cameras, however, require a special system, called white balance, to adapt to different sources of local illumination.

Invisible Light

Light with a color temperature between roughly 3,000 and 10,000 K is visible to the human eye, while objects with color temperatures above or below these limits generate invisible wavelength bands called ultraviolet and infrared. Near-infrared light (about 700 up to about 1,000 nanometers) can be blocked when a camera utilizes an IR filter. CCD (charge-coupled device) and CMOS (complementary metal-oxide-semiconductor) cameras are insensitive to UV light; analog cameras, which are sensitive to UV light, are fitted with a UV filter in such situations.


Evolution of Video Surveillance Systems

Video Surveillance Systems Continue to Improve

The video surveillance industry is over 35 years old. It has evolved from purely analog surveillance cameras and recording devices, which recorded video to tape through VCRs (videocassette recorders), to fully digital, network-based video surveillance systems that use high-quality, high-definition network cameras and store video on servers, in the cloud, or on DVRs (digital video recorders). This change has had a deep impact on video forensics. The evolution was driven by improving computer and camera technology along with an ever-increasing demand for better surveillance systems. In between the analog and fully networked digital systems sit the hybrid systems that are still in use. This post will discuss the defining features of the most prominent video surveillance systems and outline the evolution of the modern video surveillance industry.

Some of the reasons that led to improvements in video surveillance technology are:

  1. Better image quality
  2. Simplified installation and maintenance
  3. More secure and reliable technology
  4. Longer retention of recorded video
  5. Reduction in costs
  6. Size and scalability
  7. Remote monitoring capabilities
  8. Integration with other systems, and
  9. More built-in system intelligence.

VCR Based Analog CCTV Surveillance Systems

Traditional VCR-based analog CCTV surveillance systems used black-and-white analog CCTV cameras to capture video. These analog cameras were typically connected to the VCR through a coaxial cable, and the VCR recorded video onto the same VHS cassettes used in a home VCR (remember Blockbuster Video?). These cassettes had many problems, one of the largest being that they couldn't store more than 8 hours of video and thus needed either regular replacement or constant reuse. The surveillance industry wanted storage that was larger and more scalable, which led to the introduction of time-lapse mode in CCTV video recording. In time-lapse mode, instead of every subsequent image being recorded, only every second, fourth, eighth, or sixteenth image was recorded. This storage-saving advance significantly extended the duration of recorded video, and the industry adopted recording rates such as 15 fps (frames per second), 7.5 fps, 3.75 fps, and 1.875 fps. To record even more cameras, people then began using improved devices such as quads and multiplexers.
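As a quick back-of-the-envelope illustration of why time-lapse mode mattered, the snippet below estimates how long a single tape lasts at each of the reduced recording rates mentioned above. The 8-hour tape capacity comes from the post; the 30 fps full-rate baseline is an assumption made for the arithmetic.

```python
# Quick illustration of time-lapse recording: how long one VHS tape lasts at
# reduced recording rates. The 8-hour capacity comes from the post above; the
# 30 fps "full rate" baseline is an assumption for the math.
FULL_RATE_FPS = 30.0
TAPE_HOURS_AT_FULL_RATE = 8.0

for fps in (15, 7.5, 3.75, 1.875):
    hours = TAPE_HOURS_AT_FULL_RATE * (FULL_RATE_FPS / fps)
    print(f"{fps:>6} fps -> one tape covers about {hours:.0f} hours "
          f"({hours / 24:.1f} days)")
```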

DVR (Digital Video Recording) Based Analog CCTV Surveillance Systems

In the mid-1990s, DVRs began replacing the traditional VCR recording systems. The major advantages of DVRs over VCRs were improved video quality and increased storage space; another was that people could easily and quickly scan through surveillance video. The cameras being used, however, were still traditional analog cameras: the video was digitized by the DVR and then stored on hard drives. Because early DVR hard drives were still very expensive and the technology was new, manufacturers did not unite around one recording method, and each used proprietary compression algorithms for storage. This meant that customers were tied to the same manufacturer for the devices used to replay the video. With time, however, the cost of hard drives decreased significantly and compression algorithms such as MPEG-4 became popular.

Network DVR-Based Analog CCTV Systems

With the use of DVRs, it became possible to record video digitally. Equipped with an Ethernet port, a DVR could also transmit digital video over long distances, such as over the internet. Some early DVR systems allowed networked recording and monitoring at the same time, while others only allowed monitoring of the network-transmitted video. Also, some systems required the Windows operating system to be running on a client computer, while others allowed monitoring through a web browser. This networked approach to video surveillance allowed for remote monitoring and remote operation (control) of the surveillance system.

There were, however, and still are, disadvantages to using a DVR. DVR systems were developed by companies using proprietary hardware and software, which made the technology expensive to maintain and upgrade, and made recordings difficult to share with others, such as law enforcement. Furthermore, these systems have inherent vulnerabilities to viruses and limits to scalability.

Video Encoder-Based Network Video Systems (NVRs)

A network video surveillance system allows continuous transport of video streams over an IP network. The first such systems became possible with the advent of video encoders and video servers. In these systems, surveillance video captured by analog cameras is digitized and compressed by video encoders, which send the compressed video over an IP network, through a network switch, to a video server. The video server is then used to record and monitor the surveillance video.

Network Camera Based Network Video Systems

A network camera, also called an IP camera, sends surveillance video over an IP network and contains no analog components. It has built-in computing power, including storage for video recordings and internet connectivity, and can come with cutting-edge video analytics preinstalled. The images are digitized inside the camera and remain digital throughout the system. Network-based video cameras provide the highest degree of clarity, and this type of video surveillance is very prevalent and inexpensive today. In fact, companies such as Samsung, Sony, Vivotek, and many others sell these cameras (which have their own internal storage), and the quality of the surveillance video they create is excellent.

 


Image Processing for Forensic Video Enhancement Services

There are various image processing techniques that can be used to improve the quality of images. This post discusses how today's digital cameras and computer software include built-in image processing capabilities, and covers simple techniques that can be used for forensic video enhancement.

Exposure

Just as the human eye adapts to different lighting conditions, a camera is able to adjust to changes in light. A camera achieves this using three parameters:

  1. Exposure time, or the time the image sensor is exposed to light.
  2. Iris, or aperture diameter, which also manages the light coming in through the lens.
  3. Gain, or the digital amplification of the image level, used to electronically brighten images.

Increasing the gain of an image (number 3 above) also increases the "noise" in the image, but it often helps the eye see certain details of the image more clearly. Another way to brighten the image is to increase the exposure time (number 1 above), but unfortunately this can also require reducing the frame rate. All of these parameters are checked and calculated automatically by the camera's built-in tools (as in most modern digital cameras, surveillance cameras, etc.). However, if the resulting image has too much noise, a forensic video expert can use forensic video enhancement services to improve image quality.
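The sketch below illustrates the gain trade-off described above with NumPy: multiplying pixel values brightens a dark frame, but the sensor noise is amplified by the same factor. The synthetic frame and gain value are purely illustrative.

```python
# Minimal sketch of digital gain (parameter 3 above): multiplying pixel values
# brightens the image but amplifies sensor noise by the same factor.
import numpy as np

def apply_gain(image: np.ndarray, gain: float) -> np.ndarray:
    """Brighten an 8-bit image by a gain factor, clipping at the sensor maximum."""
    boosted = image.astype(np.float32) * gain
    return np.clip(boosted, 0, 255).astype(np.uint8)

# Simulate a dark, noisy frame: a dim signal plus random sensor noise.
rng = np.random.default_rng(0)
signal = np.full((480, 640), 20.0)                       # dim scene
noise = rng.normal(0, 3, size=signal.shape)              # sensor noise
dark_frame = np.clip(signal + noise, 0, 255).astype(np.uint8)

brightened = apply_gain(dark_frame, gain=6.0)            # noise is amplified 6x too
print(dark_frame.std(), brightened.std())                # the spread grows with the gain
```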

Backlight Compensation

Too much backlight is often difficult for a camera to handle. For example, if a section of the image is very bright, the camera might conclude that there is too much light in the entire image and automatically reduce the iris opening or decrease the exposure time. In that case the most important area, the area of interest, may come out too dark and with less usable detail, so important details can be lost in the dark. Backlight compensation addresses this by ignoring specific areas of the image containing strong light. Using backlight compensation, the camera will expose properly for the darker areas of the image, and the bright areas will be overexposed. Cameras can also calculate the required exposure by determining which area of an image has the maximum exposure value. Knowing which method was applied, whether automatically by the camera or manually by an operator, is often vital when considering which forensic video enhancement techniques to use.
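Here is a minimal sketch of the metering idea behind backlight compensation: estimate scene brightness while ignoring the brightest pixels (such as a backlit window), so the darker area of interest drives the exposure decision. The percentile threshold and the random stand-in frame are assumptions for the example.

```python
# Minimal sketch of backlight-compensation metering: compute scene brightness
# while ignoring the brightest pixels, so a backlit window does not dominate
# the exposure decision.
import numpy as np

def metered_brightness(image: np.ndarray, ignore_above_percentile: float = 90) -> float:
    """Mean brightness computed only over pixels below a brightness percentile."""
    threshold = np.percentile(image, ignore_above_percentile)
    return float(image[image <= threshold].mean())

frame = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)  # stand-in frame
plain_meter = frame.mean()                          # naive average, skewed by bright areas
blc_meter = metered_brightness(frame)               # ignores the brightest 10% of pixels
print(plain_meter, blc_meter)
```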

Wide Dynamic Range

In scenes that combine extremely bright and extremely dark areas, an advanced feature called wide dynamic range (WDR), or high dynamic range (HDR), is helpful. It uses techniques that handle a wide range of lighting conditions, such as a shadowed person standing in front of a brightly backlit window. The true dynamic range of a scene is the range of light levels from the darkest object to the brightest object, and this technique applies different exposure levels to different areas of the image (a simple sketch of the idea follows the list below). Although this technique often works well, using wide dynamic range can pose certain problems:

  1. Noise levels can differ from region to region; dark regions in particular can have a high level of noise.
  2. In images with many different lighting conditions, large artifacts can show up at the boundaries between areas with different lighting.
  3. Because this technique often requires taking two pictures of the scene in quick succession and combining the results, quick movement between the two captures can leave strange errors in the final image as objects change position.
  4. Colors can be very weak, and regions with different light may end up with very low dynamic range.
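As promised above, here is a minimal sketch of the underlying idea: combine a short and a long exposure of the same scene, weighting each pixel toward whichever exposure is better exposed (closer to mid-gray). Real WDR/HDR pipelines are considerably more complex; the stand-in frames here are random data.

```python
# Minimal sketch of the wide dynamic range idea: blend a short and a long
# exposure of the same scene, preferring whichever exposure is closer to
# mid-gray at each pixel. Real WDR/HDR pipelines are far more complex.
import numpy as np

def fuse_exposures(short_exp: np.ndarray, long_exp: np.ndarray) -> np.ndarray:
    """Per-pixel weighted blend of two 8-bit exposures of the same scene."""
    def well_exposedness(img):
        # Weight peaks at mid-gray (128) and falls off toward clipped black/white.
        return 1.0 - np.abs(img.astype(np.float32) - 128.0) / 128.0

    w_short = well_exposedness(short_exp) + 1e-6
    w_long = well_exposedness(long_exp) + 1e-6
    fused = (w_short * short_exp + w_long * long_exp) / (w_short + w_long)
    return np.clip(fused, 0, 255).astype(np.uint8)

# Usage with two aligned frames of the same scene (stand-in data here).
short_exp = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
long_exp = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
merged = fuse_exposures(short_exp, long_exp)
```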

Bayer De-mosaicing

All digital cameras use a process called de-mosaicing to convert the raw image into a high-quality color image. Because each pixel records only the illumination behind one color filter, values from neighboring pixels of the other colors are interpolated to calculate the actual color of that pixel. This is done by a de-mosaicing algorithm, which converts the raw image into a high-quality color image.
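The sketch below shows de-mosaicing a raw Bayer frame with OpenCV. The Bayer pattern assumed here (BG ordering), the frame dimensions, and the file names are illustrative; the correct conversion code depends on the specific sensor's filter layout.

```python
# Minimal sketch of de-mosaicing a raw Bayer frame with OpenCV. The Bayer
# pattern (assumed BG-ordered here) and the frame size depend on the sensor,
# so the conversion code would need to match the camera in question.
import cv2
import numpy as np

raw = np.fromfile("sensor_frame.raw", dtype=np.uint8)   # hypothetical raw dump
raw = raw.reshape((1080, 1920))                          # single-channel color mosaic

color = cv2.cvtColor(raw, cv2.COLOR_BayerBG2BGR)         # interpolate full color per pixel
cv2.imwrite("demosaiced.png", color)
```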

White Balance

After color interpolation, white balance is performed to ensure correct color balance, so that neutral colors (black, gray, white) stay neutral regardless of the illumination. In network cameras, indoor and outdoor settings are used to manage the white balance. In cameras with an automatic white balance function, two or three different gain factors are used to amplify the red, green, and blue signals.
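The post does not name a specific algorithm, so the sketch below uses one common automatic approach, the "gray world" assumption, to illustrate how per-channel gain factors rebalance the red, green, and blue signals.

```python
# Minimal sketch of one common automatic white balance approach (the
# "gray world" assumption): scale each channel so the scene averages out
# to neutral. Illustrative only; the stand-in frame is random data.
import numpy as np

def gray_world_white_balance(image_bgr: np.ndarray) -> np.ndarray:
    """Apply per-channel gains so the mean of each channel matches the overall mean."""
    img = image_bgr.astype(np.float32)
    channel_means = img.reshape(-1, 3).mean(axis=0)     # mean of B, G, R
    gains = channel_means.mean() / channel_means        # gain factor per channel
    balanced = img * gains                              # broadcast over the last axis
    return np.clip(balanced, 0, 255).astype(np.uint8)

frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)  # stand-in frame
corrected = gray_world_white_balance(frame)
```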

Sharpening and Contrast

Digital images can be enhanced using digital sharpening and contrast enhancement. Digital sharpening increases contrast at the edges by lightening the light pixels and darkening the dark pixels. Contrast enhancement affects the overall image by changing how the original pixel values are mapped to the display.
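Below is a minimal sketch of both adjustments: unsharp-mask sharpening (adding back the detail removed by a Gaussian blur) and a simple linear contrast stretch, using OpenCV. The sigma, amount, and file names are illustrative assumptions.

```python
# Minimal sketch of the two adjustments described above: unsharp-mask
# sharpening and a simple linear contrast stretch. Parameter values are
# illustrative, not forensic recommendations.
import cv2

def unsharp_mask(image, sigma: float = 2.0, amount: float = 1.0):
    """Sharpen by adding back the detail removed by a Gaussian blur."""
    blurred = cv2.GaussianBlur(image, (0, 0), sigma)
    return cv2.addWeighted(image, 1.0 + amount, blurred, -amount, 0)

def contrast_stretch(image):
    """Remap pixel values so the darkest becomes 0 and the brightest becomes 255."""
    return cv2.normalize(image, None, 0, 255, cv2.NORM_MINMAX)

frame = cv2.imread("surveillance_still.png")      # hypothetical input image
if frame is not None:
    enhanced = contrast_stretch(unsharp_mask(frame))
    cv2.imwrite("surveillance_enhanced.png", enhanced)
```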


Facial Recognition Using 3D Mug Shots – Future of Forensic Surveillance

Turning a 2D photograph into 3D for Facial Recognition

Facial recognition is an essential element of video forensics. The human visual cognition system is attuned to natural facial recognition; in fact, we easily recognize human faces even in tough visual environments, such as bad lighting or different pose variations. Many technologies have also been developed that allow computers to recognize faces. Although many of these facial recognition systems have been in use for a long time, they are typically based on 2D images, and 2D facial recognition poses many challenges directly related to the variability of its data. When a surveillance camera records a face, it is usually at an odd angle, not the standard straight-on image captured in a standard ID such as a driver's license or passport photo. A 2D image-based facial recognition system is an inadequate tool for matching faces captured from different angles. Other challenges to facial recognition include pose variations, bad lighting conditions, occlusions, and facial expressions.

To overcome these challenges, scientists have been working aggressively to develop computer vision systems that can process and analyze 3D faces much as human vision does. Also called the 3D mug shot, this interesting technology has recently been adopted by police officers in Tokyo. It creates a map of a face that can be matched against surveillance images, even those captured at odd angles. From April 2016, Tokyo's 102 Metropolitan Police Department stations will place 3D cameras that capture faces and record unprecedented facial information for their 3D identification stations. Besides forensic and surveillance purposes, these systems can also be used in biometric machines, human-computer interaction (HCI), facial surgery, video communications, and 3D animation.

Basic concept of 3D Facial Recognition

Computers can now adapt facial recognition software to compare against different facial expressions

3D facial recognition programs attempt to recover 3D facial shapes from camera images and reproduce their variations, including under multiple pose and lighting conditions. The fields of computer vision and computer graphics are closely related to this technology, which requires advanced techniques for capturing and processing human geometry; such programs must be capable of 3D reconstruction of geometric shapes. Back in 2010, computerized tests were run on about 1.6 million mug shots to pick out individuals, and the advanced algorithms achieved an accuracy of 92 percent. Tests were also run on photos of people who were not looking directly at the camera. With such advanced technologies, video forensic experts can use forensic video analysis to convert low-quality surveillance images into powerful evidence with unprecedented accuracy.

Geometric and Topological Aspects of the Human Face

Resistance is Futile

Some notable geometric and topological features of the human face serve as distinguishing features of any face. In effect, they pose both a challenge and an opportunity for the field of 3D face recognition. The following points discuss the influence of these aspects and the challenges they pose to 3D face recognition:

  1. Changes in the human face: The human face can change as a result of factors such as age, weight loss, weight gain, and facial expressions. Because the distinguishing 3D shape variations of the human face among different individuals are statistically small, these changes pose serious challenges for 3D facial recognition. Besides changing the geometry, some changes, such as opening the mouth, can alter the topology of the 3D facial structure as well. However, the following three aspects of the human face have helped overcome these challenges and supported the development of rigid approaches to 3D face recognition systems:
    1. The anatomical structure of the face remains unchanged, especially under changes related to expression.
    2. Some facial regions, such as the nose and forehead, are less affected by changes in expression; these are also called the semi-rigid regions.
    3. Depending on the facial expression, some facial regions beyond the semi-rigid regions will also be less deformed.

  2. Fiducial points in the human face: Some detectable fiducial points in a human face are the eye corners, the midpoint between the eyes, the tip of the nose and its two lower corners, the furthest point of the chin, and the mouth corners. These fiducial points are often used to establish point-to-point correspondence between two or more facial scans, and they act as standard locations from which other local features are extracted. This helps in the process of facial feature matching and recognition. It can, however, pose a challenge for deformed surfaces; in such cases the issue can be resolved by first establishing the relationship between these fiducial points, as in the sketch below.
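To illustrate the point-to-point correspondence idea, here is a minimal sketch that rigidly aligns two sets of fiducial points from two scans of the same face using the Kabsch algorithm (SVD-based rotation estimation). The landmark coordinates are made-up stand-ins, and real systems must also handle noise, missing points, and non-rigid deformation.

```python
# Minimal sketch of using fiducial points to align two facial scans: given the
# same landmarks located in both scans, compute the rigid rotation/translation
# that maps one onto the other (the Kabsch algorithm). Coordinates are made up.
import numpy as np

def rigid_align(source_pts: np.ndarray, target_pts: np.ndarray):
    """Return rotation R and translation t so that R @ source + t ~= target."""
    src_centroid = source_pts.mean(axis=0)
    tgt_centroid = target_pts.mean(axis=0)
    src_centered = source_pts - src_centroid
    tgt_centered = target_pts - tgt_centroid

    # Covariance matrix and SVD give the optimal rotation (Kabsch algorithm).
    covariance = src_centered.T @ tgt_centered
    u, _, vt = np.linalg.svd(covariance)
    d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against reflections
    rotation = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    translation = tgt_centroid - rotation @ src_centroid
    return rotation, translation

# Hypothetical fiducial points (x, y, z) from two scans of the same face.
scan_a = np.array([[0.0, 0.0, 0.0], [3.0, 0.2, 0.1], [1.5, -2.0, 1.0],
                   [1.5, -4.5, 0.3], [0.5, -3.0, 0.8], [2.5, -3.0, 0.8]])
angle = np.radians(20)
rot_z = np.array([[np.cos(angle), -np.sin(angle), 0],
                  [np.sin(angle),  np.cos(angle), 0],
                  [0, 0, 1]])
scan_b = scan_a @ rot_z.T + np.array([10.0, 5.0, 2.0])  # same face, rotated and shifted

R, t = rigid_align(scan_a, scan_b)
print(np.allclose(scan_a @ R.T + t, scan_b))             # True: the scans now correspond
```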
