
Spectral Cameras

A target is imaged by first determining the height of the scene that the spectral imaging system is exposed to. This height is determined by the focal length of the lens, the imaging slit width and the distance to the target. If the height works out to be 0.5 mm, we must take a spectral frame, move either the imaging spectrometer or the target 0.5 mm, then take the next spectral frame, repeating this process until the entire scene has been imaged. If we move less than 0.5 mm we will be oversampling the scene, repeating data already gathered from a single point. If we move further than 0.5 mm we will be undersampling, missing data from the target we are imaging.
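
As a rough illustration (the scene length and slice height below are assumed for the example, not taken from any specific camera), the step size and frame count for a scan can be worked out like this:

    # Minimal sketch with hypothetical numbers: how many spectral frames a
    # push-broom scan needs if each frame covers a 0.5 mm slice of the target.
    scene_length_mm = 100.0   # total length of the target to be scanned (assumed)
    slice_height_mm = 0.5     # height of the scene seen through the slit per frame

    step_mm = slice_height_mm                  # step exactly one slice: no over/undersampling
    frames_needed = int(scene_length_mm / step_mm)
    print(f"Frames required: {frames_needed}")     # 200 frames for a 100 mm target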

Specim offers a number of detailed tutorials on how to set up and begin capturing data with your FX10 or FX17 camera and the 40x20 LabScanner:

 

Assembly: https://www.youtube.com/watch?v=LhenIENAD1A

Setup View: https://www.youtube.com/watch?v=rLKPpkM6Fkk

Adjust View: https://www.youtube.com/watch?v=CdwNS6e2K64

Setting the Scanning Speed: https://www.youtube.com/watch?v=GOfUykJGdvU

Setting the Basic Camera Parameters: https://www.youtube.com/watch?v=bfe7Slz1z0A

Capture View: https://www.youtube.com/watch?v=whJsv1AUObk

This is a simple question with a complicated answer. The imaging speed is determined by:

  • The sensitivity of the camera and the illumination of the target (lower sensitivity or lower light levels require longer integration times)
  • The data transfer capabilities of the camera
  • The pixel depth (an 8 bit pixel is half the data of a 10 or 12 bit pixel)
  • The transfer speed of the camera-to-computer interface (Camera Link is fast, USB is much slower)
  • The computer’s ability to process the incoming data

A fast camera with lots of light can produce more than 100 full spectral frames per second. This means that if the imaging height of the scene is 0.5 mm, you can image more than 50 mm/sec. Handling the data at the computer then becomes a problem, as 100 frames/second results in 50 megabytes of data per second that needs to be processed. A number of compromises can be made, including decreasing the bit depth or the spectral or spatial resolution, depending on the application.
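
As a rough sketch of the arithmetic (the frame dimensions and bit depth below are assumptions for illustration, not the specifications of any particular camera):

    # Assumed values: a 1024-pixel spatial line, 224 spectral bands, 12-bit pixels
    # transferred as 2 bytes, 100 frames per second, 0.5 mm imaged per frame.
    spatial_pixels = 1024
    spectral_bands = 224
    bytes_per_pixel = 2
    frames_per_second = 100
    slice_height_mm = 0.5

    scan_speed_mm_s = frames_per_second * slice_height_mm
    data_rate_mb_s = spatial_pixels * spectral_bands * bytes_per_pixel * frames_per_second / 1e6

    print(f"Scan speed: {scan_speed_mm_s} mm/s")     # 50.0 mm/s
    print(f"Data rate:  {data_rate_mb_s:.1f} MB/s")  # roughly 46 MB/s with these assumptions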

The spectral resolution of the imaging spectrograph is defined by the optics of the prism or grating mechanism and the entrance slit width of the device. The light entering the system is dispersed into its components according to wavelength. For example, Specim’s ImSpector V10-e provides a spectral resolution of 2.8 nm with a 30 µm slit width (depending on the detector and optics).

Some CCD and CMOS detectors have a thin coating on the detector surface causing interference phenomena (like Newton rings) that are seen as horizontal waves. This is an aesthetic problem only and does not interfere with spectral imaging. 

If there is little or no signal above 700 nm, possible causes include:

  • The light source has an infrared cutoff filter, or the fiber optics absorb the light.
  • The camera is equipped with an infrared cut-off filter (hot mirror).
  • The detector has low response (low QE) above 700 nm.
  • The front objective coatings are not designed for use above 700 nm.

If there is little or no signal at short wavelengths, possible causes include:

  • The light source (usually halogen) does not produce much energy at short wavelengths.
  • The camera detector has low response at short wavelengths.
  • There is a lens coating on the front objective or a UV blocking filter present.

If the image is not sharp, possible causes include:

  • The back focal length of the lens is incorrect for the camera (it may not be C-mount).
  • The lens is not focused on your target; use a focus target to set the focus.
  • The objective lens is loose or incorrectly installed.
  • The objective lens is not suited for spectral imaging (low quality, wrong wavelength range, unsuitable coatings).

If the image is dark or blank, check the following:

  • Is the lens cap on the objective lens?
  • Is the lens aperture open?
  • Do you have adequate integration time?
  • Do you have an incompatible light source, i.e. is there an IR cutoff filter?
  • Does the target have high absorption/low reflectance?

A data cube is simply a collection of sequential spectral frames placed back to back. If we imaged a target with our 1024 pixel x 1024 pixel imaging spectrograph using an imaging height of 0.5 mm and took 200 images, the dimensions of our cube would be frames x pixel width x pixel height, or 200 x 1024 x 1024.

A spectral frame is the image captured by the imaging spectrograph. The horizontal dimension, or row, is spatial. The field of view is defined by the focal length of the objective lens, the distance to the target and the width of the sensor. This field of view is then divided into the number of pixels of horizontal resolution. The vertical dimension is spectral. Each column of pixels on the sensor represents the spectrum of one point along the thin slice of the target: each pixel in the column records the intensity of reflected light at a particular wavelength.

A waterfall image is simply a spatial image taken from our data cube. If we take a slice from the data cube in the frame x pixel width plane (the two spatial dimensions), we get a recognizable image of the target at a particular wavelength.
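
A minimal Python/NumPy sketch of this layout (the axis order follows the description above; the band index chosen here is arbitrary):

    import numpy as np

    # Data cube laid out as frames x pixel width x spectral bands (about 400 MB
    # of placeholder zeros at these dimensions; shrink them for a quick test).
    frames, width, bands = 200, 1024, 1024
    cube = np.zeros((frames, width, bands), dtype=np.uint16)

    # A waterfall image is a spatial slice through the cube at one wavelength band.
    band_index = 512                       # hypothetical band of interest
    waterfall = cube[:, :, band_index]     # shape (200, 1024): frames x pixel width

    # A single spectral frame is one slice along the scan axis.
    frame_50 = cube[50, :, :]              # shape (1024, 1024): spatial x spectral
    print(waterfall.shape, frame_50.shape)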

An imaging spectrograph transforms a very thin slice of an image into its spectral components by using a prism, grating or both and projects the spectral information onto an imaging sensor, typically a scientific CCD or CMOS camera. 

Spectral imaging is a combination of imaging and spectroscopy, where a complete spectrum is collected at every location of an image plane. This powerful technique is sometimes called hyperspectral or multispectral imaging. Spectral imaging is not restricted to visible light, but works from ultraviolet to infrared. Wikipedia offers a very good overview of hyperspectral imaging: http://en.wikipedia.org/wiki/Hyperspectral_imaging

Spectroscopy captures the entire spectrum, light intensity as a function of wavelength. It's this very detailed spectral response curve that gives spectral imaging the ability to discriminate specific chemicals and elements. The unique reflections and absorbances are the signature of the compound.

When a spectral camera images a scene, the frame can be considered to be three dimensional. What the viewer sees when viewing the image is the two dimensional spectral frame, which is defined by the area of the detector. This frame typically has data for each pixel of the camera. What must be remembered is that this is the spectral image of an area defined by the optics of the spectral camera. If the scene height being imaged is 0.5 mm, for example, each pixel can be considered a 3D cube defined as pixel height x pixel width x scene height. If the scene height and the pixel width are not equal, a waterfall image, which is simply a slice taken through the data cube, will present a rectangular pixel defined as scene height x pixel width. When this image is presented on a screen with square pixels, the image will appear to be “compressed”, even though the data is completely valid.

If the wavelength scale appears incorrect, possible causes include:

  • Incorrect calibration – the spectral lines from a reference source have not been correctly identified. You can use a simple fluorescent table lamp to identify spectral lines.
  • The camera detector is too small, misaligned or not centered.
  • There are calculation errors.

Machine Vision

Signal-to-Noise Ratio

An ideal camera sensor would convert a known amount of light into an exactly predictable output voltage. Unfortunately, ideal sensors (like all other electronic devices) do not exist. Due to temperature conditions, electronic interference, etc., sensors will not convert light 100% precisely. Sometimes the output voltage will be a bit bigger than expected and sometimes it will be a bit smaller. The difference between the ideal signal that you expect and the real-world signal that you actually see is usually called noise. The relationship between signal and noise is called the signal-to-noise ratio (SNR).

Signal-to-noise ratio is commonly expressed as a factor such as 20 to 1, 30 to 1, etc. Signal-to-noise ratio is also frequently stated in decibels (dB). The formula for calculating a signal-to-noise ratio in dB is: SNR = 20 x log (Signal/Noise). 
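
For example, applying the formula to a 20 to 1 ratio:

    import math

    # SNR in dB for a 20:1 signal-to-noise ratio, using the formula above.
    snr_ratio = 20 / 1
    snr_db = 20 * math.log10(snr_ratio)
    print(f"{snr_ratio:.0f}:1 = {snr_db:.1f} dB")   # 26.0 dB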

Once noise has become part of a signal, it can’t be filtered or reduced. So it is a good idea to take precautions to reduce noise generation such as: 
 

1. using good quality sensors and electronic devices in your camera
2. using a good electronic architecture when designing your camera
3. lowering the temperature of the sensor and the other analog devices in your camera
4. taking precautions to prevent noisy environmental conditions from influencing the signal (such as using shielded cable)

Many times, camera users will increase the gain setting on their cameras and think that they are improving signal-to-noise ratio. Actually, since increasing gain increases both the signal and the noise, the signal-to-noise ratio does not change significantly when gain is increased. Gain is not an effective tool for increasing the amount of information contained in your signal. Gain only changes the contrast of an existing image. 
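
A quick illustration of this point with arbitrary numbers:

    # Gain scales the signal and the noise together, so the ratio is unchanged.
    signal, noise, gain = 100.0, 5.0, 4.0
    print(signal / noise)                      # 20.0
    print((gain * signal) / (gain * noise))    # still 20.0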

PRNU

When a fixed, uniform amount of light falls on the sensor cells in a digital camera, each cell in the camera should output exactly the same voltage. However, due to a variety of factors including small variations in cell size and substrate material, this is not actually true. When uniform light falls on the cells in a digital camera, the cells output slightly different voltages. This difference in response to a uniform light source is referred to as “Photo Response Non-Uniformity” or PRNU for short. Since PRNU is caused by the physical properties of the sensor itself, it is almost impossible to eliminate. PRNU is usually considered to be a normal characteristic of the sensor array used in a camera.

One easy way to deal with PRNU is to use a look up table (LUT). With this method, the sensor cells in a camera are exposed to uniform light and an adjustment factor that would result in a uniform output is calculated for each sensor cell. The adjustment factor for each cell is stored in a table. When an image is captured, a software routine looks in the table and applies the appropriate correction factor to the output from each cell. 
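
A minimal sketch of that correction in software, assuming you can capture a frame of a uniformly lit target (this is just the arithmetic, not a specific camera API):

    import numpy as np

    # Simulated flat-field capture: each pixel responds slightly differently.
    flat = np.random.normal(1000, 20, size=(480, 640))

    # One correction factor per pixel, scaling every cell to the mean response.
    correction_lut = flat.mean() / flat

    def correct(frame):
        """Apply the stored per-pixel correction factors to a captured frame."""
        return frame * correction_lut

    corrected_flat = correct(flat)
    print(flat.std(), corrected_flat.std())   # the corrected flat field is nearly uniform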

PRNU can be made worse if the gain on your camera is set too low or if your exposure time is set too high (usually > 500 ms).

On PCs with a Windows™ OS, if you configure a Basler GigE camera for a persistent (fixed) IP address and the address is in the range normally reserved for “multicast” IP addresses (224.0.0.0 to 239.255.255.255), the camera will not be discoverable by pylon, even when you use the pylon IP Configuration Tool. This situation occurs because Windows rejects all incoming IP packets from any device (such as a GigE camera) with an IP address in the multicast range. You can find some good basic information about IP multicast on Wikipedia. 


The pylon IP Configuration Tool works by sending UDP broadcast messages to all attached cameras and waiting for the cameras to answer. But since Windows rejects packets from devices with IP addresses in the multicast range, answers from any camera with an IP address in the range will never reach the configuration tool. 

With the current package of pylon tools, there is no way to discover a camera with an IP address set in the multicast range. However, the Basler pylon API does provide a method for accessing a camera by its MAC address and for forcing a change to the camera’s IP address (the FORCEIP_CMD). This will let you set the camera back to a state where it is discoverable. 

A programming sample is available that illustrates how to use the pylon C++ API to set the camera’s IP configuration. It also illustrates how to use the Force IP command. The programming sample is based on the Basler pylon 2.0 C++ SDK and to build the entire sample project, you must have the pylon 2.0 SDK installed on your PC. The sample also includes a prebuilt “Simple IP Configuration Tool” executable which will run on PCs that have either the pylon SDK or Basler’s free pylon 2.0 runtime package installed. 

You can use the link below to download the programming sample:
 

IP Configuration Sample - zip, 2.7 MB

The pylon 2.0 runtime package can be downloaded from the Basler website.

CCD vs. CMOS

CCD sensors use devices called shift registers to transport charges out of the sensor cells and to the other electronic devices in the camera. The use of shift registers has several disadvantages: 
 

1. Shift registers must be located near to the photosensitive cells. This increases the possibility of blooming and smearing.
2. The serial nature of shift registers makes true area of interest image capture impossible. With shift registers, the readings from all of the sensor cells must be shifted out of the CCD sensor array. After all of the readings have been shifted out, the readings from the area of interest can be selected and the remaining readings are discarded.
3. Due to the nature of the shift registers, large amounts of power are needed to obtain good transfer efficiency when data is moved out of the CCD sensor array at high speed.

CMOS sensors and CCD sensors have completely different characteristics. Instead of the sensor cells and shift registers used in a CCD sensor, CMOS sensors use photodiodes with a matrix-oriented addressing scheme. These characteristics give CMOS sensors the following advantages: 
 

1. The matrix addressing scheme means that each sensor cell can be accessed individually. This allows true area of interest processing to be done without the need to collect and then discard data.
2. Since CMOS sensors don’t need shift registers, smear and blooming are eliminated and much less power is needed to operate the sensor (approximately 1/100th of the power needed for a CCD sensor).
3. This low power input allows CMOS sensors to be operated at very high speeds with very low heat generation.

The quality of the signals generated by CMOS sensors is quite good and can be compared favorably with the signals generated by a CCD sensor. Also, CMOS integration technology is highly advanced; this creates the possibility that most of the components needed to produce a digital camera can be contained on one relatively small chip. Finally, CMOS sensors can be manufactured using well-understood, standardized fabrication technologies. Standard fabrication techniques result in lower cost devices. 

3 Chip vs. 1 Chip Color

Three chip color cameras always contain a prism which divides the incoming light rays into their red, green and blue components. Each chip then receives a single color at full resolution. 

One chip area scan cameras use a single sensor that is covered by a color filter with a fixed, repetitive pattern. Filters with several different patterns are used but the Bayer color filter is the most common. The illustration to the right shows a portion of the Bayer filter. When a color filter is used with a single sensor, each individual cell in the sensor gathers light of only one particular color. To reconstruct a complete color image, an interpolation is needed. The red, green and blue information is interpolated across several adjacent cells to determine the total color content of each individual cell. 

One chip line scan cameras use a sensor that has three rows of cells: a red row, a green row and a blue row. As an area on an object moves past the camera, the area is examined first by the cells in the red row, second by the cells in the green row and third by the cells in the blue row. The information from the red, green and blue cells is then combined to produce a full color image.
 

3-Chip Color Advantages:
 

1. Full resolution RGB Images
2. Easier software handling of the data output

3-Chip Color Disadvantages: 
 

1. High camera cost due to the need for a prism and three sensor chips
2. Large camera housing needed for prism and sensors
3. Typically require expensive, special optics
4. High weight

 

1-Chip Color Advantages:
 

1. Much less expensive
2. Smaller size
3. Lower weight

1-Chip Color Disadvantages: 
 

1. For area scan cameras, an interpolation algorithm must be run to achieve full color resolution
2. For line scan cameras, spatial correction must be done to combine the color data from the three sensor rows

 

When deciding on a three chip or a one chip camera, you must consider the advantages and disadvantages of each and determine which type is most appropriate for your application. Experience shows that in many cases, a one chip camera is more than adequate and is the cost efficient solution.

Check your network adapter settings. 

Go to Start>Control Panel>Network Connections and right click on your network adapter. Select Properties from the drop down menu. When the properties window opens, click the Configure button. Select the Advanced tab and in the property box on the left, select the property called “Jumbo Frames”. Set the value as high as possible (for jumbo frames, it’s approximately 16 KB). 

Be aware that if your adapter doesn’t support jumbo frames, you might not be able to operate your camera at the full frame rate.

Check your network adapter settings. 

Go to Start>Control Panel>Network Connections and right click on your network adapter. Select Properties from the drop down menu. When the properties window opens, click the Configure button. 

Look for a tab with a name such as “Connection speed”. If you see a tab like this, select the tab and set the “Speed & Duplex” property to “Automatic identification” or “Auto”. 

If you do not see a “Connection Speed” tab, select the “Advanced” tab and look for the “Speed & Duplex” property. Set the “Speed & Duplex” property to “Automatic identification” or “Auto”.

Color Filters for Single-Sensor Color Cameras 

In general, single-sensor color cameras use a monochrome sensor with a color filter pattern. Another way to achieve a color image with only one sensor would be to use a revolving filter wheel in front of a monochrome sensor, but this method has its limitations. 

With the color filter pattern method of color imaging, no object point is projected on more than one sensor pixel, that is, only one measurement (for a single color or a sum of a set of colors) can be made for each object point. 

There are several different filter methods for generating a color image from a monochrome sensor. Some frequently used filter arrangements are described below. 

Bayer Color Filter (Primary Color Mosaic Filter) 

The following table 1 shows the filter pattern for a sensor of size xs x ys (xs and ys being multiples of 2):
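
The table itself is not reproduced here; as a stand-in, the short sketch below prints the conventional Bayer arrangement (the RGGB phase, i.e. which corner starts with red, is an assumption and varies between sensors):

    # Print a small Bayer (RGGB) mosaic for a sensor of size xs x ys.
    xs, ys = 8, 4                                  # xs and ys are multiples of 2
    for y in range(ys):
        cells = []
        for x in range(xs):
            if y % 2 == 0:
                cells.append("R" if x % 2 == 0 else "G")
            else:
                cells.append("G" if x % 2 == 0 else "B")
        print(" ".join(cells))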

Complementary Color Mosaic Filter

The following table 2 shows the filter pattern for a sensor of size xs x ys (xs and ys being multiples of 2):

This is basically the same arrangement as the Bayer filter pattern, but instead of using primary colors (R, G, B) it works with complementary colors (magenta, cyan, yellow). The reason for this is that a primary color filter blocks 2/3 of the spectrum (e.g. green and blue for a red filter) while a complementary filter blocks only 1/3 of the spectrum (e.g. blue for a yellow filter). Thus, the sensor is twice as sensitive. The tradeoff is a somewhat more complicated computation of the R, G, B values, requiring the input of each complementary color. 

Primary Color Vertical Stripe Filter 

Table 3 shows the filter pattern for a sensor of size xs x ys (xs being a multiple of 4):

This arrangement is very simple and basically well suited to machine vision applications. The drawback is that the horizontal resolution is only 1/3 of the vertical resolution.

Binning in CCD Cameras

Binning increases the camera’s sensitivity to light by summing the charges from adjacent pixels in the CCD sensor into one pixel. There are three types of binning available: horizontal binning, vertical binning, and full binning. 

With horizontal binning, pairs of adjacent pixels in each line of the sensor are summed (see the drawings below). With vertical binning, pairs of adjacent pixels from two lines in the sensor are summed. Full binning is a combination of horizontal and vertical binning in which four adjacent pixels are summed. 

Using horizontal or vertical binning generally increases the camera’s sensitivity by up to two times normal. Full binning increases sensitivity by up to four times normal. On some camera models, using horizontal or full binning increases the camera’s maximum frame rate (this is not true for all cameras and depends on the architecture of the sensor used in the camera). 

With horizontal binning active, horizontal image resolution is reduced by half. For example, if a camera’s normal horizontal resolution is 1300, with horizontal binning active this would be reduced to 650. With vertical binning active, vertical image resolution is reduced by half; a normal vertical resolution of 1030 would be reduced to 515. When full binning is used, both horizontal and vertical resolution are reduced by half.
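
The arithmetic of full binning can be sketched in software as follows (real binning happens on the sensor itself; the resolutions below match the example figures above):

    import numpy as np

    # Simulated 1030 x 1300 frame; full (2 x 2) binning sums four adjacent pixels.
    frame = np.random.randint(0, 255, size=(1030, 1300), dtype=np.uint16)

    h, w = frame.shape
    full_binned = frame.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
    print(full_binned.shape)   # (515, 650): both resolutions halved, up to 4x the signal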

Sensitivity

The response curve for a light sensitive sensor can be divided into three parts: the dark area, the linear area and the saturation area. A typical response curve is shown in the graph below. 

The dark area of the response curve shows the sensor’s response to very low light. The output of the sensor in the dark area is very low, is noisy and is unpredictable. As you gradually increase the light falling on a sensor, you will find a point where the output of the sensor begins to increase predictably as the amount of light increases. This point is called the Noise Equivalent Exposure (NEE). 

After the NEE point is reached, the output of the sensor becomes linear. The output remains linear until a point called the Saturation Equivalent Exposure (SEE) is reached. At this point, increasing the light intensity results in a nonlinear increase in the sensor output. 

The gradient of the linear portion of the sensor’s response curve is commonly referred to as sensitivity and is usually measured in V/µJ/cm². The higher a sensor’s output voltage for a given amount of light, the higher its sensitivity. 

But when you are discussing sensors, talking about sensitivity alone does not make sense. For one thing, NEE is also very important. Since a sensor with a high NEE will be blind at low light levels, NEE should be as low as possible. 

Another point to consider is that a digital camera is a system and that sensor sensitivity is just one of the factors involved in the output signal from the camera. Electronic devices in the camera such as analog-to-digital converters and amplifiers also influence the output signal. At Basler, we feel that a camera’s “responsivity” is a better measure of camera performance. We also think that since our cameras are digital, responsivity should be stated in DN/µJ/cm² (DN stands for digital number). The graph below shows a responsivity curve. 

If a camera provides a gain feature, as most of them do, responsivity will depend on the gain setting. And responsivity really only makes sense when it is stated in combination with a measurement of the camera’s noise, such as peak-to-peak noise. 

Let’s consider an example. Suppose that you are comparing two cameras and that they have the following specifications: 

  Camera One: Responsivity = 1 DN/µJ/cm², Noise = 2 DN (peak-to-peak) 
  Camera Two: Responsivity = 2 DN/µJ/cm², Noise = 5 DN (peak-to-peak) 

At first glance, camera two seems better than camera one because its responsivity is higher. However, if camera one has a gain feature, we can adjust the gain and increase its responsivity to 2 DN/µJ/cm². Keep in mind that if we adjust the gain to double the responsivity from one to two, we will also double the noise. Now we have this situation: 

  Camera One: Responsivity = 2 DN/µJ/cm², Noise = 4 DN (peak-to-peak) 
  Camera Two: Responsivity = 2 DN/µJ/cm², Noise = 5 DN (peak-to-peak) 

Which camera is better? They now both have the same responsivity, but camera one has lower noise. Camera one would be the better choice. 
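
The same comparison can be run numerically for an assumed exposure of 100 µJ/cm² (the exposure value is arbitrary):

    # Camera One is shown with its gain doubled, as in the discussion above.
    exposure = 100.0   # µJ/cm², assumed
    cameras = {
        "Camera One (gain x2)": {"responsivity": 2.0, "noise": 4.0},
        "Camera Two": {"responsivity": 2.0, "noise": 5.0},
    }
    for name, c in cameras.items():
        signal = c["responsivity"] * exposure   # output in DN
        print(f"{name}: signal = {signal:.0f} DN, SNR = {signal / c['noise']:.0f}:1")
    # Camera One (gain x2): 200 DN, 50:1    Camera Two: 200 DN, 40:1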

The lesson to be learned from all of this is that sensor sensitivity alone does not tell the entire story and that we should be sure to use similar measuring criteria when we are comparing cameras.

Area of Interest Feature

Many of Basler’s area scan cameras include an area of interest (AOI) feature. The AOI feature lets the user specify a portion of the camera’s sensor array and during operation, only the pixel information from the specified portion of the array is transmitted out of the camera. 

The main advantage of the AOI feature is that as you decrease the height of the AOI, there is usually an increase in the camera’s maximum allowed frame rate. In other words, when you capture smaller images, you can capture more images per second. This can be very useful in an application where you need to capture smaller images at higher speeds. 

Be aware that on most area scan cameras with an AOI feature, decreasing the AOI height will result in a higher maximum allowed frame rate - but this is not true for every camera model. Also, on some camera models the maximum allowed frame rate will increase when both the AOI height and the AOI width are decreased. You should consult the user’s manual for your camera model to learn the specific details of the AOI feature on your camera.

RGB Color Space

An RGB color space is an additive color space (a specific organization of colors produced by mixing colored light) based on the RGB color model, in which red, green and blue light are combined in various proportions to produce a variety of unique colors. An RGB color space features a multitude of specific combinations of the three primaries as well as a white point. The possible mixtures can be represented in a three dimensional coordinate system with the values for R (red), G (green) and B (blue) on each axis. This coordinate system produces a cube called the RGB color space:

This is a common color model used for computer graphics, based on how humans perceive color, which is similar to an RGB color space. The human eye sees color when the cornea and lens focus light from its surroundings onto the retina at the back of the eye. Based on this stimulus, the lens of the eye adjusts its thickness to focus light on the photoreceptive cells of the retina (rods and cones); the cones are sensitive to red, green and blue light. Therefore, it is theoretically possible to decompose every color visible to the human eye into a specific combination of these three primary colors.

Color monitors can be thought of as a grid of millions of small points in which the color of each individual point is determined by simply mixing different intensities of red, green and blue. While the potential combinations are nearly endless, it is most common to place the range of intensity for each color on a scale from 0 to 255 (one byte). This range of intensity is defined as the “color depth.”

If all three color channels have a value of zero, no light is emitted and the resulting color is black. If all three color channels are set to their maximum value (255 at a one byte color depth), the resulting color is white. This method of creating color by mixing different colors of light is also called “additive color mixing”, with RGB additive color mixing being the most common:

On the RGB color model cube, if you draw a diagonal line from the black (0,0,0) origin point of the color cube to the white (255,255,255) point, each point on the resulting line has identical R, G and B values. The result of having the same value for all three color channels is the color gray. While gray is consistently visualized along this line, its intensity changes as you move from the black origin to the white point.

RGB Illustration 1

Images and information courtesy of Wikipedia and Basler; please refer to these sources for more information on this topic, or contact us.

YUV Color Coding

A CCD or a CMOS sensor alone is not able to detect the color of incident light. In reality, each pixel in the sensor simply detects the intensity of the incident light. But when a color pattern filter is applied to the sensor, each pixel becomes sensitive to only one color - red, green or blue. The following table shows the color arrangement of a “Bayer Pattern” filter on a sensor with a size of X x Y (with X and Y being multiples of 2). 

Since the arrangement of the colors in the Bayer pattern filter is known, a PC can use the raw information transmitted for the pixels to interpolate full RGB color information for each pixel in the sensor. Instead of using the raw sensor information, however, it is more common to use a color coding known as YUV. The block diagram below illustrates the process of conversion inside a Basler IEEE 1394 camera. To keep things simple, we assume that the sensor collects pixel data at an 8 bit depth.

As a first step, an algorithm calculates the RGB values for each pixel. This means, for example, that even if a pixel is sensitive to green light only, the camera gets full RGB information for the pixel by interpolating the brightness information from adjacent red and blue pixels. This is, of course, just an approximation of the real world. There are many algorithms for doing RGB interpolation, and the complexity and calculation time of each algorithm determine the quality of the approximation. Basler IEEE 1394 color cameras have an effective built-in algorithm for this RGB conversion. 

A disadvantage of RGB conversion is that the amount of data for each pixel is inflated. If a single pixel normally has a depth of 8 bits, after conversion it will have a depth of 8 bits per color (red, green and blue) and will thus have a total depth of 24 bits. 

YUV coding converts the RGB signal to an intensity component (Y) that ranges from black to white plus two other components (U and V) which code the color. The conversion from RGB to YUV is linear, occurs without loss of information and does not depend on a particular piece of hardware such as the camera. The standard equations for accomplishing the conversion from RGB to YUV are: 

Y = 0.299 R + 0.587 G + 0.114 B 
U = 0.493 * (B - Y) 
V = 0.877 * (R - Y) 

In practice, the coefficients in the equations may deviate a bit due to the dynamics of the sensor used in a particular camera. If you want to know how the RGB to YUV conversion is accomplished in a particular Basler camera, please refer to the camera’s user manual for the correct coefficients. This information is particularly useful if you want to convert the output from a Basler IEEE 1394 camera from YUV back to RGB.
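
A small sketch of the conversion using the standard coefficients listed above (a particular camera may use slightly different values, as noted):

    def rgb_to_yuv(r, g, b):
        """Convert one RGB triple to YUV using the standard coefficients."""
        y = 0.299 * r + 0.587 * g + 0.114 * b
        u = 0.493 * (b - y)
        v = 0.877 * (r - y)
        return y, u, v

    print(rgb_to_yuv(255, 255, 255))   # white: Y = 255, U = 0, V = 0 (no color content)
    print(rgb_to_yuv(255, 0, 0))       # pure red: lower Y, strong positive V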

The diagram below illustrates how color can be coded with the U and V components and how the Y component codes the intensity of the signal.

This type of conversion is also known as YUV 4:4:4 sampling. With YUV 4:4:4, each pixel gets brightness and color information and the “4:4:4” indicates the proportion of the Y, U and V components in the signal. 

To reduce the average amount of data transmitted per pixel from 24 bits to 16 bits, it is more common to include the color information for only every other pixel. This type of sampling is also known as YUV 4:2:2 sampling. Since the human eye is much more sensitive to intensity than it is to color, this reduction is almost invisible even though the conversion represents a real loss of information. As defined in the DCAM specification, YUV 4:2:2 digital output from a Basler camera has a depth that alternates between 24 bits per pixel and 8 bits per pixel (for an average bit depth of 16 bits per pixel). 

As shown in the table below, when a Basler camera is set for YUV 4:2:2 output, each quadlet of image data transmitted by the camera will contain data for two pixels. In the table, K represents the number of a pixel in a frame and one row in the table represents a quadlet of data transmitted by the camera.

For every other pixel, both the intensity information and the color information are transmitted and this results in a 24 bit depth for those pixels. For the remaining pixels, only the intensity information is preserved and this results in an 8 bit depth for them. As you can see, the average depth per pixel is 16 bits. 
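
A minimal sketch of this subsampling for one pair of neighboring pixels (the byte order within a quadlet follows the DCAM specification; the U0, Y0, V0, Y1 packing shown here is an assumption for illustration only):

    def pack_422(p0, p1):
        """p0 and p1 are (Y, U, V) triples for two adjacent pixels."""
        y0, u0, v0 = p0
        y1, _, _ = p1                      # chroma of the second pixel is dropped
        return bytes([u0, y0, v0, y1])     # 4 bytes for 2 pixels = 16 bits/pixel average

    quadlet = pack_422((100, 110, 120), (102, 111, 119))
    print(len(quadlet) * 8 / 2, "bits per pixel on average")   # 16.0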

On all Basler IEEE 1394 color cameras, you are free to choose between an output mode that provides the raw sensor output for each pixel or a high quality YUV 4:2:2 signal. Due to the high bandwidth that would be needed to provide full RGB output at 24 bits/pixel, Basler IEEE 1394 color cameras do not provide RGB output. 

 

If you have multiple network adapters in a single PC, keep the following guidelines in mind: 
 

1. Only one adapter in the PC can be set to use auto IP assignment. If more than one adapter is set to use auto assignment, auto assignment will not work correctly and the cameras will not be able to connect to the network. In the case of multiple network adapters, it is best to assign fixed IP addresses to the adapters and to the cameras. You can also set the cameras and the adapters for DHCP addressing and install a DHCP server on your network.

 

2. Each adapter must be in a different subnet. The recommended ranges for fixed IP addresses are from 172.16.0.1 to 172.31.255.254 and from 192.168.0.1 to 192.168.255.254. These address ranges have been reserved for private use according to IP standards.

 

3. If you are assigning fixed IP addresses to your cameras, keep in mind that for a camera to communicate properly with a network adapter, it must be in the same subnet as the adapter to which it is attached.

Basic Camera Principles 

A camera works as follows: during line exposure, photons from a light source strike the pixels in the camera’s sensor and generate electrons. At the end of each line exposure, the electrons collected by each pixel are transported to an analog-to-digital converter. For each pixel, the converter provides a digital output signal that is proportional to the number of electrons collected by the pixel. 

Below Minimum Line Rates 

If a camera is triggered at a rate below the specified minimum, it is much easier to fall into an overexposure situation. This happens due to an effect called “shutter inefficiency”. The electronic shutter on digital cameras is not 100% efficient, and the pixels in the camera will collect some photons even when the shutter is closed. At very low line rates, there are long periods of time between exposures when the shutter is closed but the pixels are still collecting some photons and generating electrons. When the electrons collected with the shutter closed are added to the electrons collected during an exposure, the electrons can flood the electronics around the pixel. 

After an Overexposure 

After an overexposure or with a trigger rate below 1 kHz, it takes several readout cycles to remove all the electrons from the pixels and the electronics. For this reason, gray values will be abnormally high during the first several readouts after an overexposure. 

Solutions

Use a camera that can operate at line rates near zero, such as the L304k, L304kc, L400k, and L800k

or,

If you use a camera with a higher specified minimum line rate:

  • Don’t operate the camera below its minimum specified rate.
  • Design an application which accepts a few lines that are brighter than normal.
  • Run the camera in free-run mode and collect only the lines that you need.
  • Send dummy trigger signals to the camera and ignore the lines generated by the dummy triggers.

Infrared

An infrared camera, also referred to as an IR camera, thermal imaging camera or thermal camera, is a measuring instrument used for non-contact measurements of the surface temperature of objects. Measuring temperature in this way enables you to visualize infrared radiation of objects or humans which are normally outside the visible spectrum.

An infrared camera uses an integrated infrared detector to record the intensity and distribution of a certain spectrum of electromagnetic radiation. When utilized accordingly, a thermal camera can immediately detect deviations from the parameters defined for a particular process so that they can be rectified. Thermal cameras can also be used for process monitoring and quality control, as well as for research and development purposes.

See the InfraTec website for more information on their infrared cameras!
 

Infrared thermography refers to the process of an IR (thermal) camera collecting radiation information from an object or scene, converting that information to temperature information, and displaying that information to the user as a thermogram image for analysis.

Contact-free measurement of temperature distributions on object surfaces or of processes provides information about the progression of the process or the state of the object. A thermographic camera is installed in close vicinity to the process and transmits the accumulated data to an evaluation unit, which compares the actual state with a defined target state based on minimum, maximum and average values. Using this information, infrared cameras can be used for quality control purposes as well as in many other applications of thermography.

Infrared thermography provides specific information for quality assurance purposes, which is especially critical when manufacturing thermally sensitive components. Some applications of infrared thermography include plastic and automotive industry quality control, electronics development, building inspection, CFRP processing and weld seam quality assurance.

For more information on applications of thermography, check out our Thermography Applications page.

 

Still Need Help?

Here you will find help and support for Machine Vision, Infrared, and Security/Surveillance solutions provided by Channel Systems. Information is separated into product families to help you find the information you need. If you are unable to solve your problem with the information provided here, please contact us for further assistance.

Contact