
ASTR 3130, Majewski [SPRING 2023]. Lecture Notes



DETECTORS:
READOUT ARCHITECTURES, AMPLIFIERS AND NOISE

LINEAR READOUT CCD CAMERAS

A linear CCD array.
    One only needs a single linear array of CCD detectors in order to create a two-dimensional image.

    There are two ways that this can be done, and you are almost certainly familiar with mechanisms that make use of linear CCD arrays.

    • Fixed CCD camera but moving target:

      Linear CCD camera used to inspect items on a moving conveyor belt.
    • Fixed target but moving CCD camera:

      Left: Manual barcode reader. Right: Scanning satellite imaging with different perspectives (e.g., forward, nadir and backward looking), which allow stereo/3-D perspective.

    • QUESTION: Can you think of two other devices you commonly use that are of the second mode of operation?

    • In either case, the CCD must be read out more quickly along the row than it takes to step along in the perpendicular direction.


TWO-DIMENSIONAL READOUT ARCHITECTURES
  • The simplest way to read out a two-dimensional CCD array is called line address readout.
    • Arrange columns of CCD-linked pixels parallel to one another.

    • At the end of the columns is a set of pixels arranged and charge-coupled in a perpendicular row, called a serial register (or horizontal register or shift register); all columns of data are read through the same set of electronics at the end of this serial register row.
    • Readout CCD by:

      1. Shift all columns by one pixel into the serial register.

      2. Then read out the full sequence of serial-register pixels in order by shifting the charges along the register to the amplifier.

      3. When the serial register is completely empty after transfer of the entire row of charge, repeat at step 1.

      Line address readout cartooned as an automated bucket brigade using conveyor belts.
      Line address readout animation, taken from the website http://www.astro.virginia.edu/class/oconnell/astr511/lec11-f03.html.
    • The actual image that is assembled from this process is put together row by row. Note the difference/correspondence between the physical MOS pixels and the "picture element" pixels:

      • The physical columns of charge-coupled CCD MOS pixels correspond to the image columns of "picture element" pixels.

      • However, on the physical CCD device, the physical MOS pixels are not normally coupled by row -- this coupling takes place in the serial register and results in adjacent picture elements by row.
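The three-step cycle above can be sketched in code. This is a minimal, illustrative simulation (the function and variable names are not from any real camera controller API): parallel-shift every column one pixel toward the serial register, serially clock the register through the single output amplifier, and repeat.

```python
import numpy as np

def read_out(charge):
    """Line-address readout sketch: read a 2-D charge array row by row,
    rebuilding the image in the order rows reach the serial register."""
    charge = charge.copy()
    rows = []
    for _ in range(charge.shape[0]):
        # 1. parallel shift: the row nearest the register moves into it
        serial_register = charge[-1, :].copy()
        charge[1:, :] = charge[:-1, :]   # every column steps down one pixel
        charge[0, :] = 0
        # 2. serial shift: clock packets one by one to the amplifier
        rows.append(list(serial_register))
        # 3. serial register now empty; repeat from step 1
    return np.array(rows[::-1])          # bottom row was read first

frame = np.arange(12).reshape(3, 4)      # a toy 3 x 4 "exposure"
print(np.array_equal(read_out(frame), frame))   # True: image recovered intact
```

Note that step 2 is where the row-wise "coupling" happens: adjacent picture elements in a row only become neighbors inside the serial register.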

  • Potential problem -- still collecting photons while cycling through above process for entire array (with time needed to have good CTE and because the readout electronics take time to record the charge packets).

    • Would result in a smearing of the image unless careful.

      Either:

      1. Readout very fast - this is bad because of the CTE considerations as well as the fact that the amplifier can't measure charge packets accurately.
      2. Use a shutter to cover CCD during readout.

    • For the broadcast industry, use either:
      1. Interline transfer - light sensitive columns interleaved with light-shielded columns. Charges shift laterally into shielded columns, which are then read out as above.
      2. Frame transfer - charge quickly transferred to an entirely separate section of CCD that is protected, then readout slowly as needed.
    • For example: Original RCA CCDs were 320 x 512 pixel frame transfer devices, because the original TV image format was 320 x 256 pixels.

      (Astronomers who wanted to use these devices requested that the shielding not be applied and made use of twice the area, but with shuttered readout).


CCD READOUT - OUTPUT AMPLIFIER, THE "GAIN", AND "ADU's"
    As a CCD clocks out charges through the serial register, an amplifier at the end of the serial register converts the net charge of each electron packet into a digital signal:

      N(ADU) = (Ne- + εRN) / G

  • The εRN is an imposed noise in the readout process -- ignore for now (we will discuss this below).
  • The Gain is the number of electrons combined to make one "count" in the output picture.
  • These converted "counts" are also called "Analog-to-Digital Units" or "ADUs". All three of these expressions are commonly used to describe the digital levels recorded in each image pixel.

  • Typically the gains in CCDs are set to several e-/ADU.

    • For ST-8 CCD, G = 2.3 e-/ADU.
    • For ST-1001E CCD, G = 2.2 e-/ADU.
    • For the SBIG STX-16803 camera you will use for Labs 3 and 4, the gain is 1.2 e-/ADU.

  • Note that normally we think of amplifiers as making a signal larger (rather than smaller, as here) and in the numerator. Thus, G is more accurately called the Inverse Gain -- but you more often see people call G simply the "Gain".
  • The dynamic range of the image output is generally limited by the Analog to Digital Converter (ADC) which is capable of converting to a certain number of distinct digital "bits". Typical limits are:

    12 bit = 2^12 = 4096 distinct values
    15 bit = 2^15 = 32768 distinct values
    16 bit = 2^16 = 65536 distinct values --- ST-8 and ST-1001E (highest number)

    Note the ADC's effect on dynamic range.
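The gain and ADC limit above can be sketched numerically. This is an illustrative function, not a real camera API; the default values (G = 2.2 e-/ADU, 16-bit converter) are the ST-1001E numbers quoted in the notes.

```python
def electrons_to_adu(n_electrons, gain=2.2, bits=16):
    """Convert collected photoelectrons to ADUs with inverse gain G
    (e-/ADU), clipping at the ADC's digital ceiling."""
    adu = round(n_electrons / gain)   # G electrons are combined into one count
    return min(adu, 2**bits - 1)      # a 16-bit ADC tops out at 65535

print(electrons_to_adu(110_000))      # 50000 ADU, comfortably in range
print(electrons_to_adu(200_000))      # clipped to 65535: the ADC limits dynamic range
```

The second call shows how the ADC, not the pixel full well, can set the effective dynamic range of the recorded image.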


SIGNAL-TO-NOISE, SHOT NOISE AND READOUT NOISE


PLEASE REVIEW THIS SECTION CAREFULLY -- THE CONCEPTS ARE IMPORTANT BUT CAN BE CONFUSING.

  • All sources of noise are an annoyance and limit the accuracy and/or precision of the experimental results.

  • We discuss the quality of a measurement by giving the Signal-to-Noise (S/N) of that measurement, which is to say that we take the ratio of the measure to the error in the measure.

      The higher the S/N, the more reliable the measure.


  • SHOT NOISE: One source of noise that we can never remove from our experiments is the statistical noise from Nature itself.

    • It can be shown that this statistical "shot noise", also called "Poissonian noise" or "Poisson noise", is given by a square root rule:

      This is a fundamental law of Nature: the standard deviation of the number of randomly occurring events N is given by the square root of the number of those events seen, N^(1/2).

        That is to say, if one repeatedly counts the number of photoelectrons, Ne-, collected from a source in the same integration time, the standard deviation one will get is given by

        σ = (Ne-)^(1/2)

    • If shot noise is the only source of noise in the experiment, then we have that the Signal-to-Noise is given by:

      S/N = S / S^(1/2) = S^(1/2)

    • Note that the inverse of the S/N is the fractional error (multiply by 100 for the % error).

      • QUESTION: If there is only Poisson noise, what is the S/N in a pixel that accumulated 225 electrons?

      • QUESTION: How many electrons would one have to accumulate to ensure only a 10% error in the measure?

      • QUESTION: How many electrons would one have to accumulate to ensure only a 1% error in the measure?

      • QUESTION: While we can never remove statistical noise entirely from our experiment, we can try to minimize the relative error it introduces. By the above examples, how is this done?

    • Since we always have shot noise in our experiments, and can do nothing to eliminate it, an experiment with only shot noise represents the ideal in terms of S/N.

    • Most empirical scientists think about things in terms of signal-to-noise, and, in particular, in terms of the above ideal limit to the signal-to-noise in an experiment. YOU SHOULD TOO!

    • Be careful in how you apply the square root rule:

      Nature produces Poisson noise in the photon stream from a source, which in the case of CCDs is reflected in the photoelectron count, NOT in the translation to ADU counts.

      Therefore, to calculate the Poisson noise in ADU, you must go back to electrons, take the square root, and then convert that back into ADU by the gain.
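As a quick sketch of this rule (the gain value is the ST-1001E's quoted above; the function name is illustrative):

```python
import math

def poisson_noise_adu(n_adu, gain):
    """Poisson noise of a pixel value given in ADU: convert back to
    electrons, take the square root there, then convert back to ADU."""
    n_electrons = n_adu * gain            # Poisson statistics live in electrons
    return math.sqrt(n_electrons) / gain  # the noise, expressed in ADU

# With G = 2.2 e-/ADU, 10000 ADU corresponds to 22000 e-:
print(round(poisson_noise_adu(10_000, 2.2), 1))   # 67.4 ADU, NOT sqrt(10000) = 100
```

Taking the square root of the ADU value directly would overestimate the noise here, because each count already represents 2.2 electrons.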


  • READ NOISE Another source of error related specifically to CCDs, but also to many other detectors, relates to the accuracy of the amplification process that converts a certain electron packet size (i.e., measured pixel voltage) into a digital number. This is intrinsically limited by the electronics of the associated circuitry.

    • This error in the output amplifier conversion of the electron voltage to an output signal is called the Readout Noise or, more simply, the Read Noise.
    • The εRN is an imposed noise in the readout process.
    • Its units are typically given in electrons.

    • Because of how the amplifier reads out, the process is more accurate if allowed more time to respond/gauge the size of the charge packet.

      Thus, a slower readout process yields a lower readout noise. See example below:

      The relationship of camera noise on readout time is shown in this figure. Figures (a) and (b) compare the noise only from readout (unilluminated images). The remaining frames show illuminated images, but (from left to right) with decreasing amounts of signal. Note how as the signal decreases (i.e., lower light levels, rightmost images) the effect of the readnoise becomes more prevalent when it is large (top row) but has less effect when the readnoise is low (bottom row). Images show human cervical carcinoma cells. Image from http://www.microscopyu.com/articles/livecellimaging/imagingsystems.html.
    • All sources of noise in an experiment add together in quadrature; that is to say:

      σTOTAL = ( σ1^2 + σ2^2 + σ3^2 + ... )^(1/2)

    • For a given device and circuitry, the readout noise is of a constant size (in electrons, or, equivalently, in the converted ADUs), in each pixel.

      For example,
      • the readnoise for the ST-1001E CCD is 15 e- (RMS).

      • the readnoise for the QHY163M camera used in Labs 1 and 2 is 2.4 e- when used at low gain and 1.0 e- when used at high gain.

      • the readnoise for the SBIG STX-16803 camera you are using for Labs 3 and 4 is 9 e-.

      Unlike shot noise, the read noise is independent of the signal, S.

      • Thus, at low light levels, the readout noise can dominate other sources of noise (see the examples in the photo just above):
      • (see upper righthand image just above)

        A CCD image taken under these conditions is said to be readnoise-limited.

        • In this case, no matter how low the light level, the absolute error in our ability to measure it converges to the same value, given by the readnoise, and:

            S/N ≈ S / σRN

        • If we were in a totally readnoise-limited situation, the S/N would rise in direct proportion to the signal (e.g., with integration time).

          E.g., in the absence of other noise sources, to double the S/N we simply double the exposure (i.e. signal).

        • Of course, as soon as the signal starts becoming comparable or so to the readnoise level, shot noise starts to contribute more noticeably.

      • At high light levels, the shot noise can be made to be much larger than the read noise

        so that in this case we can approach the ideal experiment:

        • In this case the S/N will be the square root of the signal.

        • Unlike the linear, read-noise-limited regime, here the S/N only improves as the square root of the integration time.

          E.g., to double the S/N requires FOUR TIMES the integration time for the same source.
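The two regimes can be sketched together, assuming shot noise and read noise are the only noise sources so that the total noise is the quadrature sum sqrt(S + σRN^2), with the signal S in electrons (the 9 e- read noise is the STX-16803 value quoted above):

```python
import math

def signal_to_noise(S, sigma_rn):
    """S/N with shot noise plus read noise added in quadrature."""
    return S / math.sqrt(S + sigma_rn**2)

rn = 9.0   # e-, STX-16803 read noise

# Read-noise-limited: doubling the signal nearly doubles the S/N...
print(signal_to_noise(10, rn), signal_to_noise(20, rn))
# ...but shot-noise-limited, doubling the signal gains only ~sqrt(2):
print(signal_to_noise(100_000, rn), signal_to_noise(200_000, rn))
```

Running this shows the S/N growing almost linearly with signal at low light levels and only as the square root at high light levels, exactly the behavior described above.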


  • READ NOISE VERSUS SHOT NOISE: In astronomy we prefer to be in the ideal regime, so we try to take CCD exposures in such a way that the readout noise does not dominate other sources of noise in all pixels.
    • This is not always as hard as it seems, because even if our celestial source is limited to only a small number of pixels, there is flux from the sky itself that is contributed to every pixel!

        We will get to this later in the semester, but the sources of "sky" flux are:

      • scattered moonlight

      • unresolved starlight

      • reflected/scattered sunlight

      • auroral emission from molecules in the earth's atmosphere

      • light pollution

    • The flux from the sky is Poissonian in Nature, and there is nothing we can do about eliminating it either.

    • Thus we aim to take pictures so all sources of Poisson noise, including the source and sky, together dominate the read noise.

      Total noise ~ σPOISSON

      We call such an image sky-limited.

    • Using the equations we have introduced on this page, you should be able to show how to achieve a sky-limited image:

        For a CCD with gain G and read noise σRN(e-) given in electrons, you can reach the sky-limited regime if the sky level yields a number of counts N(ADU) per sky pixel (given in ADUs) satisfying

            ( G N(ADU) )^(1/2) >> σRN(e-),   i.e.,   N(ADU) >> σRN(e-)^2 / G
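A hedged numerical sketch of the sky-limited criterion: require the sky's shot noise in electrons, sqrt(G × N_ADU), to exceed the read noise by some margin. The margin of 10 below is an illustrative rule of thumb, not a value from the notes.

```python
def sky_limited_threshold_adu(sigma_rn_e, gain, margin=10.0):
    """Sky level (ADU per pixel) above which the sky's shot noise
    dominates the read noise by the chosen margin."""
    return margin * sigma_rn_e**2 / gain

# STX-16803 values quoted earlier: 9 e- read noise, G = 1.2 e-/ADU
print(round(sky_limited_threshold_adu(9.0, 1.2)))   # 675 ADU of sky per pixel
```

In practice one exposes long enough that the blank-sky pixels reach at least this level, guaranteeing that every pixel in the frame is Poisson-dominated.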


BINNING

  • An effective way to reduce the effects of readout noise is through Pixel Binning.
  • Binning means you combine signals from adjacent pixels before they get to the readout amplifier.
    • In effect, you are combining sets of electron packets from more than one adjacent physical pixel to create one image pixel.

    • One result is to make the dimensions of the final image (and the size of the image file) smaller.

      But the actual area of the sky imaged remains the same, you have simply sacrificed resolution (actually, pixel scale).

      That is, CCD field of view does not change but each image pixel represents more sky area and we have overall coarser resolution.

  • Common binning modes for ST-1001E CCD used in ASTR 3130:
  • Binning   Final image size   Notes
    1 x 1     1024 x 1024        No binning
    2 x 2     512 x 512
    3 x 3     341 x 341
    2 x 1     Not available      Mode commonly used for spectroscopy
    3 x 1     Not available      Mode commonly used for spectroscopy

    Equal-sized binning in each dimension is what we most often do when we are using CCDs to take pictures of the sky (this is the only mode allowed with our current camera).

    But on some CCDs other, non-square combinations are possible, and this is sometimes useful for spectroscopic or other applications.

  • How physically do we do the binning? Charge from adjacent rows is summed by clocking two or more rows into the serial register before it is read out, and charge from adjacent columns is summed at the output node by making two or more serial shifts before each amplifier read.

  • Well, what's the point? How does binning help with reducing noise?
      1. Fewer actual amplifier readouts for the same picture area.

      2. The final binned pixels that are read out have more total counts (collected together from individual pixels).

    • The net effect, then, is to increase the signal compared to the readout noise.

      • E.g., 4 pixels with 1x1 (i.e., no) binning: four separate amplifier reads, whose read noise adds in quadrature to (4)^(1/2) σRN = 2 σRN over the patch.

        4 pixels with 2x2 binning: a single amplifier read, contributing just σRN over the same patch.

        Effect of 2x2 binning is 2x less read noise per area of the sky imaged.
      • Thus, binning effectively increases sensitivity at faint levels (less integration time needed to detect a given object).
      • Where has this "binning" concept already been discussed in this class??
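The read-noise bookkeeping above can be sketched directly: noise from n independent amplifier reads adds in quadrature, so the same 2x2 patch of sky costs sqrt(4) σRN unbinned but only σRN binned (the 15 e- figure is the ST-1001E value quoted earlier):

```python
import math

def patch_read_noise(n_reads, sigma_rn):
    """Total read noise over a sky patch covered by n amplifier reads;
    independent reads add in quadrature."""
    return math.sqrt(n_reads) * sigma_rn

sigma = 15.0                              # e- RMS, ST-1001E read noise
unbinned = patch_read_noise(4, sigma)     # four 1x1 reads: 30 e-
binned = patch_read_noise(1, sigma)       # one 2x2 read:   15 e-
print(unbinned / binned)                  # 2.0x less read noise when binned
```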


  • ANCILLARY BENEFITS TO BINNING:

    • Because the readout amplifier has elements that work capacitively, there is a finite response time for it to work well, and this tends to dominate the time it takes to read out the CCD.
      • One can always try to speed up the amplifier readout (shorten the duration of the sensing phase), but always at the risk of a higher readout noise per pixel.
    • Binning results in a faster total chip readout because fewer amplifier reads are needed (less of the time limiting process).
      • e.g. 2x2 binning is 4x faster than 1x1 binning
      • ST-8 read (digitization) rate = 30 kHz / pixel

        Thus:
        Binning   # Readouts   Read Time*
        1x1       1.56x10^6    52 sec
        2x2       3.90x10^5    13 sec
        3x3       1.73x10^5    6 sec
      • *Note there is additional time needed to write image to computer disk, display image, etc.
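The read times in the table follow directly from the pixel counts and the 30 kHz digitization rate; a quick sketch for an ST-8-style 1530 x 1024 chip:

```python
# Read-time arithmetic: one amplifier digitization per (binned) pixel.
RATE_HZ = 30_000        # ST-8 digitization rate per pixel
NX, NY = 1530, 1024     # chip format

for b in (1, 2, 3):
    n_reads = (NX // b) * (NY // b)   # binned pixels needing a readout
    print(f"{b}x{b}: {n_reads:.2e} readouts, {n_reads / RATE_HZ:.0f} sec")
```

This reproduces the table: roughly 52, 13, and 6 seconds for 1x1, 2x2, and 3x3 binning.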


  • OTHER SPEED CONSIDERATIONS:

    Note, among the largest CCDs now available are those with 4096x4096 pixel format. At 30 kHz, it would take 560 sec = 9 minutes to read!
  • Here are some new developments to deal with this problem, apart from binning:

    • Latest electronics approaching 100 kHz and beyond.
    • Can build CCDs with multiple amplifiers to speed up readout with parallel processes (e.g., Fan Mtn. CCD):
    • Quad-Amp Readout 4x Faster

    • Note: future circuitry may allow non-destructive readout:

        Send the same charge packet to the amplifier M times -- the readout noise is reduced to σ / M^(1/2).
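A toy Monte Carlo sketch of the non-destructive readout idea: sample the same (undisturbed) charge packet M times and average, so that M independent read-noise draws average down by sqrt(M). The numbers below are illustrative, not from any real device.

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal, sigma_rn, M = 1000.0, 9.0, 16   # e-, e- RMS, number of samples
trials = 20_000                              # simulated pixels

# One destructive read vs. the mean of M non-destructive reads:
one_read = true_signal + rng.normal(0.0, sigma_rn, trials)
averaged = true_signal + rng.normal(0.0, sigma_rn, (trials, M)).mean(axis=1)

print(one_read.std())   # ~9 e-
print(averaged.std())   # ~9 / sqrt(16) = 2.25 e-
```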


  • When bin?
  • Let's review when it makes sense to consider binning.

    • Faint or low surface brightness objects where you are starved for photons and are willing to trade pixel resolution for higher S/N.

    • When you are taking short integrations and the sky flux will contribute very little to the "blank sky" pixels (or to filling most charge traps), driving you out of the sky-limited regime for these pixels (higher S/N).
    • When faster CCD readout speed is desired.

    • If smaller images (in Megabytes, not sky area!) are desired.

    • When loss of resolution is not important.

      (resolution is the potential trade off for higher S/N, faster readout, smaller images).
      • e.g. ST-8 with 9 micron pixels on the 26" refractor:

        1x1 pixels about 0.19" x 0.19" FOV

        2x2 pixels about 0.38" x 0.38" FOV

        3x3 pixels about 0.57" x 0.57" FOV

        Since the seeing is typically > 1.5" at McCormick -- there is no real loss of useful resolution by binning this CCD camera.
      • Always want to Nyquist sample -- means you need to sample at twice the frequency of information you want to see. So, for a star with seeing width 1.5" need pixels smaller than 0.75".

        Click here to get an explanation of where Nyquist sampling theorem comes from.

      • Another example is the ST-1001E CCD, which has much larger, 24 micron pixels. On the 26" refractor we then have:

        1x1 pixels about 0.5" x 0.5"

        2x2 pixels about 1.0" x 1.0"

        3x3 pixels about 1.5" x 1.5"

        Only if the seeing is fairly poor -- i.e. worse than 2 arcsec -- does it make sense to bin the already large pixels of the ST-1001E CCD camera.

      • Note also that the ST-8 camera, which has a 1530 x 1024 format, has more pixels than the ST-1001E, but because they are physically smaller (9 microns compared to 24 microns in the ST-1001E) the ST-8 covers a significantly smaller area on the sky.

        The 0.19"/pixel scale of the ST-8 is a poor match to the 26-inch telescope plate scale, whereas the 0.5"/pixel scale of the ST-1001E is a very nice match for Nyquist sampling the typical seeing profile.


TIME DELAY INTEGRATION

Another CCD readout trick is Time Delay Integration (TDI) or Drift Scanning.

  • This is something like the sweeping detector concept we saw above with the linear CCD array, except we do the same thing now with a two-dimensional CCD array.

  • In TDI we read the chip along columns at exactly the rate the CCD camera sweeps past a fixed scene to build a long strip image.

    • In this case each physical CCD pixel creates multiple image pixels.
    • Indeed, every physical pixel in a CCD column contributes to making each image pixel for the corresponding column in the image.

  • In astronomy, a common TDI technique is to turn off clock drive on telescope and have the CCD + telescope move with the Earth past the stellar scene; clock the charge packets in the pixels along columns in the same direction as the sky is moving and at the sidereal rate (i.e., keeping the charge packets under the same piece of sky at all times until they reach the serial register).
    • Total integration time per picture pixel is time to cross CCD array.
    • Note each image pixel in a column created from equal contributions of each CCD pixel in column. Time each image pixel spent in each physical pixel is = total transit time across CCD/number of pixels in column.
    • Result is a single image covering a much greater extent.

      Example of a drift scanned image made of the Pleiades star cluster (Messier 45) made with an SBIG ST-7 camera. Image from http://www.driftscan.com/images/m45a.htm .
    • A great advantage of this kind of image is that the final picture is "smoother" than a normal CCD image, because the output image has had every image pixel created from the average Q.E. response from many individual CCD pixels that contributed to making it.
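A toy sketch of why TDI averages the column response: the charge packet spends one clock step under each physical pixel in the column, so the output pixel is built from the sum (equivalently, the average) of every pixel's response. The per-pixel response numbers below are illustrative, not measured Q.E. values.

```python
import numpy as np

responses = np.array([0.8, 1.0, 0.9, 1.1])   # 4 physical pixels in one column
flux_per_step = 100.0                        # e- per clock step from one sky patch

# The packet visits each pixel for exactly one step, so its total charge
# is the flux times the SUM of the responses along the column:
signal = float((flux_per_step * responses).sum())
print(signal, signal / responses.size)       # 380.0 total, 95.0 average per step
```

Because every image pixel in a column sees the same averaged response, pixel-to-pixel Q.E. variations largely cancel, which is exactly the "smoother" result noted above.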
  • EXAMPLE: Sloan Digital Sky Survey to map large areas of the sky.

    Sloan camera - 54 CCDs!
    • 30 "photometry" chips (2048 x 2048)
    • 22 "astrometry" chips (2048 x 400)
      • Because they are narrower, they have a reduced star transit time, less net integration, so that brighter stars can be observed without saturation.

    • 2 "focus" chips (2048 x 400) that give real time information on the telescope focus.

    Various images showing the arrangement of CCDs in the Sloan Digital Sky Survey camera (now at the Smithsonian Museum).

    How "filled" images of the sky are made by interleaving SDSS images.

    Of course, ever larger cameras made of numerous CCDs that can be read out simultaneously are a modern focus of astronomy.

    • Allows one to cover more area at the same time -- equal to largest photographic plates, but with much better QE.

    • Currently the largest camera is the Dark Energy Camera (DECam), a 570-megapixel camera being used for studying the distributions of galaxies as a means to understand dark energy, built at Fermilab in Illinois and mounted on the Blanco 4-m telescope in Chile.

      Each DECam image is 1 Gbyte in file size and covers a FOV of 2.2 degrees across (an area equal to 20 times that of the moon as seen from Earth).

      The camera contains 62 CCDs with 2048 x 4096 pixels each and 12 CCDs of 2048 x 2048 for guiding, alignment and focus.

    Image of the DECam CCD array.

    One of the first images taken with the DECam CCD array on September 12, 2012. The image is of the globular cluster 47 Tucanae.


    THE COMPLEMENTARY METAL OXIDE SEMICONDUCTOR (CMOS) ARRAY

    As demonstrated above, an important limitation of the CCD is the lengthy amount of time needed for its limited number of amplifiers (often just one) to read out the full array.

    A year before the CCD was invented (1969), another silicon substrate device with MOS pixels but an alternative readout structure had been invented -- the Complementary Metal Oxide Semiconductor (CMOS) array.

    • The CMOS array is an example of a device that has active-pixel sensors --- that is, the essence of the CMOS array is that each pixel has its own readout amplifier and circuitry (unlike in the case of the CCD, where the pixels are passive-pixel sensors).

    • In addition, in a CMOS array, each and any pixel output can be addressed directly by its x-y position, rather than having to be accessed after a sequence of bucket brigade transfers of charge.

    Cartoon showing basic format of a CMOS array. From http://www.siliconimaging.com/ARTICLES/CMOS%20PRIMER.htm .
    The CMOS architecture has a number of advantages over the CCD architecture:

    • Manufacturing: It turns out that CMOS arrays are easier to manufacture, because they can be made using processes similar to those used for creating computer processors, memories, and other commonly made integrated circuit components.

      In contrast, because CCDs require multiple clocking circuits and inputs, they require special processes to manufacture.

    • Power Consumption: CCDs use a great deal more power because the clocking circuits are constantly charging and discharging large capacitors in their operation.

      "In contrast, CMOS arrays use only a single voltage input and clock, meaning they consume much less power than CCDs..." (http://www.siliconimaging.com/ARTICLES/CMOS%20PRIMER.htm).

      CMOS sensors can be as much as 100 times less power hungry than CCDs. This makes them particularly attractive options for battery-powered devices (like cellphones).

    • Addressible Pixels: In a CCD, one cannot read a single particular pixel, but has to go through the entire shift and read process along columns, then rows until the charge packet of the desired pixel is reached.

      In contrast, the pixels in CMOS imagers can be addressed directly by their x-y position.

      This makes it easier to read out sub-arrays and do other imaging techniques.

    • Blooming: In a CCD the pixels are connected to one another along columns. Large, oversaturated charge packets can "bleed" into adjacent charge packets, creating "blooming" along the columns.

      In CMOS arrays, the pixels are disconnected from one another, which basically eliminates blooming, except in the most extreme situations.

    • Speed: Perhaps the most important advantage of the CMOS array is that all pixels can readout in parallel, and then all that is passed along the cross-array circuitry are digital signals.

      Compared to CCDs, the massively parallel CMOS technology allows a much faster array readout.

    The speed, pixel addressibility, and low power consumption of CMOS arrays make them preferable for commercial applications like digital cameras.

    However, CCD arrays at first won out and dominated over CMOS arrays, especially for low-light applications like astronomy, because early CMOS arrays had certain disadvantages compared to CCDs:

    • Reduced Pixel Fill Factors: Because of the associated circuitry in each pixel, some amount of each pixel is "dead" -- i.e., not sensitive to light.

      Note that fill factors were originally as low as 40%, but as circuit miniaturization improves, fill factors as high as 80% can be reached.

      In contrast, CCD arrays have nearly 100% fill factors, and therefore have higher net quantum efficiency.

      Cartoon showing basic format of a CMOS pixel -- showing the reduced fill factors that result in lost light. From http://www.siliconimaging.com/ARTICLES/CMOS%20PRIMER.htm .
    • Time Delay Integration: Obviously possible with CCDs, TDI is currently not feasible with CMOS arrays.

    • Fixed Pattern Noise: Because each pixel has its own amplifier, there can be large variations in bias levels, gains and read noise in CMOS arrays.

      In contrast, with a common amplifier for all pixels, the uniformity of CCD arrays is much greater.

      Until recently, CCDs produced far superior image quality.

    For some time, semiconductor lithography simply did not make it possible for CMOS arrays to be made with the uniformity needed to compete with CCDs.

    Because CCDs matured faster, for some time they were of superior quality, had more pixels, and had greater sensitivity than CMOS arrays.

    However,

    • recently "Renewed interest in CMOS was based on expectations of lowered power consumption, camera-on-a-chip integration, and lowered fabrication costs from the reuse of mainstream logic and memory device fabrication. Achieving these benefits in practice while simultaneously delivering high image quality has taken far more time, money, and process adaptation than original projections suggested, but CMOS imagers have [now] joined CCDs as mainstream, mature technology."

    • "With the promise of lower power consumption and higher integration for smaller components, CMOS designers focused efforts on imagers for mobile phones, the highest volume image sensor application in the world. An enormous amount of investment was made to develop and fine tune CMOS imagers and the fabrication processes that manufacture them. As a result of this investment, we witnessed great improvements in image quality, even as pixel sizes shrank. Therefore, in the case of high volume consumer area and line scan imagers, based on almost every performance parameter imaginable, CMOS imagers outperform CCDs..." (http://www.teledynedalsa.com/imaging/knowledge-center/appnotes/ccd-vs-cmos/)

    • Money talks, and according to CMOS inventor Eric Fossum, "The force of marketing is greater than the force of engineering..."

    • "CMOS chips can be fabricated on just about any standard silicon production line, so they tend to be extremely inexpensive compared to CCD sensors." (https://electronics.howstuffworks.com/cameras-photography/digital/question362.htm)

    Note that the QHY163M camera that we used in labs 1 and 2 is actually a CMOS detector.

    Note that whether the array is a CCD or a CMOS architecture, the modern digital camera industry uses the same method to make color images, through use of a Bayer filter to combine/average each 2x2 array of pixels into a single colored pixel. (Obviously this is not done for professional astronomical imaging, where the color separation is done externally by large single filters over all pixels -- something that allows more flexibility in what wavelengths are imaged.)

    (Left) A Bayer filter for RGB color imaging. Note that green has double the net detector area, which is a reflection of this being the peak sensitivity of the human eye.

    (Right) Configuration of a color imaging system using an interline transfer CCD. Both images from http://www.camerarepair.org/2012/05/ccd-vs-cmos-the-sensor-breakdown/ .



Linear CCD image taken from http://www.infinetivity.com/~jsampson/rsoh/020315/. The conveyor-belt linear CCD images were taken from http://www.autovision.net/web.html while the satellite imaging linear CCD images were taken from http://www.photogrammetry.ethz.ch/research/cloudmap/cloudmap2/sens_model.html. The line address readout conveyor belt image from http://www.ing.iac.es/~smt/CCD_Primer/CCD_Primer.htm. The barcode reader image is from http://www.blueleaf.co.uk/C100.HTM. All other material copyright © 2002,2006,2008,2012,2015,2019,2021,2022 Steven R. Majewski. All rights reserved. These notes are intended for the private, noncommercial use of students enrolled in Astronomy 313 and Astronomy 3130 at the University of Virginia.