Diffstat (limited to 'src/3rdparty/libjpeg/structure.doc')
-rw-r--r--  src/3rdparty/libjpeg/structure.doc  46
1 files changed, 23 insertions, 23 deletions
diff --git a/src/3rdparty/libjpeg/structure.doc b/src/3rdparty/libjpeg/structure.doc
index b9b20cc83..51c9def7e 100644
--- a/src/3rdparty/libjpeg/structure.doc
+++ b/src/3rdparty/libjpeg/structure.doc
@@ -81,13 +81,13 @@ provides multiple implementations that cover most of the useful tradeoffs,
ranging from very-high-quality down to fast-preview operation. On the
compression side we have generally not provided low-quality choices, since
compression is normally less time-critical. It should be understood that the
-low-quality modes may not meet the JPEG standard's accuracy retquirements;
+low-quality modes may not meet the JPEG standard's accuracy requirements;
nonetheless, they are useful for viewers.
*** Portability issues ***
-Portability is an essential retquirement for the library. The key portability
+Portability is an essential requirement for the library. The key portability
issues that show up at the level of system architecture are:
1. Memory usage. We want the code to be able to run on PC-class machines
@@ -209,7 +209,7 @@ fill in the function pointers with references to whichever module we have
determined we need to use in this run. Then invocation of the module is done
by indirecting through a function pointer; on most machines this is no more
expensive than a switch statement, which would be the only other way of
-making the retquired run-time choice. The really significant benefit, of
+making the required run-time choice. The really significant benefit, of
course, is keeping the source code clean and well structured.
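The dispatch scheme this hunk describes can be sketched in C. The struct and function names below are illustrative stand-ins, not libjpeg's actual module definitions: a method slot is filled in once at initialization, and every later invocation simply indirects through the pointer instead of re-evaluating a switch.

```c
#include <assert.h>

/* A module exposes its entry point as a function pointer; master
 * initialization fills it in once, and callers simply indirect. */
struct downsampler {
    int (*downsample)(int sample);   /* illustrative method slot */
};

static int downsample_fast(int sample)   { return sample / 2; }
static int downsample_smooth(int sample) { return (sample + 1) / 2; }

/* Chosen once at startup instead of a switch in every call. */
static void select_downsampler(struct downsampler *d, int want_smoothing)
{
    d->downsample = want_smoothing ? downsample_smooth : downsample_fast;
}

int run_demo(void)
{
    struct downsampler d;
    select_downsampler(&d, 0);
    return d.downsample(9);          /* indirect call through the slot */
}
```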
We can also arrange to have private storage that varies between different
@@ -283,7 +283,7 @@ input data => processing step A => buffer => processing step B => output data
| | |
------------------ controller ------------------
-The controller knows the dataflow retquirements of steps A and B: how much data
+The controller knows the dataflow requirements of steps A and B: how much data
they want to accept in one chunk and how much they output in one chunk. Its
function is to manage its buffer and call A and B at the proper times.
@@ -299,7 +299,7 @@ be had by replacing implementations of a control module. For example:
control modules.)
* In some processing modes, a given interstep buffer need only be a "strip"
buffer large enough to accommodate the desired data chunk sizes. In other
- modes, a full-image buffer is needed and several passes are retquired.
+ modes, a full-image buffer is needed and several passes are required.
The control module determines which kind of buffer is used and manipulates
virtual array buffers as needed. One or both processing steps may be
unaware of the multi-pass behavior.
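The controller pattern in the two hunks above can be sketched as a C loop. step_a, step_b, and the chunk size are hypothetical stand-ins for real processing modules; the point is only that the controller owns the strip buffer and alternates calls to the two steps, which need not know about each other.

```c
#include <assert.h>

#define CHUNK 4                      /* rows step A produces per call */

/* Step A fills the buffer with one chunk; step B consumes it. */
static void step_a(int *buf, int base)
{
    for (int i = 0; i < CHUNK; i++)
        buf[i] = base + i;           /* pretend "processing" */
}

static int step_b(const int *buf)
{
    int sum = 0;
    for (int i = 0; i < CHUNK; i++)
        sum += buf[i];
    return sum;
}

/* Strip-buffer controller: process total_rows rows, CHUNK at a time. */
int run_pipeline(int total_rows)
{
    int strip[CHUNK];
    int total = 0;
    for (int row = 0; row < total_rows; row += CHUNK) {
        step_a(strip, row);          /* fill one strip */
        total += step_b(strip);      /* drain it before refilling */
    }
    return total;
}
```

A full-image-buffer controller would differ only in allocating the whole image and running A to completion before starting B.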
@@ -345,12 +345,12 @@ The objects shown above are:
JPEG color space; also changes the data from pixel-interleaved layout to
separate component planes. Processes one pixel row at a time.
-* Downsampling: performs reduction of chroma components as retquired.
+* Downsampling: performs reduction of chroma components as required.
Optionally may perform pixel-level smoothing as well. Processes a "row
group" at a time, where a row group is defined as Vmax pixel rows of each
component before downsampling, and Vk sample rows afterwards (remember Vk
differs across components). Some downsampling or smoothing algorithms may
- retquire context rows above and below the current row group; the
+ require context rows above and below the current row group; the
preprocessing controller is responsible for supplying these rows via proper
buffering. The downsampler is responsible for edge expansion at the right
edge (i.e., extending each sample row to a multiple of 8 samples); but the
@@ -380,7 +380,7 @@ The objects shown above are:
In addition to the above objects, the compression library includes these
objects:
-* Master control: determines the number of passes retquired, controls overall
+* Master control: determines the number of passes required, controls overall
and per-pass initialization of the other modules.
* Marker writing: generates JPEG markers (except for RSTn, which is emitted
@@ -392,7 +392,7 @@ objects:
surrounding application may provide its own destination manager.
* Memory manager: allocates and releases memory, controls virtual arrays
- (with backing store management, where retquired).
+ (with backing store management, where required).
* Error handler: performs formatting and output of error and trace messages;
determines handling of nonfatal errors. The surrounding application may
@@ -433,10 +433,10 @@ shown are:
* Main controller: buffer controller for the subsampled-data buffer, which
holds the output of JPEG decompression proper. This controller's primary
task is to feed the postprocessing procedure. Some upsampling algorithms
- may retquire context rows above and below the current row group; when this
+ may require context rows above and below the current row group; when this
is true, the main controller is responsible for managing its buffer so as
to make context rows available. In the current design, the main buffer is
- always a strip buffer; a full-image buffer is never retquired.
+ always a strip buffer; a full-image buffer is never required.
* Coefficient controller: buffer controller for the DCT-coefficient data.
This controller handles MCU disassembly, including deletion of any dummy
@@ -481,7 +481,7 @@ shown are:
* Color quantization: reduce the data to colormapped form, using either an
externally specified colormap or an internally generated one. This module
is not used for full-color output. Works on one pixel row at a time; may
- retquire two passes to generate a color map. Note that the output will
+ require two passes to generate a color map. Note that the output will
always be a single component representing colormap indexes. In the current
design, the output values are JSAMPLEs, so an 8-bit compilation cannot
quantize to more than 256 colors. This is unlikely to be a problem in
@@ -499,7 +499,7 @@ quantize in one step).
In addition to the above objects, the decompression library includes these
objects:
-* Master control: determines the number of passes retquired, controls overall
+* Master control: determines the number of passes required, controls overall
and per-pass initialization of the other modules. This is subdivided into
input and output control: jdinput.c controls only input-side processing,
while jdmaster.c handles overall initialization and output-side control.
@@ -592,7 +592,7 @@ specification that sample values run from -128..127 is accommodated by
subtracting 128 just as the sample value is copied into the source array for
the DCT step (this will be an array of signed ints). Similarly, during
decompression the output of the IDCT step will be immediately shifted back to
-0..255. (NB: different values are retquired when 12-bit samples are in use.
+0..255. (NB: different values are required when 12-bit samples are in use.
The code is written in terms of MAXJSAMPLE and CENTERJSAMPLE, which will be
defined as 255 and 128 respectively in an 8-bit implementation, and as 4095
and 2048 in a 12-bit implementation.)
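The level shift described in this hunk amounts to two tiny helpers. MAXJSAMPLE and CENTERJSAMPLE are libjpeg's real macro names; the values are hard-coded here for an 8-bit build, and the range limiting on the way back up is a common companion step rather than something this passage specifies.

```c
#include <assert.h>

/* 8-bit build values; a 12-bit build would use 4095 and 2048. */
#define MAXJSAMPLE    255
#define CENTERJSAMPLE 128

/* Shift applied as a sample is copied into the DCT input array. */
static int level_shift_down(int sample)   /* 0..MAXJSAMPLE -> signed */
{
    return sample - CENTERJSAMPLE;
}

/* Inverse shift applied to IDCT output, with range limiting. */
static int level_shift_up(int value)
{
    int sample = value + CENTERJSAMPLE;
    if (sample < 0)          sample = 0;
    if (sample > MAXJSAMPLE) sample = MAXJSAMPLE;
    return sample;
}
```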
@@ -602,7 +602,7 @@ choice costs only a small amount of memory and has several benefits:
* Code using the data structure doesn't need to know the allocated width of
the rows. This simplifies edge expansion/compression, since we can work
in an array that's wider than the logical picture width.
-* Indexing doesn't retquire multiplication; this is a performance win on many
+* Indexing doesn't require multiplication; this is a performance win on many
machines.
* Arrays with more than 64K total elements can be supported even on machines
where malloc() cannot allocate chunks larger than 64K.
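The row-pointer layout this hunk describes can be sketched as follows. The type names mirror libjpeg's JSAMPLE/JSAMPROW/JSAMPARRAY, but they are redefined locally and alloc_sarray here is a simplified stand-in for the memory manager's allocator (no error checking, no pooling).

```c
#include <assert.h>
#include <stdlib.h>

typedef unsigned char JSAMPLE;   /* as in an 8-bit build */
typedef JSAMPLE *JSAMPROW;       /* pointer to one sample row */
typedef JSAMPROW *JSAMPARRAY;    /* array of row pointers */

/* Allocate a 2-D sample array as separate rows plus a row-pointer
 * vector, so image[r][c] needs no multiplication and rows may be
 * allocated wider than the logical picture width. */
static JSAMPARRAY alloc_sarray(int rows, int alloc_width)
{
    JSAMPARRAY image = malloc(rows * sizeof(JSAMPROW));
    for (int r = 0; r < rows; r++)
        image[r] = calloc(alloc_width, sizeof(JSAMPLE));
    return image;
}

int demo_access(void)
{
    JSAMPARRAY img = alloc_sarray(2, 16); /* logical width may be < 16 */
    img[1][3] = 42;                       /* two indexes, no multiply */
    int v = img[1][3];
    free(img[0]); free(img[1]); free(img);
    return v;
}
```

Because each row is a separate allocation, no single chunk ever needs to exceed one row, which is what makes >64K images workable under a 64K malloc() limit.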
@@ -695,7 +695,7 @@ the entropy codec must be able to stop before having produced or consumed all
the data that they normally would handle in one call. That part is reasonably
straightforward: we make the controller call interfaces include "progress
counters" which indicate the number of data chunks successfully processed, and
-we retquire callers to test the counter rather than just assume all of the data
+we require callers to test the counter rather than just assume all of the data
was processed.
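The progress-counter convention can be sketched like this. process_some and drive are hypothetical, and "stops after at most 3 rows" merely stands in for a step whose data source ran dry mid-call; the essential point is that the caller tests the returned count instead of assuming everything was consumed.

```c
#include <assert.h>

/* A suspendable step: consumes up to `avail` rows but may stop
 * early (here, after at most 3 per call).  Returns the number of
 * rows actually processed -- the progress counter. */
static int process_some(int avail)
{
    return avail < 3 ? avail : 3;
}

/* Caller loop: keep resuming until the counters account for all
 * the data, rather than assuming one call did the whole job. */
int drive(int total_rows)
{
    int processed = 0, calls = 0;
    while (processed < total_rows) {
        processed += process_some(total_rows - processed);
        calls++;
    }
    return calls;   /* how many resumptions were needed */
}
```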
Rather than trying to restart at an arbitrary point, the current Huffman
@@ -715,7 +715,7 @@ bytes should be enough.
In a successive-approximation AC refinement scan, the progressive Huffman
decoder has to be able to undo assignments of newly nonzero coefficients if it
-suspends before the MCU is complete, since decoding retquires distinguishing
+suspends before the MCU is complete, since decoding requires distinguishing
previously-zero and previously-nonzero coefficients. This is a bit tedious
but probably won't have much effect on performance. Other variants of Huffman
decoding need not worry about this, since they will just store the same values
@@ -734,7 +734,7 @@ without causing problems; otherwise a 64K buffer would be needed in the worst
case.)
The JPEG marker writer currently does *not* cope with suspension. I feel that
-this is not necessary; it is much easier simply to retquire the application to
+this is not necessary; it is much easier simply to require the application to
ensure there is enough buffer space before starting. (An empty 2K buffer is
more than sufficient for the header markers; and ensuring there are a dozen or
two bytes available before calling jpeg_finish_compress() will suffice for the
@@ -770,8 +770,8 @@ peak memory usage would be about the same anyway; and having per-pass storage
substantially complicates the virtual memory allocation rules --- see below.)
The memory manager deals with three kinds of object:
-1. "Small" objects. Typically these retquire no more than 10K-20K total.
-2. "Large" objects. These may retquire tens to hundreds of K depending on
+1. "Small" objects. Typically these require no more than 10K-20K total.
+2. "Large" objects. These may require tens to hundreds of K depending on
image size. Semantically they behave the same as small objects, but we
distinguish them for two reasons:
* On MS-DOS machines, large objects are referenced by FAR pointers,
@@ -795,7 +795,7 @@ In the present implementation, virtual arrays are only permitted to have image
lifespan. (Permanent lifespan would not be reasonable, and pass lifespan is
not very useful since a virtual array's raison d'etre is to store data for
multiple passes through the image.) We also expect that only "small" objects
-will be given permanent lifespan, though this restriction is not retquired by
+will be given permanent lifespan, though this restriction is not required by
the memory manager.
In a non-virtual-memory machine, some performance benefit can be gained by
@@ -837,7 +837,7 @@ of the array contain garbage. (This feature exists primarily because the
equivalent logic would otherwise be needed in jdcoefct.c for progressive
JPEG mode; we may as well make it available for possible other uses.)
-The first write pass on a virtual array is retquired to occur in top-to-bottom
+The first write pass on a virtual array is required to occur in top-to-bottom
order; read passes, as well as any write passes after the first one, may
access the array in any order. This restriction exists partly to simplify
the virtual array control logic, and partly because some file systems may not
@@ -885,7 +885,7 @@ It may be necessary to ensure that backing store objects are explicitly
released upon abnormal program termination. For example, MS-DOS won't free
extended memory by itself. To support this, we will expect the main program
or surrounding application to arrange to call self_destruct (typically via
-jpeg_destroy) upon abnormal termination. This may retquire a SIGINT signal
+jpeg_destroy) upon abnormal termination. This may require a SIGINT signal
handler or equivalent. We don't want to have the back end module install its
own signal handler, because that would pre-empt the surrounding application's
ability to control signal handling.