With the synopsis of the recode
call, we stress the difference
between using this program as a file filter, or recoding many files
at once. The first parameter of any call states the recoding request,
and this deserves a section on its own. Options are then presented,
but somewhat grouped according to the related functionalities they
control.
recode
callrecode
call ¶The general format of the program call is one of:
recode [option]... [charset | request [file]... ]
Some calls are used only to obtain lists produced by Recode itself, without actually recoding any file. They are recognised through the usage of listing options, and these options decide what meaning should be given to an optional charset parameter. See Asking for various lists.
In other calls, the first parameter (request) always explains which transformations are expected on the files. There are many variations to the aspect of this parameter. We will discuss more complex situations later (see The request parameter), but for many simple cases, this parameter merely looks like this3:
before..after
where before and after each gives the name of a charset. Each
file will be read assuming it is coded with charset before, it
will be recoded over itself so to use the charset after. If there
is no file on the recode
command, the program rather acts
as a Unix filter and transforms standard input onto standard output.
The capability of recoding many files at once is very convenient. For example, one could easily prepare a distribution from Latin-1 to MSDOS, this way:
mkdir package cp -p Makefile *.[ch] package recode Latin-1..MSDOS package/* zoo ah package.zoo package/* rm -rf package
(In this example, the non-mandatory ‘-p’ option to cp
is for
preserving timestamps, and the zoo
program is an archiver from
Rahul Dhesi which once was quite popular.)
The filter operation is especially useful when the input files should not be altered. Let us make an example to illustrate this point. Suppose that someone has a file named datum.txt, which is almost a TeX file, except that diacriticised characters are written using Latin-1. To complete the recoding of the diacriticised characters only and produce a file datum.tex, without destroying the original, one could do:
cp -p datum.txt datum.tex recode -d l1..tex datum.tex
However, using recode
as a filter will achieve the same goal more
neatly:
recode -d l1..tex <datum.txt >datum.tex
This example also shows that l1
could be used instead of
Latin-1
; charset names often have such aliases.
Recode has three modes are for when to set the exit status to non-zero:
In the case where the request is merely written as before..after, then before and after specify the start charset and the goal charset for the recoding.
For Recode, charset names may contain any character, besides a comma, a forward slash, or two periods in a row. But in practice, charset names are currently limited to alphabetic letters (upper or lower case), digits, hyphens, underlines, periods, colons or round parentheses.
The complete syntax for a valid request allows for unusual things, which might be surprising at first. (Do not pay too much attention to these facilities on first reading.) For example, request may also contain intermediate charsets, like in the following example:
before..interim1..interim2..after
meaning that Recode should internally produce the interim1 charset from the start charset, then work out of this interim1 charset to internally produce interim2, and from there towards the goal charset. In fact, Recode internally combines recipes and automatically uses interim charsets, when there is no direct recipe for transforming before into after. But there might be many ways to do it. When many routes are possible, the above chaining syntax may be used to more precisely force the program towards a particular route, which it might not have naturally selected otherwise. On the other hand, because Recode tries to choose good routes, chaining is only needed to achieve some rare, unusual effects.
Moreover, many such requests (sub-requests, more precisely) may be separated with commas (but no spaces at all), indicating a sequence of recodings, where the output of one has to serve as the input of the following one. For example, the two following requests are equivalent:
before..interim1..interim2..after before..interim1,interim1..interim2,interim2..after
In this example, the charset input for any recoding sub-request is identical to the charset output by the preceding sub-request. But it does not have to be so in the general case. One might wonder what would be the meaning of declaring the charset input for a recoding sub-request of being of different nature than the charset output by a preceding sub-request, when recodings are chained in this way. Such a strange usage might have a meaning and be useful for the Recode expert, but they are quite uncommon in practice.
More useful is the distinction between the concept of charset, and the concept of surfaces. An encoded charset is represented by:
pure-charset/surface1/surface2...
using slashes to introduce surfaces, if any. The order of application of surfaces is usually important, they cannot be freely commuted. In the given example, surface1 is first applied over the pure-charset, then surface2 is applied over the result. Given this request:
before/surface1/surface2..after/surface3
Recode will understand that the input files should have surface2 removed first (because it was applied last), then surface1 should be removed. The next step will be to translate the codes from charset before to charset after, prior to applying surface3 over the result.
Some charsets have one or more implied surfaces. In this case, the
implied surfaces are automatically handled merely by naming the charset,
without any explicit surface to qualify it. Let’s take an example to
illustrate this feature. The request ‘pc..l1’ will indeed decode MS-DOS
end of lines prior to converting IBM-PC codes to Latin-1, because ‘pc’
is the name of a charset4 which has CR-LF
for its usual surface.
The request ‘pc/..l1’ will not decode end of lines, since
the slash introduces surfaces, and even if the surface list is empty, it
effectively defeats the automatic removal of surfaces for this charset.
So, empty surfaces are useful, indeed!
Both charsets and surfaces may have predefined alternate names, or aliases. However, and this is rather important to understand, implied surfaces are attached to individual aliases rather than on genuine charsets. Consequently, the official charset name and all of its aliases do not necessarily share the same implied surfaces. The charset and all its aliases may each have its own different set of implied surfaces.
Charset names, surface names, or their aliases may always be abbreviated to any unambiguous prefix. Internally in Recode, disambiguating tables are kept separate for charset names and surface names.
While recognising a charset name or a surface name (or aliases thereof), Recode ignores all characters besides letters and digits, so for example, the hyphens and underlines being part of an official charset name may safely be omitted (no need to un-confuse them!). There is also no distinction between upper and lower case for charset or surface names.
One of the before or after keywords may be omitted. If the double dot separator is omitted too, then the charset is interpreted as the before charset.5
When a charset name is omitted or left empty, the value of the
DEFAULT_CHARSET
variable in the environment is used instead.
If this variable is not defined, the Recode library uses the current locale’s
encoding. On POSIX systems, this depends on the first non-empty value
among the environment variables LC_ALL
, LC_CTYPE
,
and LANG
, and can be determined through the
command ‘locale charmap’. If the current locale’s encoding may not
be resolved, then Recode presumes ASCII
.
If the charset name is omitted but followed by surfaces, the surfaces then qualify the usual or default charset. For example, the request ‘../x’ is sufficient for applying an hexadecimal surface to the input text6.
The allowable values for before or after charsets, and various surfaces, are described in the remainder of this document.
Many options control listing output generated by Recode itself, they are not meant to accompany actual file recodings. These options are:
The program merely prints its version numbers on standard output, and exits without doing anything else.
The program merely prints a page of help on standard output, and exits without doing any recoding.
Given this option, all other parameters and options are ignored. The program prints briefly the copyright and copying conditions. See the file COPYING in the distribution for full statement of the Copyright and copying conditions.
Instead of recoding files, Recode writes a language source file on standard output and exits. This source is meant to be included in a regular program written in the same programming language: its purpose is to declare and initialise an array, named name, which represents the requested recoding. The only acceptable values for language are ‘c’ or ‘perl’, and may may be abbreviated. If language is not specified, ‘c’ is assumed. If name is not specified, then it defaults to ‘before_after’. Strings before and after are cleaned before being used according to the syntax of language.
Even if Recode tries its best, this option does not always succeed in
producing the requested source table, it then prints ‘Recoding
is too complex for a mere table’. It will succeed however, provided
the recoding can be internally represented by only one step after the
optimisation phase, and if this merged step conveys a one-to-one or
a one-to-many explicit table. To increase the probability that this
happens, iconv
initialisation is currently inhibited whenever
this option is used. Also, when attempting to produce sources tables,
Recode relaxes its checking a tiny bit: it ignores the algorithmic part
of some tabular recodings, it also avoids the processing of implied
surfaces. But this is all fairly technical. Better try and see!
Most tables are produced using decimal numbers to refer to character values7. Yet, users who know all Recode tricks and stunts could indeed force octal or hexadecimal output for the table contents. For example:
recode ibm297/test8..cp1252/x < /dev/null
produces a sequence of hexadecimal values which represent a conversion
table from IBM297
to CP1252
.
Beware that other options might affect the produced source tables, these are: ‘-d’, ‘-g’ and, particularly, ‘-s’.
This particular option is meant to help identifying an unknown charset, using as hints some already identified characters of the charset. Some examples will help introducing the idea.
Let’s presume here that Recode is run in a UTF-8 locale, and
that DEFAULT_CHARSET
is unset in the environment.
Suppose you have guessed that code 130 (decimal) of the unknown charset
represents a lower case ‘e’ with an acute accent. That is to say
that this code should map to code 233 (decimal) in the usual charset.
By executing:
recode -k 130:233
you should obtain a listing similar to:
AtariST CWI cp-hu CWI-2 IBM437/CR-LF 437/CR-LF CP437/CR-LF IBM850/CR-LF 850/CR-LF CP850/CR-LF IBM851/CR-LF 851/CR-LF CP851/CR-LF IBM852/CR-LF 852/CR-LF CP852/CR-LF pcl2 pclatin2 IBM857/CR-LF 857/CR-LF CP857/CR-LF IBM860/CR-LF 860/CR-LF CP860/CR-LF IBM861/CR-LF 861/CR-LF CP861/CR-LF cp-is IBM863/CR-LF 863/CR-LF CP863/CR-LF IBM865/CR-LF 865/CR-LF CP865/CR-LF
You can give more than one clue at once, to restrict the list further. Suppose you have also guessed that code 211 of the unknown charset represents an upper case ‘E’ with diaeresis, that is, code 203 in the usual charset. By requesting:
recode -k 130:233,211:203
you should obtain:
IBM850/CR-LF 850/CR-LF CP850/CR-LF IBM852/CR-LF 852/CR-LF CP852/CR-LF pcl2 pclatin2 IBM857/CR-LF 857/CR-LF CP857/CR-LF
The usual charset may be overridden by specifying one non-option argument. For example, to request the list of charsets for which code 130 maps to code 142 for the Macintosh, you may ask:
recode -k 130:142 mac
and get:
AtariST CWI cp-hu CWI-2 IBM437/CR-LF 437/CR-LF CP437/CR-LF IBM850/CR-LF 850/CR-LF CP850/CR-LF IBM851/CR-LF 851/CR-LF CP851/CR-LF IBM852/CR-LF 852/CR-LF CP852/CR-LF pcl2 pclatin2 IBM857/CR-LF 857/CR-LF CP857/CR-LF IBM860/CR-LF 860/CR-LF CP860/CR-LF IBM861/CR-LF 861/CR-LF CP861/CR-LF cp-is IBM863/CR-LF 863/CR-LF CP863/CR-LF IBM865/CR-LF 865/CR-LF CP865/CR-LF
which, of course, is identical to the result of the first example, since the code 142 for the Macintosh is a small ‘e’ with acute.
More formally, option ‘-k’ lists all possible before
charsets for the after charset given as the sole non-option
argument to recode
, but subject to restrictions given in
pairs. If there is no non-option argument, the after
charset is taken to be the default charset for this recode
.
The restrictions are given as a comma separated list of pairs, each pair consisting of two numbers separated by a colon. The numbers are taken as decimal when the initial digit is between ‘1’ and ‘9’; ‘0x’ starts an hexadecimal number, or else ‘0’ starts an octal number. The first number is a code in any before charset, while the second number is a code in the specified after charset. If the first number would not be transformed into the second number by recoding from some before charset to the after charset, then this before charset is rejected. A before charset is listed only if it is not rejected by any pair. The program will only test those before charsets having a tabular style internal description (see Tabular sources (RFC 1345)), so should be the selected after charset.
The produced list is in fact a subset of the list produced by the option ‘-l’. As for option ‘-l’, the non-option argument is interpreted as a charset name, possibly abbreviated to any non ambiguous prefix.
This option asks for information about all charsets, or about one particular charset. No file will be recoded.
If there is no non-option arguments, Recode ignores the format value of the option, it writes a sorted list of charset names on standard output, one per line. When a charset name have aliases or synonyms, they follow the true charset name on its line, sorted from left to right. Each charset or alias is followed by its implied surfaces, if any. This list is over two hundred lines. It is best used with ‘grep -i’, as in:
recode -l | grep -i greek
Within a collection of names for a single charset, the Recode
library distinguishes one of them as being the genuine charset name,
while the others are said to be aliases. The list normally integrates
all charsets from the external iconv
library, unless this is
defeated through options like ‘--ignore=:iconv:’ or ‘-x:’.
The portable libiconv
library relates its own aliases of a same
charset, and for a given set of aliases, if none of them are known to
Recode already, then Recode will pick one as being the
genuine charset. The iconv
library within GNU libc
makes
all aliases appear as different charsets, and each will be presented as
a charset by Recode, unless it is known otherwise.
There might be one non-option argument, in which case it is interpreted as a charset name, possibly abbreviated to any non ambiguous prefix. This particular usage of the ‘-l’ option is obeyed only for charsets having a tabular style internal description (see Tabular sources (RFC 1345)). Even if most charsets have this property, some do not, and the option ‘-l’ cannot be used to detail these particular charsets. For knowing if a particular charset can be listed this way, you should merely try and see if this works. The format value of the option is a keyword from the following list. Keywords may be abbreviated by dropping suffix letters, and even reduced to the first letter only:
This format asks for the production on standard output of a concise tabular display of the charset, in which character code values are expressed in decimal.
This format uses octal instead of decimal in the concise tabular display of the charset.
This format uses hexadecimal instead of decimal in the concise tabular display of the charset.
This format requests an extensive display of the charset on standard output,
using one line per character showing its decimal, hexadecimal, octal and
UCS-2
code values, and also a descriptive comment which should be
the 10646 name for the character.
The descriptive comment is given in English and ASCII, yet if the English
description is not available but a French one is, then the French description
is given instead, using Latin-1. However, if the LC_MESSAGES
environment variable begins with the letters ‘fr’, then listing
preference goes to French when both descriptions are available.
When option ‘-l’ is used together with a charset argument,
the format defaults to decimal
.
This option is a maintainer tool for evaluating the redundancy of those
charsets, in Recode, which are internally represented by an UCS-2
data table. After the listing has been produced, the program exits
without doing any recoding. The output is meant to be sorted, like
this: ‘recode -T | sort’. The option triggers Recode into
comparing all pairs of charsets, seeking those which are subsets of others.
The concept and results are better explained through a few examples.
Consider these three sample lines from ‘-T’ output:
[ 0] IBM891 == IBM903 [ 1] IBM1004 < CP1252 [ 12] INVARIANT < CSA_Z243.4-1985-1
The first line means that IBM891
and IBM903
are completely
identical as far as Recode is concerned, so one is fully redundant
to the other. The second line says that IBM1004
is wholly
contained within CP1252
, yet there is a single character which is
in CP1252
without being in IBM1004
. The third line says
that INVARIANT
is wholly contained within CSA_Z243.4-1985-1
,
but twelve characters are in CSA_Z243.4-1985-1
without being in
INVARIANT
. The whole output might most probably be reduced and
made more significant through a transitivity study.
The following options have the purpose of giving the user some fine grain control over the recoding operation themselves.
With Texte
Easy French conventions, use the column :
instead of the double-quote " for marking diaeresis.
See Easy French conventions.
This option is only meaningful while getting out of the
IBM-PC
charset. In this charset, characters 176 to 223 are used
for constructing rulers and boxes, using simple or double horizontal or
vertical lines. This option forces the automatic selection of ASCII
characters for approximating these rulers and boxes, at cost of making
the transformation irreversible. Option ‘-g’ implies ‘-f’.
The touch option is meaningful only when files are recoded over themselves. Without it, the time-stamps associated with files are preserved, to reflect the fact that changing the code of a file does not really alter its informational contents. When the user wants the recoded files to be time-stamped at the recoding time, this option inhibits the automatic protection of the time-stamps.
Before doing any recoding, the program will first print on the stderr
stream the list of all intermediate charsets planned for recoding, starting
with the before charset and ending with the after charset.
It also prints an indication of the recoding quality, as one of the word
‘reversible’, ‘one to one’, ‘one to many’, ‘many to
one’ or ‘many to many’.
This information will appear once or twice. It is shown a second time only when the optimisation and step merging phase succeeds in replacing many single steps by a new one.
This option also has a second effect. The program will print on
stderr
one message per recoded file, so as to keep the user
informed of the progress of its command.
An easy way to know beforehand the sequence or quality of a recoding is by using the command such as:
recode -v before..after < /dev/null
using the fact that, in Recode, an empty input file produces an empty output file.
This option tells the program to ignore any recoding path through the specified charset, so disabling any single step using this charset as a start or end point. This may be used when the user wants to force Recode into using an alternate recoding path (yet using chained requests offers a finer control, see The request parameter).
charset may be abbreviated to any unambiguous prefix.
The following options are somewhat related to reversibility issues:
With this option, irreversible or otherwise erroneous recodings are run
to completion, and recode
does not exit with a non-zero status if
it would be only because irreversibility matters. See Reversibility issues.
Without this option, Recode tries to protect you against recoding
a file irreversibly over itself8. Whenever an irreversible recoding is
met, or any other recoding error, recode
produces a warning on
standard error. The current input file does not get replaced by its
recoded version, and recode
then proceeds with the recoding of
the next file.
When the program is merely used as a filter, standard output will have
received a partially recoded copy of standard input, up to the first
error point. After all recodings have been done or attempted, and if
some recoding has been aborted, recode
exits with a non-zero status.
This option has the sole purpose of inhibiting warning messages about
irreversible recodings, and other such diagnostics. It has no other
effect, in particular, it does not prevent recodings to be aborted
or recode
to return a non-zero exit status when irreversible
recodings are met.
This option is set automatically for the children processes, when recode splits itself in many collaborating copies. Doing so, the diagnostic is issued only once by the parent. See option ‘-p’.
By using this option, the user requests that Recode be very strict while recoding a file, merely losing in the transformation any character which is not explicitly mapped from a charset to another. Such a loss is not reversible and so, will bring Recode to fail, unless the option ‘-f’ is also given as a kind of counter-measure.
Using ‘-s’ without ‘-f’ might render Recode very susceptible to the slighest file abnormalities. Despite the fact that it might be irritating to some users, such paranoia is sometimes wanted and useful.
Even if Recode tries hard to keep the recodings reversible, you should not develop an unconditional confidence in its ability to do so. You ought to keep only reasonable expectations about reverse recodings. In particular, consider:
IBM-PC
to Latin-1
. End of lines are represented as
‘\r\n’ in IBM-PC
and as ‘\n’ in Latin-1
. There
is no way by which a faulty IBM-PC
file containing a ‘\n’
not preceded by ‘\r’ be translated into a Latin-1
file, and
then back.
LaTeX
charset file, the string ‘\^\i{}’
could be recoded back and forth through another charset and become
‘\^{\i}’. Even if the resulting file is equivalent to the
original one, it is not identical.
Unless option ‘-s’ is used, Recode automatically tries to fill mappings with invented correspondences, often making them fully reversible. This filling is not made at random. The algorithm tries to stick to the identity mapping and, when this is not possible, it prefers generating many small permutation cycles, each involving only a few codes.
For example, here is how IBM-PC
code 186 gets translated to
control-U in Latin-1
. Control-U is 21. Code 21 is the
IBM-PC
section sign, which is 167 in Latin-1
. Recode
cannot reciprocate 167 to 21, because 167 is the masculine ordinal indicator
within IBM-PC
, which is 186 in Latin-1
. Code 186 within
IBM-PC
has no Latin-1
equivalent; by assigning it back to 21,
Recode closes this short permutation loop.
As a consequence of this map filling, Recode may sometimes produce funny characters. They may look annoying, they are nevertheless helpful when one changes his (her) mind and wants to revert to the prior recoding. If you cannot stand these, use option ‘-s’, which asks for a very strict recoding.
This map filling sometimes has a few surprising consequences, which some users wrongly interpreted as bugs. Here are two examples.
recode l1..us < File-Latin1 > File-ASCII cmp File-Latin1 File-ASCII
then cmp
will not report any difference. This is quite normal.
Latin-1
gets correctly recoded to ASCII for charsets commonalities
(which are the first 128 characters, in this case). The remaining last
128 Latin-1
characters have no ASCII correspondent. Instead
of losing
them, Recode elects to map them to unspecified characters of ASCII, so
making the recoding reversible. The simplest way of achieving this is
merely to keep those last 128 characters unchanged. The overall effect
is copying the file verbatim.
If you feel this behaviour is too generous and if you do not wish to
care about reversibility, simply use option ‘-s’. By doing so,
Recode will strictly map only those Latin-1
characters
which have
an ASCII equivalent, and will merely drop those which do not. Then,
there is more chance that you will observe a difference between the
input and the output file.
recode 437..l1 < File-Latin1 > Temp1 recode 437..l1 < Temp1 > Temp2
so declaring wrongly File-Latin1 to be an IBM-PC file, and
recoding to Latin-1
. This is surely ill defined and not meaningful.
Yet, if you repeat this step a second time, you might notice that
many (not all) characters in Temp2 are identical to those in
File-Latin1. Sometimes, people try to discover how Recode
works by experimenting a little at random, rather than reading and
understanding the documentation; results such as this are surely confusing,
as they provide those people with a false feeling that they understood
something.
Reversible codings have this property that, if applied several times in the same direction, they will eventually bring any character back to its original value. Since Recode seeks small permutation cycles when creating reversible codings, besides characters unchanged by the recoding, most permutation cycles will be of length 2, and fewer of length 3, etc. So, it is just expectable that applying the recoding twice in the same direction will recover most characters, but will fail to recover those participating in permutation cycles of length 3. On the other end, recoding six times in the same direction would recover all characters in cycles of length 1, 2, 3 or 6.
Recode can split itself into multiple parallel processes when it is discovered that many passes are needed to comply with the request. For example, suppose that four elementary steps were selected at recoding path optimisation time. Then Recode will split itself into four different interconnected tasks, logically equivalent to:
step1 <input | step2 | step3 | step4 >output
On systems where the pipes method is not available, the steps are performed in series.
When the recoding requires a combination of two or more elementary recoding steps, this option forces many passes over the data, using in-memory buffers to hold all intermediate results. If this option is selected in filter mode, that is, when the program reads standard input and writes standard output, it might take longer for programs further down the pipe chain to start receiving some recoded data.
When the recoding requires a combination of two or more elementary
recoding steps, this option forces the program to fork itself into a few
copies interconnected with pipes, using the pipe(2)
system call.
All copies of the program operate in parallel. This is the default
behaviour in filter mode. If this option is used when files are recoded
over themselves, this should also save disk space because some temporary
files might not be needed, at the cost of more system overhead.
This option is accepted for backwards compatibility, and acts like ‘--sequence=memory’.
In real life and practice, textual files are often made up of many charsets at once. Some parts of the file encode one charset, while other parts encode another charset, and so forth. Usually, a file does not toggle between more than two or three charsets. The means to distinguish which charsets are encoded at various places is not always available. Recode is able to handle only a few simple cases of mixed input.
The default Recode behaviour is to expect pure charset files, to be recoded as other pure charset files. However, the following options allow for a few precise kinds of mixed charset files.
While converting to or from one of HTML
, LaTeX
or BibTeX
charset, limit conversion to some subset of all characters.
For HTML
, limit conversion to the subset of all non-ASCII
characters. For LaTeX
or BibTeX
, limit conversion to the subset of all
non-English letters. This is particularly useful, for example, when
people create what would be valid HTML
, TeX or LaTeX
files, if only they were using provided sequences for applying
diacritics instead of using the diacriticised characters directly
from the underlying character set.
While converting to HTML
, LaTeX
or BibTeX
charset, this option
assumes that characters not in the said subset are properly coded
or protected already; Recode then transmits them literally.
While converting the other way, this option prevents translating back
coded or protected versions of characters not in the said subset.
See World Wide Web representations. See LaTeX macro calls. See BibTeX macro calls.
The bulk of the input file is expected to be written in ASCII
,
except for parts, like comments and string constants, which are written
using another charset than ASCII
. When language is ‘c’,
the recoding will proceed only with the contents of comments or strings,
while everything else will be copied without recoding. When language
is ‘po’, the recoding will proceed only within translator comments
(those having whitespace immediately following the initial ‘#’)
and with the contents of msgstr
strings.
For the above things to work, the non-ASCII
encoding of the comment
or string should be such that an ASCII
scan will successfully find
where the comment or string ends.
Even if ASCII
is the usual charset for writing programs, some
compilers are able to directly read other charsets, like UTF-8
, say.
There is currently no provision in Recode for reading mixed charset
sources which are not based on ASCII
. It is probable that the need
for mixed recoding is not as pressing in such cases.
For example, after one does:
recode -Spo pc/..u8 < input.po > output.po
file output.po holds a copy of input.po in which
only translator comments and the contents of msgstr
strings
have been recoded from the IBM-PC
charset to pure UTF-8
,
without attempting conversion of end-of-lines. Machine generated comments
and original msgid
strings are not to be touched by this recoding.
If language is not specified, ‘c’ is assumed.
The fact the recode
program acts as a filter, when given no
file arguments, makes it quite easy to use from within GNU Emacs. For
example, recoding the whole buffer from the IBM-PC
charset to
current charset (for example, UTF-8
on Unix) is easily done
with:
C-x h C-u M-| recode ibmpc RET
‘C-x h’ selects the whole buffer, and ‘C-u M-|’ filters and
replaces the current region through the given shell command. Here is
another example, binding the keys ‘C-c T’ to the recoding of
the current region from Easy French to Latin-1
(on Unix) and the key
‘C-u C-c T’ from Latin-1
(on Unix) to Easy French:
(global-set-key "\C-cT" 'recode-texte) (defun recode-texte (flag) (interactive "P") (shell-command-on-region (region-beginning) (region-end) (concat "recode " (if flag "..txte" "txte")) t) (exchange-point-and-mark))
It is our experience that when Recode does not provide satisfying
results, either the recode
program was not called properly,
correct results raised some doubts nevertheless, or files to recode were
somewhat mangled. Genuine bugs are surely possible.
Unless you already are a Recode expert, it might be a good idea to quickly revisit the tutorial (see Quick Tutorial) or the prior sections in this chapter, to make sure that you properly formatted your recoding request. In the case you intended to use Recode as a filter, make sure that you did not forget to redirect your standard input (through using the < symbol in the shell, say). Some Recode false mysteries are also easily explained, See Reversibility issues.
For the other cases, some investigation is needed. To illustrate how to proceed, let’s presume that you want to recode the file nicepage, coded in UTF-8, into HTML. The problem is that the command ‘recode u8..h nicepage’ yields:
recode: Invalid input in step `UTF-8..ISO-10646-UCS-2'
One good trick is to use recode in filter mode instead of in file replacement mode (see Synopsis of recode call). Another good trick is to use the ‘-v’ option, asking for a verbose description of the recoding steps. We could rewrite our recoding call as ‘recode -v u8..h <nicepage’, to get something like:
Request: UTF-8..:iconv:..ISO-10646-UCS-2..HTML_4.0
Shrunk to: UTF-8..ISO-10646-UCS-2..HTML_4.0
[...some output...]
recode: Invalid input in step `UTF-8..ISO-10646-UCS-2'
This might help you to better understand what the diagnostic means. The recoding request is achieved in two steps: the first recodes UTF-8 into UCS-2, the second recodes UCS-2 into HTML.
The problem occurs within the first of these two steps, and since the input of this step is the input file given to Recode, it is this overall input file which seems to be invalid. Also, when used in filter mode, Recode processes as much input as possible before the error occurs, and sends the result of this processing to standard output. Since the standard output has not been redirected to a file, it is merely displayed on the user's screen. By inspecting the end of the resulting HTML output, that is, what was recoded just before the recoding was interrupted, you may infer where the error stands in the real UTF-8 input file.
If you have the proper tools to examine the intermediate recoding data, you might also prefer to reduce the problem to a single step to better study it. This is what I usually do. For example, the last recode call above is more or less equivalent to:
recode -v UTF-8..ISO-10646-UCS-2 <nicepage >temporary
recode -v ISO-10646-UCS-2..HTML_4.0 <temporary
rm temporary
If you know that the problem is within the first step, you might prefer to concentrate on the first recode line. If you know that the problem is within the second step, you might execute the first recode line once and for all, and then play with the second recode call repeatedly, using the temporary file created once by the first call.
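With only standard tools, one can then peek at the bytes near the end of such a temporary file to locate the offending spot. The file content below is purely illustrative, mixing a valid UTF-8 sequence with a stray Latin-1 byte:

```shell
# 'café' in valid UTF-8 (\303\251), then a lone Latin-1 e-acute (\351),
# which is invalid as UTF-8 and would trip the UTF-8..UCS-2 step.
printf 'ok caf\303\251 bad \351\n' > temporary

# Dump the tail of the file in octal and character form; the stray
# \351 byte stands out among the printable characters.
od -c temporary | tail -n 3

rm temporary
```

Any byte above \177 that is not part of a well-formed multibyte sequence is a likely culprit.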
Note that the ‘-f’ switch may be used to force the production of HTML output despite invalid input. This might be satisfying enough for you, and easier than repairing the input file. That depends on how strict you would like to be about the precision of the recoding process.
If you later see that your HTML file begins with ‘&lt;html&gt;’ when you expected ‘<html>’, then Recode might have done a bit more than you wanted. In this case, your input file was already half UTF-8, half HTML, that is, a mixed file (see Using mixed charset input). There is a special -d switch for this case, so you might end up calling ‘recode -fd u8..h nicepage’. Unless you are quite sure that you accept overwriting your input file no matter what, I recommend that you stick with filter mode.
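In filter mode, that last call would look like the following (the file names are illustrative, and this assumes the recode program is installed):

```shell
# Force output despite invalid input (-f) and recode only diacritics
# and alike (-d), leaving HTML markup already present in the input
# untouched; the original file nicepage is not modified.
printf '<p>caf\303\251</p>\n' > nicepage        # illustrative mixed input
recode -fd u8..h < nicepage > nicepage.html
rm -f nicepage nicepage.html
```

Once the output looks right, the same request can be repeated without the redirections to recode the file in place.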
If, after such experiments, you seriously think that Recode does not behave properly, there might be a genuine bug either in the program or the library itself, in which case I invite you to contribute a bug report; see Contributions and bug reports.
In previous versions of Recode, a single colon ‘:’ was used instead of the two dots ‘..’ for separating charsets, but this created problems, because colons are allowed in official charset names.
More precisely, pc is an alias for the charset IBM-PC.
Both before and after may be omitted, in which case the double dot separator is mandatory. This is not very useful, as the recoding reduces to a mere copy in that case.
MS-DOS is one of those systems for which the default charset has implied surfaces, CR-LF here. Such surfaces are automatically removed or applied whenever the default charset is read or written, exactly as it would go for any other charset. In the example above, on such systems, the hexadecimal surface would then replace the implied surfaces. For adding a hexadecimal surface without removing any, one should write the request as ‘/../x’.
The author of Recode much prefers expressing numbers in decimal rather than in octal or hexadecimal, as he considers that the current state of technology should no longer force such strange things on users. But Unicode people see things differently, to the point that Recode cannot escape being tainted with some hexadecimal.
There are still some cases of ambiguous output which are rather difficult to detect, and for which the protection is not active.