\section*{New HBOOK file format}
The HBOOK files created by the new version of HBOOK are by default in
exchange mode. They can be transported between machines using the standard
binary FTP or they can be NFS mounted in a heterogeneous environment.
The old HBOOK files can still be processed by HBOOK and PAW.
A conversion program, \Lit{HTONEW}, is available to convert old files to
the new format. It also automatically converts old-format Ntuples
to the new Ntuples described below.
It is available in the usual places where executables are kept on the
various machines.
\subsection*{Transferring HBOOK files between systems}
\subsubsection*{Between Unix machines with FTP}
\begin{XMP}
ftp remote
bin
get remote.hbook
\end{XMP}
\subsubsection*{CERNVM to Unix workstation}
On CERNVM type the following:
\begin{XMP}
FTP workstation
BIN F 4096 ! block size in bytes
PUT file.hbook
\end{XMP}
\subsubsection*{On VMS from any machine}
\begin{XMP}
ftp remote
bin
get remote.hbook
quit
resize -s 4096 remote.hbook;1
\end{XMP}
The \Lit{resize} command does not copy the file.
It simply changes the header information from 512-byte records to 4096-byte records.
The \Lit{resize} tool is available on request from the CERN Program Library.
\subsection*{Proposed HBOOK file naming convention}
Users are encouraged to name their HBOOK files with the suffix \Lit{.hbook}, so
that the PAW++ browser will be able to recognize these files automatically.
\subsection*{Maximum default size for HBOOK files}
The default maximum file size for HBOOK files has been changed from 4000 blocks
to 16000 blocks.
\section*{New Ntuples in HBOOK}
With the new version of PAW/HBOOK the new Ntuple routines, as
described in the current HBOOK manual, become operational.
The more important aspects of the new Ntuples are discussed below.
For more details see section 3.2 of the HBOOK manual.
\subsection*{Data Types}
The new Ntuples support the storage of all basic data types: floating
point numbers (REAL*4 and REAL*8), integers, bit patterns
(unsigned integers), booleans and character strings.
The PAW command \Lit{UWFUNC} has been modified to generate the COMIS functions
with the corresponding data types.
\subsection*{Data Compression}
Floating point numbers, integers and bit patterns can be packed by
specifying a range of values or by explicitly specifying the number
of bits that should be used to store the data. Booleans are always
stored using one bit. Unused trailing array elements will not be
stored when an array depends on an index variable. In that case only
as many array elements will be stored as specified by the index
variable.
For example, the array definition \Lit{NHITS(NTRACK)}
defines \Lit{NHITS} to depend on the index variable \Lit{NTRACK}.
When \Lit{NTRACK} is 16, the elements \Lit{NHITS(1..16)} are stored,
when \Lit{NTRACK} is 3, only elements \Lit{NHITS(1..3)} are stored, etc.
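As a sketch (the exact form-string syntax is described in the HBOOK manual, and
the identifier, block and variable names here are illustrative only), such
packed columns might be declared in a call to \Rind{HBNAME} as:
\begin{XMP}
* Sketch: pack NTRACK into the range 0..100 and each
* NHITS element into the range 0..50
      CALL HBNAME(10,'TRACKS',NTRACK,
     +            'NTRACK[0,100]:I, NHITS(NTRACK)[0,50]:I')
\end{XMP}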
\subsection*{User Routines}
A new Ntuple is booked and defined using the routines: \Rind{HBNT},
\Rind{HBNAME} and \Rind{HBNAMC}.
They are filled using the routines: \Rind{HFNT} and \Rind{HFNTB}.
Information is retrieved using:
\Rind{HGNT}, \Rind{HGNTB}, \Rind{HGNTV} and \Rind{HGNTF}.
Note that routine \Rind{HGN} cannot be used to retrieve
information from a new Ntuple.
Global Ntuple options (like buffer size) are set using \Rind{HBSET}.
The Ntuple definition can be printed using \Rind{HPRNT} and a user
function to access the Ntuple data can be created with \Rind{HUWFUN}.
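A minimal booking and filling sequence might look as follows (a sketch;
identifiers, block and variable names are illustrative only, and the exact
calling sequences are given in the HBOOK manual):
\begin{XMP}
* Sketch of the new-Ntuple calling sequence
      COMMON/EVT/PX,PY,PZ
      CALL HBSET('BSIZE',1024,IERR)
      CALL HBNT(10,'Example Ntuple',' ')
      CALL HBNAME(10,'EVT',PX,'PX:R, PY:R, PZ:R')
      DO 10 IEV=1,NEVENT
*        ... compute PX,PY,PZ for this event ...
         CALL HFNT(10)
   10 CONTINUE
      CALL HPRNT(10)
\end{XMP}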
\subsection*{Storage Model}
To improve data access time and to facilitate the compression
mechanism, new Ntuples are stored column-wise, as opposed to
row-wise for old Ntuples.
Column-wise storage allows direct access to any column in the Ntuple.
Histogramming one column from a 300 column Ntuple requires reading
only 1/300 of the total data set.
However, this storage scheme requires one memory buffer per column
as opposed to only one buffer in total for the old Ntuples.
By default the buffer length is 1024 words,
in which case a 100 column Ntuple requires 409600 bytes of buffer space.
In general, performance increases with increasing buffer size.
Therefore, one should tune the buffer size (using routine \Rind{HBSET})
as a function of the number of columns and the amount of available memory.
Highest efficiency is obtained when setting the buffer size equal to the
record length of the RZ HBOOK file (as specified in the call to
\Rind{HROPEN}).
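For example (a sketch; the logical unit, top-directory and file names are
illustrative), the buffer size can be matched to the record length given
to \Rind{HROPEN}:
\begin{XMP}
      CALL HROPEN(1,'EXAM','example.hbook','N',1024,ISTAT)
      CALL HBSET('BSIZE',1024,IERR)
\end{XMP}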
A further advantage of column-wise storage is that Ntuples
can easily be extended with one or more new columns.
Columns are logically grouped into blocks (physically, however, all
columns are independent).
Blocks allow users to extend Ntuples with private columns or to
group relevant columns together.
New blocks can even be defined after the Ntuple has been filled.
The newly created blocks can be filled using the routine \Rind{HFNTB}.
Note that arrays are treated as a single column.
This means that defining only one array of \Lit{NVAR} elements
reproduces the behaviour of the old Ntuples
(with, in addition, data typing and data compression).
It is however not recommended to use this technique,
since the direct column access capabilities of the new Ntuples are lost.
\subsection*{Performance}
Accessing only a small subset of the defined columns results in a
large performance gain compared to the old Ntuples.
However, reading a complete Ntuple will take
slightly longer than reading an old Ntuple due to the overhead
introduced by the type checking and compression mechanisms and
because the data is not stored sequentially on disk.
This performance increase shows up most clearly when analyzing
new Ntuples with PAW, where one typically histograms one
column with cuts on a few other columns.
The advantages of having different data types and data compression generally
outweigh the performance penalty incurred when reading complete Ntuples.
\section*{New or modified HBOOK routines}
\subsection*{New routines}
The routine \Rind{HQUAD} has been added to HBOOK.
It was implemented by J.Allison (OPAL/Manchester) and is
called automatically
in PAW by the existing command \Lit{SMOOTH}
(see page~\pageref{sec:SMOOTH}).
\begin{XMP}
\begin{Bcommand}
CALL HQUAD (ID,CHOPT,MODE,SENSIT,SMOOTH,
NSIG*,CHISQ*,NDF*,FMIN*,FMAX*,IERR*)
\end{Bcommand}
\end{XMP}
\label{sec:HQUAD}
This routine fits multiquadric radial basis functions to the bin contents of a
histogram or the event density of an Ntuple.
(For Ntuples this is currently limited to ``simple'' ones, i.e., with 1, 2 or 3
variables; all events are used -- no selection mechanism is implemented. Thus
the recommended practice at the moment is to create a ``simple'' Ntuple and
fill it from your ``master'' Ntuple with the \Lit{NTUPLE/LOOP} command and an
appropriate \Lit{SELECT.FOR} function.)
Input parameters:
\begin{DLtt}{123456}
\item[ID] Histogram or Ntuple ID.
\item[CHOPT] Character variable containing option characters:
\begin{DLtt}{1}
\item[0] Replace the original histogram by the smoothed one.
\item[2] Do not replace the original histogram, but store the values of the
smoothed function and its parameters. (The fitted function is regenerated
from the values or the parameters with the \Lit{FUNC} option in
\Lit{HISTOGRAM/PLOT} for histograms or with \Lit{NTUPLE/DRAW} for Ntuples.)
\item[V] Verbose.
\end{DLtt}
\item[MODE] Mode of operation
\begin{DLtt}{1}
\item[0] Same as \Lit{MODE = 3} (see below).
\item[3] Find significant points and perform an unconstrained fit. If
the histogram or Ntuple is unweighted, perform a Poisson likelihood
fit, otherwise a least squares fit (see \Lit{MODE = 4}).
\item[4] Force an unconstrained least squares fit in all cases.
(This is a linear least squares problem and therefore the most
efficient possible since it allows a single step calculation of the
best fit and covariances. But note it assumes gaussian errors,
even for low statistics, including the error on zero being 1.)
\end{DLtt}
\item[SENSIT] Sensitivity parameter.
It controls the sensitivity to statistical fluctuations (see Remarks).
\Lit{SENSIT = 0.} is equivalent to \Lit{SENSIT = 1.}
\item[SMOOTH] Smoothness parameter.
It controls the (radius of) curvature of the multiquadric basis functions.
\Lit{SMOOTH = 0.} is equivalent to \Lit{SMOOTH = 1.}
\end{DLtt}
Output parameters:
\begin{DLtt}{123456}
\item[NSIG] no. of significant points or centres found, i.e., no. of basis
functions used.
\item[CHISQ] chi-squared (see Remarks).
\item[NDF] no. of degrees of freedom.
\item[FMIN] minimum function value.
\item[FMAX] maximum function value.
\item[IERR] error flag, 0 if all's OK. (Hopefully helpful error messages are
printed where possible.)
\end{DLtt}
Remarks:
\begin{Itemize}
\item Empty bins are taken into account. (Poisson statistics are used for the
unweighted case.)
\item The multiquadric basis functions are $\sqrt{r^2+\Delta^2}$, where $r$ is
the radial distance from its ``centre'', and $\Delta$ is a scale
parameter and also the curvature at the ``centre''. ``Centres'', also
referred to as ``significant points'', are located at points where the
2nd differential or Laplacian of event density is statistically
significant.
\item The data must be statistically independent, i.e., events (weighted or
unweighted) drawn randomly from a parent probability distribution or
differential cross-section, e.g., you cannot further smooth a previously
smoothed distribution.
\item For histograms, the chi-squared (\Lit{CHISQ}) is that of the fit to the
original histogram assuming gaussian errors on the original histogram
even for low statistics, including the error on zero being 1. It is
calculated like this even for a Poisson likelihood fit; in that case the
maximum likelihood may not correspond to the minimum chi-squared, but
\Lit{CHISQ} can still be used, with \Lit{NDF} (the no. of degrees of freedom), as a
goodness-of-fit estimator. For Ntuples, an internally generated and
temporary histogram is used to calculate \Lit{CHISQ} in the same way.
\end{Itemize}
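A typical call (a sketch; the histogram identifier is illustrative), storing
the smoothed function without replacing the original histogram (option
\Lit{2}) and using the default sensitivity and smoothness, might look like:
\begin{XMP}
      CALL HQUAD(110,'2',0,0.,0.,NSIG,CHISQ,NDF,FMIN,FMAX,IERR)
      IF (IERR.NE.0) PRINT *,'HQUAD failed, IERR =',IERR
\end{XMP}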
The routine \Rind{HDIFFB} (from the D0 group) has also been included in HBOOK.
It compares two histograms bin by bin.
The authors are
R. J. Genik II, D. Gilliland, J. Linnemann, J. McCampbell and J. McKinley.
\begin{XMP}
\begin{Bcommand}
CALL HDIFFB(ID1,ID2,TOL,NBINS,CHOPT,
NBAD*,DIFFS*)
\end{Bcommand}
\end{XMP}
Input parameters:
\begin{DLtt}{12345}
\item[ID1] The first histogram to be compared.
The ``reference'' histogram in options \Lit{A} and \Lit{C}.
\item[ID2] The second histogram to be compared.
The ``data'' histogram in options \Lit{A} and \Lit{C}.\\
\Lit{ID1}, \Lit{ID2} are a pair of 1-D, 2-D, or profile
histograms booked with the same number of bins.
\item[TOL] The tolerance for passing the test.
Under options \Lit{S} and \Lit{C}, \Lit{TOL} is a number between 0 and 1 which
represents the smallest probability considered as an acceptable
match. Thus \Lit{TOL} is the fraction of bins expected to fail by
chance if \Lit{ID1} and \Lit{ID2} are drawn from the same distribution.
Under option \Lit{A}, \Lit{TOL} is the degree of precision of match
required for the test to be considered as passed, e.g. \Lit{TOL=2.0}
would mean that the value from the data bin had to be within 2
times the reference error of the reference mean to be considered
as compatible.
\item[NBINS] The dimension of the user-supplied \Lit{DIFFS} array.
It is the number of bins in the comparison (the size of the 1-D
histogram, plus 0, 1 or 2, depending on whether the overflow and
underflow channels are included). For a 2-D histogram, this
should be the total number of bins (\Lit{X*Y}) plus room for overflow bins
along any of the axes.
\item[CHOPT] A string allowing specification of the following options:
The default (no options selected) is for option \Lit{S} (statistical
comparison), ignoring all underflow and overflow bins, and
automatically correcting for the difference in events between \Lit{ID1}
and \Lit{ID2}. No such correction is done for profile histograms.
\begin{DLtt}{1}
\item[N] Use the absolute contents of each histogram, thus including the
normalization of the histogram as well as its shape in the
comparison.
\item[D] Debug printout, dumps the critical variables in the comparisons,
along with indicators of its weight, etc.
\item[O] Overflow, requests that overflow bins be taken into account.
\item[U] Underflow, requests that underflow bins be taken into account.
\item[R] Right overflow bin. For a 2-D histogram, it includes the X-Axis
overflow bin in the comparisons. If the \Lit{O} option is used, this
is automatic.
\item[L] Left underflow bin. Same as above, but the X-Axis underflow is
used. The \Lit{U} option uses this automatically.
\item[T] Top overflow bin. Same as \Lit{R}, but for the Y-axis.
\item[B] Bottom underflow bin. Same as \Lit{L}, but for the Y-axis.
\item[S] Statistical comparison. For standard 1-D histograms, calculates
the probability that both bins were produced from
a Poisson distribution with the same mean. For large statistics,
R and D greater than 25, the mean for each bin is the average of
the bin contents of \Lit{ID1}, \Lit{ID2}, adjusted for scaling (not adjusted
for scaling if option \Lit{N} is selected). For small statistics, the
unbiased ultimately most powerful comparison is made. This
returns the confidence level that the two bins came from a
Poisson distribution with the same mean. For a profile
histogram, it calculates the t-test probability that both bin means
were produced from a population with the same mean.
This probability is referred to in \Lit{TOL} and \Lit{DIFFS}.
The \Lit{S} option should be used when comparing two sets of data.
Using the \Lit{S} option when comparing data to a function or known
reference yields poor results. In this case, the \Lit{C} option should
be selected.
\item[C] Compatibility test. Calculates the probability that the data
(from \Lit{ID2}) was produced from a distribution with the mean and
error in the bin of the reference histogram (\Lit{ID1}). The test
is for Poisson statistics for 1-D histogram, Gaussian
statistics for profile histograms. The \Lit{C} option should be used
when comparing data to either a function, or a known reference or
calibration distribution.
\item[A] Absolute test. Here the test is on the number of standard
deviations by which the data from \Lit{ID2} deviates from the
reference histogram (\Lit{ID1}) mean. The standard deviation is
taken from \Lit{ID1}. \Lit{TOL} is the number of standard deviations, rather
than a probability.
An arbitrary tolerance interval may be formed by using \Lit{HPAK}
and \Lit{HPAKE} to fill the reference histogram; asymmetric intervals
may be implemented by setting \Lit{TOL} to \Lit{1.0} and choosing the error
and ``mean'' so that the allowed interval corresponds to $\pm 1.0$
standard deviations.
\item[Z] Ignore in the comparison any bins with zero contents in \Lit{ID1}.
The default action is to consider all bins as significant.
\end{DLtt}
\end{DLtt}
Output parameters:
\begin{DLtt}{12345}
\item[NBAD*] The number of bins failing the compatibility test according
to the criteria defined by \Lit{TOL} and \Lit{CHOPT}.
\item[DIFFS*] An array whose length is the number of bins being compared,
giving the results of the test bin by bin (probabilities for
options \Lit{S} and \Lit{C}, deviations for option \Lit{A}).
Results are passed back in the form:
\begin{Description}
\item[1-D\ \ ] \Lit{DIFFS(NX)} with no overflow or underflow, or \Lit{DIFFS(0:NX+1)}
with overflow and/or underflow.
\item[2-D\ \ ] \Lit{DIFFS(NX,NY)} or \Lit{DIFFS(0:NX+1, 0:NY+1)}.
\end{Description}
\end{DLtt}
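As a sketch (the histogram identifiers and array size are illustrative), a
default statistical comparison of two 100-bin histograms with a 5 per cent
acceptance threshold might read:
\begin{XMP}
      REAL DIFFS(100)
      CALL HDIFFB(100,200,0.05,100,' ',NBAD,DIFFS)
      PRINT *,NBAD,' bins fail the compatibility test'
\end{XMP}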
\subsubsection*{Error messages of \Rind{HDIFFB}:}
\newcommand{\erritem}[1]{\item[\underline{\footnotesize\tt#1}]\mbox{}\\}
\begin{Description}
\erritem{Warning: Zero tolerance}
The passed value \Lit{TOL} is equal to 0.
\erritem{Warning: Only one comparison at a time, please}
More than one type of comparison was selected. Only one of options \Lit{S}, \Lit{C},
and \Lit{A} may be used at a time. This is only a warning and the test defaults
to the \Lit{S} mode.
\erritem{Warning: Different binning}
The \Lit{XMIN} values for a 1-D histogram, or the \Lit{XMIN} and/or \Lit{YMIN} values for a
2-D histogram, are different. This may give inaccurate results.
\erritem{Warning: Weighted or saturated events in 2 dimensions}
HBOOK does not compute error bars for two-dimensional histograms, thus
weighted events are not allowed, and \Rind{HDIFFB} cannot compute the correct
statistics. An answer is still given, but it is probably not right.
\erritem{Integral is zero!}
The sum of the bin contents is zero.
\erritem{Both histograms must be the same dimension}
A 1-D and a 2-D histogram have been specified. In order for the
routine to work, both must be the same dimension.
\erritem{Both histograms must be the same type}
Two different types of histograms have been specified. Both must be
profile or non-profile, you can not have a mix.
\erritem{Not enough bins DIFF to hold result}
The parameter \Lit{NBINS} is less than the number of bins in the histograms.
\erritem{Number of channels is different}
The numbers of channels in the two histograms to be compared are different. They
must be the same before the routine will process the data.
\erritem{U/O/L/R/T/B Option with weighted events}
HBOOK does not compute an error bar for overflow and underflow bins, thus these
options may not be used with weighted events.
\erritem{Weighted options and no \Rind{HBARX}}
The user has not told HBOOK to compute the error bars for the histograms.
Therefore, the operations will not be valid.
\erritem{Both histograms must be the same, weighted or unweighted}
As it states, the histograms must be of the same type.
\end{Description}
\subsubsection*{Statistical comments:}
The methods used for the \Lit{S} and \Lit{C} mode are correct for unweighted events and
Poisson statistics for one or two-dimensional histograms. For weighted events,
a Gaussian approximation is used, which results in \Lit{DIFFS} values which are
too low when there are fewer than 25 or so ``equivalent events'' (defined
under \Lit{HSTATI}) per bin.
This is caused by either few entries or by wide fluctuation in weights.
The result is that \Rind{HDIFFB} rejects too many bins in this case.
Comparisons for profile histograms assume Gaussian statistics
for the \Lit{S} and \Lit{C} mode comparisons of the channel mean.
Fewer than 25 or so events will result in \Lit{DIFFS} values which are too large.
The result is that \Rind{HDIFFB} rejects too many events in these low-statistics cases.
\subsection*{Axis labels and histograms}
A new set of routines has been added (Pierre Aubert/CN)
to associate labels with histogram channels.
This association can be made before or after a histogram is filled, and
has the advantage that the label information gets stored in the
histogram data structure, so that it is automatically available for all
future plots.
\begin{XMP}
\begin{Bcommand}
CALL HLABEL (ID,NLAB,*CLAB*,CHOPT)
\end{Bcommand}
\end{XMP}
Associates alphanumeric labels with a histogram.
This routine can be called for a histogram after it has been filled,
and then the labels specified will be shown on the respective axes.
The routine can also be called before a histogram is filled,
in which case a certain order can be imposed when filling.
By default the entries will be ordered automatically.
\begin{DLtt}{123456}
\item[ID] Histogram identifier.
\item[NLAB] Number of labels.
\item[*CLAB*] Character variable array with
\Lit{NLAB} elements (input and output).
\item[CHOPT] Character variable specifying the option desired.
\begin{DLtt}{123}
\item[' '] As \Lit{'N'} below.
\item['N'] Add \Lit{NLAB} new labels read from \Lit{CLAB}
           to histogram \Lit{ID}.
\item['R'] Read \Lit{NLAB} labels into \Lit{CLAB}
           from histogram \Lit{ID}.
\item['X'] X-axis is being treated (default).
\item['Y'] Y-axis is being treated.
\item['Z'] Z-axis is being treated.
\item['S'] If labels exist then they should be sorted
according to:
\begin{DLtt}{123}
\item['A'] Alphabetically (default);
\item['E'] Reverse alphabetical order;
\item['D'] by increasing channel contents
(after filling);
\item['V'] by decreasing channel contents
(after filling).
\end{DLtt}
\item['T'] Modify (replace) \Lit{NLAB} existing labels read from
\Lit{CLAB} in histogram \Lit{ID}.
\end{DLtt}
\end{DLtt}
Notes:
\begin{Itemize}
\item For one-dimensional histograms \Rind{HLABEL} can be called at any time.
\item For two-dimensional histograms one {\bf must} call \Rind{HLABEL}
with option \Lit{'N'} for each axis between the call to \Rind{HBOOK2}
and the first call to \Rind{HFC2}.
\end{Itemize}
\begin{XMP}
\begin{Bcommand}
CALL HFC1 (ID,IBIN,CLAB,W,CHOPT)
\end{Bcommand}
\end{XMP}
Fills a channel in a one-dimensional histogram.
\begin{DLtt}{12345}
\item[ID] One-dimensional histogram identifier.
\item[IBIN] Number of the bin to be filled (if $\ne0$).
\item[CLAB] Character variable containing the label
describing the bin (if \Lit{IBIN=0}).
\item[W] Weight of the event to be entered into the histogram.
\item[CHOPT] Character variable specifying the option desired.
\begin{DLtt}{1}
\item[N] Normal filling.
\item[S] Automatically sort (default).
\item[U] If this option is set and the channel does not exist,
         the underflow channel is incremented;
         otherwise a new channel is created.
\end{DLtt}
\end{DLtt}
Notes:
\begin{Itemize}
\item If \Lit{IBIN\(\ne\)0}, then the channel \Lit{IBIN}
is filled; \Lit{CLAB} may then be undefined.
%\item \Lit{'N'} or \Lit{'S'} add a new channel dynamically.
\item Routine \Rind{HLABEL} can be called before or after \Rind{HFC1}.
\end{Itemize}
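For example (a sketch; the histogram identifier and label are illustrative),
filling a channel by label with weight 1, creating and sorting the channel
as needed:
\begin{XMP}
      CALL HFC1(10,0,'Electron',1.,'S')
\end{XMP}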
\begin{XMP}
\begin{Bcommand}
CALL HFC2 (ID,IBINX,CLABX,IBINY,CLABY,W)
\end{Bcommand}
\end{XMP}
Fills, for a two-dimensional histogram, the channel identified
by position \Lit{IBINX} or label \Lit{CLABX} and by
position \Lit{IBINY} or label \Lit{CLABY}, with weight \Lit{W}.
\begin{DLtt}{12345}
\item[ID] Two-dimensional histogram identifier.
\item[IBINX] Number of the X-bin to be filled (if $\ne0$).
\item[CLABX] Character variable containing the label
describing the X-bin (if \Lit{IBINX=0}).
\item[IBINY] Number of the Y-bin to be filled (if $\ne0$).
\item[CLABY] Character variable containing the label
describing the Y-bin (if \Lit{IBINY=0}).
\item[W] Weight of the event to be entered into the histogram.
\end{DLtt}
Notes:
\begin{Itemize}
\item Routine \Rind{HLABEL} must be called {\bf before} \Rind{HFC2}.
\item If \Lit{IBINX\(\ne\)0}, then the channel described by \Lit{IBINX}
is filled; \Lit{CLABX} may then be undefined.
\item If \Lit{IBINY\(\ne\)0}, then the channel described by \Lit{IBINY}
is filled; \Lit{CLABY} may then be undefined.
\item If the channel described by \Lit{IBINX} or \Lit{CLABX} does not exist,
the underflow channel is incremented. Idem for \Lit{IBINY} and \Lit{CLABY}.
\end{Itemize}
An example of how to sort a 2-D histogram:
\begin{XMP}
call HBOOK2
call HLABEL option N for X-axis
call HLABEL option N for Y-axis
(many) calls to HFC2
....
call HLABEL option S to sort
\end{XMP}
Another possibility is:
\begin{XMP}
call HBOOK2
(many) calls to HF2 or HFILL
....
call HLABEL option N for X-axis
call HLABEL option N for Y-axis
call HLABEL option S to sort
\end{XMP}
See Figure~\ref{fig:HLABEL} on page~\pageref{fig:HLABEL}
for an example.
%\subsubsection*{Operations on histogram labels}
%
%\begin{XMP}
%\begin{Bcommand}
%LOGICAL FUNCTION HLABEQ (ID,CHOPT)
%\end{Bcommand}
%\end{XMP}
%
%Varifies whether a histogram has labels.
%
%\begin{DLtt}{12345}
%\item[ID] Histogram identifier.
%\item[CHOPT] Character variable specifying the option desired.
% \begin{DLtt}{123}
% \item[' '] return \Lit{.true.} if histogram \Lit{ID}
% has labels, else return \Lit{.false.};
% \item['X'] return \Lit{.true.} if X axis
% has labels, else return \Lit{.false.};
% \item['Y'] return \Lit{.true.} if Y axis
% has labels, else return \Lit{.false.};
% \end{DLtt}
%\end{DLtt}
%
%\begin{XMP}
%\begin{Bcommand}
%INTEGER FUNCTION HLABNB (ID,CHOPT)
%\end{Bcommand}
%\end{XMP}
%
%Returns the number of labels for the axes of
%a histogram.
%
%\begin{DLtt}{12345}
%\item[ID] Histogram identifier.
%\item[CHOPT] Character variable specifying the option desired.
% \begin{DLtt}{123}
% \item[' '] like \Lit{'X'} below;
% \item['X'] return the number of labels for the X axis;
% \item['Y'] return the number of labels for the Y axis;
% \end{DLtt}
%\end{DLtt}
%
%\begin{XMP}
%\begin{Bcommand}
%CALL HLGNXT (ID,IBIN,CLAB*,CHOPT)
%\end{Bcommand}
%\end{XMP}
%
%Get channel label from channel number.
%
%\begin{DLtt}{12345}
%\item[ID] Histogram identifier.
%\item[IBIN] Number of the channel.
%\item[CLAB*] Character variable (CHARACTER*16) containing the label
% corresponding to channel \Lit{IBIN} (output variable).
%\item[CHOPT] Character variable specifying the axis desired.
% \begin{DLtt}{123}
% \item[' '] As \Lit{'X'} below;
% \item['X'] Get label of X axis;
% \item['Y'] Get label of Y axis;
% \item['Z'] Get label of Z axis.
% \end{DLtt}
%\end{DLtt}
%
%\begin{XMP}
%\begin{Bcommand}
%CALL HLPOS (ID,CHLAB,IBIN*,CLAB,CHOPT)
%\end{Bcommand}
%\end{XMP}
%
%Get channel number from channel label.
%
%\begin{DLtt}{12345}
%\item[ID] Histogram identifier.
%\item[CLAB] Character variable (CHARACTER*16) specifying the label
% one is looking for.
%\item[IBIN*] Number of the channel (output variable).
% If no channel has label \Lit{CLAB} a value \Lit{IBIN=-1}
% is returned.
%\item[CHOPT] Character variable specifying the axis desired.
% \begin{DLtt}{123}
% \item[' '] As \Lit{'X'} below;
% \item['X'] Get label of X axis;
% \item['Y'] Get label of Y axis;
% \item['Z'] Get label of Z axis.
% \end{DLtt}
%\end{DLtt}
%
%* Changes in HSCR to delete ntuples
% SUBROUTINE HSCR(IDD,ICYCLE,CHOPT)
%*.==========>
%*. To scratch histogram ID from current directory
%*. /PAWC/ or RZ file. IDD =0 means all histograms.
%*. ICYCLE is the cycle number in case of a RZ file.
%*..=========>
%*
%* New routine HDDIR to delete directories (memory or RZ)
% SUBROUTINE HDDIR(CHDIR)
%*.==========>
%*. Delete sub-directory CHDIR from /PAWC/ or RZ file.
%*. The current directory must be the mother directory of CHDIR
%*..=========>
%*
%
%
\subsection*{Modified routine}
\begin{XMP}
\begin{Bcommand}
CALL HOPERA (ID1,CHOPER,ID2,ID3,C1,C2)
\end{Bcommand}
\end{XMP}
\begin{DLtt}{123456}
\item[{\rm\bf Input parameters:}]
\item[ID1,ID2] Operand histogram identifiers.
\item[CHOPER] Character variable specifying the
kind of operation to be performed
(\Lit{+,-,*,/}) and \Lit{'E'}, whether errors
have to be calculated (see below).
\item[ID3] Identifier of the histogram containing
the result after the operation.
\item[C1,C2] Multiplicative constants.
\end{DLtt}
The option parameter \Lit{CHOPER} has a new option \Lit{'E'}
which, when selected, instructs \Rind{HOPERA} to compute
the error bars on the resulting histogram correctly, assuming that the
input histograms \Lit{ID1} and \Lit{ID2} are independent.
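For example (a sketch; the histogram identifiers are illustrative), dividing
histogram 10 by histogram 20, storing the result in histogram 30 with the
errors propagated:
\begin{XMP}
      CALL HOPERA(10,'/E',20,30,1.,1.)
\end{XMP}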
\subsection*{Extended precision in fitting routines}
In the user-defined function called by the HBOOK fitting routines,
the common block \Lit{/HCFITD/} may be used instead of a user-defined common block
containing the current values of the parameters during the minimization procedure.
\Lit{/HCFITD/} contains the current parameters in the array \Lit{FITPAR(25)}
as follows:
\label{sec:HCFITD}
\begin{XMP}
DOUBLE PRECISION FITPAR
COMMON/HCFITD/FITPAR(25)
\end{XMP}
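A user fit function using this common block might look as follows (a sketch
of a Gaussian with three parameters; the function name \Lit{UFIT} is
illustrative only):
\begin{XMP}
      FUNCTION UFIT(X)
      DOUBLE PRECISION FITPAR
      COMMON/HCFITD/FITPAR(25)
* FITPAR(1) = normalization, FITPAR(2) = mean, FITPAR(3) = width
      UFIT = FITPAR(1)*EXP(-0.5*((X-FITPAR(2))/FITPAR(3))**2)
      END
\end{XMP}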
\begin{Fighere}
\epsfig{file=cnlhlabel.eps,width=17cm}
\caption[]{Example of the use of \Rind{HLABEL}}
\label{fig:HLABEL}
\begin{Itemize}
\item The top left picture shows the contents of the histogram ordered
alphabetically by their label.
\item The top right picture shows the same histogram, but with the
bins ordered by contents, irrespective of their label.
\item The middle picture shows a two-dimensional histogram,
where the ordinate (Y-axis) is ordered alphabetically.
\end{Itemize}
\end{Fighere}