z/VM CMS Pipeline

CMS Pipelines lets you solve big problems by combining small programs. It lets you do work that would otherwise require someone to write a new program. Often you get the result you need with a single CMS command. That command is PIPE. Like any other CMS command, it can be issued from the CMS command line (Ready;) or from any environment that provides a way to issue CMS commands (such as the XEDIT command line or a REXX procedure). 

The PIPE command accepts stage commands as operands. Many stage commands are included with CMS Pipelines. Some stage commands read data from system sources, such as disk files, tape files, and the outputs of VM/ESA commands. Other stage commands filter and refine that data in some way. You can combine many stage commands within a single PIPE command to create the results you need. 

Here is an example of a simple PIPE command. It counts the number of words in your ALL NOTEBOOK file and writes the result to your terminal. A word is anything surrounded by blanks:

      pipe < all notebook | count words | console        

Three stage commands are combined to do the work. The < (read file) stage command reads the file, the COUNT stage command counts the words, and the CONSOLE stage command displays the count. 

Anyone familiar with CMS commands can use CMS Pipelines. You don't need to be a programmer. If you can describe what you want to do, chances are that you can use CMS Pipelines to do it. 

If you are a programmer, CMS Pipelines can save you some coding. 

Pipelines can be used in execs to replace device-dependent code (such as EXECIO). Often, several lines of REXX code can be replaced with a single PIPE command. You might also consider writing your own stage commands. These user-written stage commands are programs that read from and write to a pipeline. Besides reading and writing to the pipeline, these user-written stages can of course do anything that can be done from REXX, such as arithmetic. 
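As a quick illustration, here is a minimal sketch of such an exec. It reads a file into a REXX stem variable with the STEM stage, replacing a typical EXECIO read loop; the file name PROFILE EXEC A is only an example:

   /* READIT EXEC - a sketch: read a file into a REXX stem */
   address command
   'PIPE < PROFILE EXEC A | stem line.'  /* sets line.1 ... line.n */
   say 'The file has' line.0 'lines'     /* line.0 holds the count */
   exit rc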

It is a fact that most commercial programs have the same purpose: read data, reformat it, filter out unwanted data, do some calculations, and finally produce output. A CMS pipeline does nothing else.

Because CMS Pipelines contains a wealth of built-in stages (specialized little programs for working with data), it can save you a lot of coding. We are still regularly astonished by the ease and speed with which it solves day-to-day problems. 

Furthermore, it is possible to process huge files, because the storage needed is limited to the records traveling through the pipeline at any moment. Only when a stage has to collect all the records into a buffer before it can process them do you need enough computer storage to hold the whole file.  SORT is an example of a stage command that needs all records before it can do its job.

Another characteristic of CMS Pipelines is its sometimes tremendous performance. Classical programs often have several subroutines, each performing a specific function on all the records of the data; the records have to be read at the beginning of each routine and stored back at the end so that the next routine can process them. The worst programming techniques (we still encounter such programs every day) read and save the records to disk, resulting in a huge I/O count. Smarter programs keep the intermediate results in storage, but as suggested above, this requires that enough computer storage is available. The virtual storage technique allows large files to be kept in storage, but the I/O of the bad programs is then frequently replaced by paging I/O.

By combining stages judiciously, records are read, processed and written only once, resulting in great performance gains.

The learning curve for CMS Pipelines may be steep, but it is well worth the investment.

Stages

In a pipeline, the output of one stage is the input to the next. The data itself is in the form of discrete records, so the data in the pipeline is not a continuous stream of bytes. A record is simply a string of characters--perhaps a line of a CMS file or a line entered at the terminal. Imagine a stage as a small factory through which a conveyor moves records. Records enter the stage on the left, and leave on the right. The figure below shows input records before processing on the left. The stage command reads the records and processes them. The resultant output records, written by the stage command, are shown on the right.


    +----------------+      +---------------+     +-----------------+
    ! Input Record 1 !      !               !     ! Output Record 1 !
    ! Input Record 2 +----->! Stage Command +---> ! Output Record 2 !
    ! Input Record 3 !      !               !     ! Output Record 3 !
    +----------------+      +---------------+     +-----------------+        


Exactly as with water or oil in real plumbing, the records can be modified, discarded, or split apart while they are within the stage. Practically anything can happen to them. Precisely what happens depends on the stage command that is being used. Many stage commands write one output record for each input record. Some, however, do not. 

The next figure shows a stage consisting of a CHOP stage command.  CHOP truncates records at a specified length. In the example, each record is truncated to a length of 5 characters; CHOP writes one output record for every input record.


    Input records      +---------------+     Output record
    +-----------+      ! CHOP 5        !     +-----------+
    ! BOB SMITH !      !               !     ! BOB S     !
    ! SUE JONES +----->!               +---> ! SUE J     !
    +-----------+      ! Stage Command !     +-----------+
                       +---------------+        
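You can try this yourself with a one-record pipeline (the LITERAL stage command, described later in this lesson, simply writes its argument string into the pipeline):

      pipe literal BOB SMITH | chop 5 | console

CONSOLE then displays BOB S.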

The next figure shows a stage consisting of a COUNT WORDS stage command. Two records flow into the stage, but only one flows out. That single record contains the count of the number of words on all the records flowing into the COUNT stage.


    Input records      +---------------+     Output record
    +-----------+      ! COUNT WORDS   !
    ! BOB SMITH !      !               !     +-----------+
    ! SUE JONES +----->!               +---> ! 4         !
    +-----------+      ! Stage Command !     +-----------+
                       +---------------+        

Below is yet another example. In this case the stage command is LOCATE with the parameter /BOB/. Again, two records flow into the stage.  LOCATE looks at the content of each incoming record. If a record contains the string BOB, LOCATE lets it through; the others are discarded.


    Input records      +---------------+     Output record
    +-----------+      ! LOCATE /BOB/  !
    ! BOB SMITH !      !               !     +-----------+
    ! SUE JONES +----->!               +---> ! BOB SMITH !
    +-----------+      ! Stage Command !     +-----------+
                       +---------------+        

The records entering a stage are called its input stream. The records leaving a stage are called its output stream. In the example, LOCATE reads all records from its input stream, but writes only the records containing BOB to its output stream and discards the others. 

Well, to be correct, LOCATE does not always discard the records that do not match. Those records can be put on a so-called secondary output stream, from where they can be further manipulated in a secondary pipe. This is, however, an advanced topic we'll discuss in a later lesson, as it requires so-called multistream pipelines.

The next figure shows how records flow through several stages. The output of the LOCATE stage becomes the input to the COUNT stage.


                +-------------+                 +-------------+
                !LOCATE /BOB/ !                 !COUNT WORDS  !
 +-----------+  !             !  +-----------+  !             !  +------+
 ! BOB SMITH +->!             +->! BOB SMITH +->!             +->! 2    !
 ! SUE JONES !  !             !  +-----------+  !             !  +------+
 +-----------+  !Stage Command!                 !Stage Command!
                +-------------+                 +-------------+        

The LOCATE stage reads both records from its input stream (SUE JONES and BOB SMITH). It writes only the record containing BOB SMITH to its output stream. The COUNT stage reads records from its input stream. There is only one record: BOB SMITH.  COUNT tallies the number of words in that record and writes a single record to its output stream. That record contains the number 2, which is the number of words on the record COUNT read. 

The PIPE command

To run a pipeline, use the CMS PIPE command. Like other CMS commands, PIPE can be issued from the CMS command line or from an exec.  PIPE accepts one or more pipelines as operands. In this lesson, the PIPE command operands consist of only a single pipeline. Multistream pipelines are described in a later lesson. 

In a pipeline, stages are separated by a character called a stage separator:


      pipe stage_1 | stage_2 | ... | stage_n        

Do not place a stage separator after the last stage. Since VM/ESA Version 1, Release 2.0, a stage separator is allowed before the first stage, that is, between the PIPE command and stage_1. Prior to that release, it was not allowed. 

For the stage separator, the PIPE command expects the character X'4F'. 

You must determine which key on your terminal generates the character X'4F'. It is a solid vertical bar (|) on American and English 3270 terminals. In some countries (as is the case in Belgium), this character is displayed as an exclamation mark (!). Some workstation terminal emulator programs map the solid vertical bar to the split vertical bar (¦). The solid vertical bar is the logical-or operator in PL/I and REXX programs. In a pipeline, it indicates where one stage ends and the next begins. If you aren't sure what character to use, create and run the exec listed below:

      /* STAGESEP EXEC */
      say 'The stage separator is:' '4f'x'.'
      exit        

There is no limit (other than available space on the command line) to the number of stages you can specify. For your first pipeline, try this PIPE command:


      pipe < profile exec | count lines | console        

If you do not have a PROFILE EXEC, substitute the name of any existing file. Be careful to leave a space after <. The number of lines in your PROFILE EXEC is displayed. If you make a mistake typing the command, an error message is displayed. Just type the command again, correcting the mistake. The figure shows a map of the above pipeline. 


    Stage 1               Stage 2               Stage 3
    +----------------+    +----------------+    +----------------+
    ! < profile exec +--->! count lines    +--->! console        !
    +----------------+    +----------------+    +----------+-----+
         ^                                                 !
         !                                                 v
    +----+----+                                       +----------+
    ! PROFILE !                                       ! Terminal !
    ! EXEC    !                                       !          !
    ! file    !                                       !          !
    +---------+                                       +----------+        

The example contains three stages. Each stage consists of a stage command plus operands:

< profile exec

The < (read file) stage command reads a file and writes all of the file records to its output stream. As used here, the < stage command reads your PROFILE EXEC and writes each record to its output stream. 

count lines

The COUNT stage command counts items on the records it reads from its input stream.  COUNT has operands that let you tell it what to count (such as bytes, words, or the records themselves). 

As used here, the LINES operand causes COUNT to tally the count of the input records. The records that COUNT reads are the records written by the < stage command. (That is, the output of < becomes the input to COUNT). So, COUNT tallies the count of records in your PROFILE EXEC. 

Then COUNT writes a single record containing the tally to its output stream.  COUNT writes only one record - it does not write the records from the PROFILE EXEC to its output stream. 

console

The CONSOLE stage command either reads from your console or writes to it, depending on its position in the pipeline. As used here, the CONSOLE stage command reads records from its input stream and displays them on the console. Only one record is in its input stream. That record, written by COUNT, contains the count of the lines in your PROFILE EXEC. 
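For example, placed at the start of a pipeline, CONSOLE reads the lines you type at the terminal until you enter a null line (that is, you press ENTER on an empty line to signal end-of-file). Combined with the > stage command described just below, those lines can be captured in a file; the file name here is only an example:

      pipe console | > notes data a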

Notice that both < and CONSOLE work with devices. In the example, < reads a disk file and CONSOLE writes to the console. Stage commands that convey data between the pipeline and the outside world are called device drivers; the other stages are called filters.

One advantage of CMS Pipelines is that you can use a different device by changing only one stage command. To change the above example to write the count to a file instead of the console, just substitute a > stage command for CONSOLE. Here's how:


      pipe < profile exec | count lines | > yourfile data a        

The > (write-file) stage command writes all records in its input stream to a file. Specify the file name as an operand. Remember to leave a space after the > symbol. The > stage command creates a new file or replaces an existing file named YOURFILE DATA A. A file mode must be specified, as shown. 

And while we are on the subject, let's extend this example still further:

      pipe < profile exec ! console ! punch ! > profile save a ! tape        

Now, with one PIPE command, we read our PROFILE EXEC and send the output to the screen (console), to the virtual punch (punch), to an output file (profile save a), and to a tape (tape). This again shows the enormous power that can be achieved with CMS Pipelines. The pipeline is almost equivalent to

     TYPE PROFILE EXEC
     PUNCH PROFILE EXEC (NOH
     COPY PROFILE EXEC A PROFILE SAVE A
     TAPE DUMP PROFILE EXEC        

Apart from the shorter coding achieved with the PIPE command, the biggest advantage is that the file is read from disk only once instead of four times with the separate CMS commands! Think of the I/O time saved. 

Let's now discuss the two main types of stages in a little more detail. 


Device Drivers

Device drivers are stage commands that interact with devices or other system resources. The simplest pipelines consist of two device drivers. 

Data read from one device moves through the pipeline to the other device. 

For example, to copy data from a file to your terminal, enter the following command (change TEST DATA to the name of an existing file):


      pipe < test data | console        

The results are like those of the CMS TYPE command. The map of this pipeline is as follows:




    Stage 1            Stage 2
    +-------------+    +-------------+
    ! < test data +--->! console     !
    +-------------+    +-------+-----+
         ^                     !
         !                     v
    +----+------+         +----------+
    ! TEST DATA !         ! Terminal !
    !   file    !         !          !
    +-----------+         +----------+        

The < stage command reads the file TEST DATA and writes each record to its output stream. The output of the < stage command is connected to the input of CONSOLE.  CONSOLE reads the records from its input stream and displays them on the screen. 

CMS Pipelines includes many device-driver stage commands. They work with tapes, printers, disk files, XEDIT data, the console, the reader, the program stack, REXX variables, and the system environments. Although not all of these are true devices, we still refer to stage commands that work with them as device drivers. 

The device drivers we will use most often in this lesson are <, >, >>, CONSOLE, CP, COMMAND, and LITERAL. We've already seen some of these. Let's look at some examples of the others. The following pipeline uses the COMMAND device driver:


    pipe command LISTFILE * * A | > yourlist data a
    Ready;        

The COMMAND device driver passes the LISTFILE command to CMS for execution and writes the results of LISTFILE to its output stream. (The results are not displayed on your terminal). The second stage is a > device driver. The > device driver reads records from its input stream and writes them to the file YOURLIST DATA A. 

Have you any idea what you would have to add to the above pipeline to also see the result of LISTFILE at your terminal?

For reasons that will be explained in detail in the next lesson, you should use the COMMAND device driver and not the CMS device driver. Note also that the parameters for this driver must be coded in uppercase.

The CP device driver works in a similar fashion. Use it to capture the responses of CP commands. In this example, the CP stage command passes the string QUERY USERS to CP. The results are passed to the next stage, which writes them to a file named USERS DATA A. 


    pipe cp query users | > users data a
    Ready;        

People coding REXX procedures probably know that it is possible to get the result of a CP command via the REXX DIAG(8) function. 

Remember that CP returns the answer in a storage buffer, where each record is separated from the next by a '15'x (line-end) character.  DIAG(8) doesn't split the records for you, so you have to parse the output yourself.  PIPE, however, does split the records at the line ends. 

Whether you use CMS Pipelines or REXX's DIAG() function to issue CP commands just depends on what you plan to do with it. 

For example,

    pipe cp spool pun rscs        

is overkill and needless overhead, as you have to wake up PIPE, which then has to call CP. 
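On the other hand, when you want the response available in REXX variables, already split into records, a PIPE with the STEM stage is hard to beat. Here is a minimal sketch (QUERY TIME is just an example command):

   /* A sketch: capture a CP response in a REXX stem */
   address command
   'PIPE cp QUERY TIME | stem reply.'    /* one record per response line */
   do i = 1 to reply.0                   /* reply.0 holds the count      */
      say reply.i
   end
   exit rc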

LITERAL is a very useful device driver. It writes a string to its output stream. In this pipeline, LITERAL writes the string "Testing 1, 2, 3" to its output stream. The CONSOLE stage reads that record and displays it:


    pipe literal Testing 1, 2, 3 | console
    Testing 1, 2, 3
    Ready;        

How would you write the same string to a file? Substitute a > stage command for CONSOLE. 
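For instance (the file name is only an example):

      pipe literal Testing 1, 2, 3 | > test data a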

Use the LITERAL stage command if you want to do quick tests and need specific data. It's sometimes faster than looking things up in HELP or the manuals. For example:

   pipe literal A | locate /a/ | console        

will quickly reveal that LOCATE is case sensitive. 

The >> (append file) device driver adds records to the end of an existing file, or creates a file if it does not exist. The following pipeline adds the record "The End" to the file USERS DATA A:


    pipe literal The End | >> users data a
    Ready;        

From the few examples above, one of the advantages of CMS Pipelines should be clear by now. The functions performed by the pipeline work on records. These records are pumped into or out of the pipeline via device drivers. Using the same plumbing for other input or output devices is just a matter of changing the device drivers. If today your data comes from a CMS disk file, and tomorrow the same file is on tape, you just have to change the < stage into a TAPE stage. 
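As a sketch, if a report that used to live on disk is now delivered on tape, only the first stage changes and the rest of the plumbing stays identical (the file name and search string are just examples):

      pipe < report data a | locate /ERROR/ | console
      pipe tape            | locate /ERROR/ | console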


Filters

Device drivers let you get data in and out of a pipeline.  Filters, on the other hand, are stage commands that work on data already in the pipeline. The COUNT stage command used in the first example pipeline is a filter. It counts every record that flows into it from its input stream, then writes one record containing that count to its output stream. The LOCATE stage command is also a filter. It examines the records from its input stream, looking for those that contain a specified string. If a record matches, LOCATE writes it to its output stream.  LOCATE discards records that do not match (or puts them in the secondary output stream). 

Filters can do any function imaginable. Many filters are built into CMS Pipelines, but you can also code your own using the REXX language. 
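To give you a feel for what a user-written filter looks like, here is a minimal sketch of a REXX filter that uppercases every record. The name MYUPPER is hypothetical; assuming it is stored as MYUPPER REXX on an accessed disk, it can be used as a stage, for example pipe < test data | myupper | console:

   /* MYUPPER REXX - a sketch of a user-written filter stage */
   signal on error                 /* leave the loop at end-of-file    */
   do forever
      'PEEKTO record'              /* look at the next input record    */
      'OUTPUT' translate(record)   /* write the uppercased record      */
      'READTO'                     /* consume the input record         */
   end
   error: exit rc*(rc<>12)         /* return code 12 means end-of-file */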

The filters supplied with CMS Pipelines do many functions of general use. For example, they select records based on the content of the record or on the position of the record in the stream flowing through the pipeline. They change or rearrange records as they pass through. They can even sort records. 

Note: Stage commands are grouped into the categories filter, device driver, or other for convenience. To the PIPE dispatcher, they are all just stage commands that happen to do different kinds of functions. 

Also, CMS Pipelines stages are normally designed to do very simple things (mainly to make them reusable). Pipelines can become very complex, not because of the complexity of the separate stages, but because of the complex plumbing that combines all the stages (compare it to the complexity of an oil refinery). 

This simplicity of the filters was a general design principle of CMS Pipelines. For example, CHOP 5 truncates records that are longer than 5 characters, but it does not extend records shorter than 5 characters. That must be done with yet another filter, namely PAD.
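So, to obtain fixed-length records of exactly 5 characters, you combine the two filters, as in this sketch (the file names are just examples):

      pipe < test data | chop 5 | pad 5 | > fixed data a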


Specifying PIPE options

In addition to an operand (consisting of one or more pipelines), the PIPE command accepts one or more options.  PIPE options reassign the stage separator, trace execution of the pipeline, control the level of messages displayed, and so on. A complete list of the PIPE options is in the VM/ESA: CMS Pipelines Reference (SC24-5592) or via HELP PIPE (OPTIONS. 

We'll describe several PIPE options as the need arises. To show you how to specify a PIPE option, we'll use the STAGESEP option.  STAGESEP assigns a different character as the stage separator. By default, the stage separator is a vertical bar (|). To use a question mark as the stage separator, for example, specify STAGESEP ? as shown in this figure:


  pipe (stagesep ? trace) literal one two three ? count words ? console
  3        

As shown, PIPE options should be enclosed within parentheses after the PIPE keyword. When specifying more than one option, separate them with blanks. The first stage of the pipeline begins immediately after the options. 

Understanding Pipelines

New pipeline users often think that each stage of a pipeline processes all the records before writing results to the next stage. This is not true, and it is one of the big differences from the pipes of PC DOS or UNIX. Most stages process only one record at a time.  CMS Pipelines controls when the stages run. It knows which stages have a record to process and which do not. The order in which stages run is unpredictable. Because CMS Pipelines usually lets each stage process only one record at a time, only a few records are in the pipeline at any moment, so minimal virtual storage is used.  This is another important characteristic of pipelines. It means that if you process a file containing one trillion records, you do not need to worry about having enough virtual storage to hold all those records. 

There are times, however, when all the records need to be collected in storage before further processing. 


Some stage commands need to read all the records before they can do their work. For example, the SORT stage command cannot sort the records in the pipeline until it has read all of them. After it has sorted the records, it writes them, one at a time, to its output stream. But these filters are exceptions. They are said to buffer the records. We'll indicate when a stage command buffers records. 

In the manuals and the HELP, stages that buffer records are described as delaying the records.
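Sorting a file is therefore as simple as this sketch (the file names are examples; without operands, SORT sorts the complete records in ascending order), but keep in mind that SORT holds all the records in storage while it does its work:

      pipe < users data a | sort | > users sorted a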

And, to let you better understand the power of CMS Pipelines, let's stress that when you handle big files, the file will not always be read completely. Look at this example:

      pipe < big file a | take 5 | console        

We'll come back to the details later, but the TAKE stage does indeed what you expect it to do: it takes the first 5 records passing through the pipe (in this case coming from the disk-read stage). 

After having seen 5 records go by, the TAKE stage tells the PIPE dispatcher "I've done my job, I don't need anything more...", and the dispatcher in turn informs the previous stage that there isn't anybody listening to its output anymore, so it can stop too. (This would happen even if there were plenty of other stages between the '<' and the TAKE). So, a pipeline can suddenly collapse when one stage decides to stop. And, of course, as there is no need to read the file from disk any further, no useless I/O takes place. 


Pipelines in REXX procedures.

PIPE commands can be issued from the CMS command line when they are short (limit is the length of the command line) and not too complex. 

In many cases, the PIPE command is too complex, or must be saved for later reuse. In that case, you should code your PIPE command in a REXX procedure. The procedure may then also do some extra parameter checking and handling, to make your pipeline more general. 

Preserving a pipeline.

Suppose we want to code this PIPE command in a procedure:


  PIPE < USER DIRECT A | SPECS W2-4 1 | NFIND MAINT | > OUTPUT FILE A        

The procedure then looks like this:


   /* Use of a PIPE command */
   address command
   'PIPE < USER DIRECT A | SPECS W2-4 1 | NFIND MAINT | > OUTPUT FILE A'
   exit        

Often you'll start to design or test (part of) a PIPE at the CMS command line. Once you have something usable, you are confronted with the problem of transferring it into a REXX procedure with XEDIT. One solution is of course to memorize or note down the command that is in the terminal retrieve buffer, then switch to XEDIT and re-enter it there. 

But CMS Pipelines can help you here too! This is the scenario to follow:

  1. You design your PIPE. It is thus in the terminal retrieve buffer. 
  2. You issue the command PIPE CONSOLE | > MY EXEC A from the CMS command line. 
  3. You hit the retrieve PF key twice to bring back your original PIPE command.
  4. You hit ENTER. The string is put in the file MY EXEC. 
  5. You hit ENTER an extra time to signal an end-of-file to the CONSOLE stage. 

Even if you don't need the PIPE in a procedure, it may still be a good and simple technique to write it in an EXEC anyway, as it is then easier to edit your pipeline. If you then use the EXECCALL goodie, execution is just one PF key away. See the EXECCALL goodie for more information. 

Coding style for more complex pipelines.

With experience, however, you will soon start to write longer pipelines. Rather than stringing out the PIPE command on a single line, we'll use REXX continuation characters to split it across several lines, as in the following example:

Pipeline coding style, type 1.

   /* Example of coding style for PIPE commands */
   address command
   parse upper arg fileid
   'PIPE',
      '|<' fileid,                           /* read the file */
      '|SPECS W2-4 1',                   /* take words 2 to 4 */
      '|NFIND MAINT',               /* eliminate MAINT record */
      '|> OUTPUT FILE A'              /* write to output file */
   exit rc        

This coding scheme is called portrait format as opposed to the landscape format used earlier. Its main advantages are:

  1. One stage per line. This makes it easy to insert extra stages,
  2. You are usually not hindered by the screen width,
  3. Stages can be commented,
  4. By placing the stage separator at the left, a nice alignment is obtained, and errors due to missing stage separators are avoided.

Please review the syntax carefully. Each stage is enclosed in quotes (except where it contains REXX variables) to conform to our rule: code everything that is not a variable as a literal string surrounded by quotes. Each stage is then followed by the REXX continuation character (a comma), except of course the last stage, so that the whole command becomes one long string that CMS Pipelines can understand. 

Note that we prefixed the first stage with a stage separator to obtain a nice alignment.

On the S-disk you can find the XEDIT macro FMTP XEDIT that reformats a landscape pipeline to portrait format. 

You should know that REXX replaces the continuation character with a blank when it interprets the lines. If you do not want REXX to put a blank between the lines, use the REXX concatenation symbol (||):

   /* Continuation without an intervening blank */
   address command
   'PIPE',
       '|LITERAL Hello'||,
       '|CONSOLE'        

When REXX interprets the lines, there is no blank between the word "Hello" and the following stage separator; the executed statement becomes:

   'PIPE LITERAL Hello |CONSOLE'        

Because trailing blanks are significant to some stage commands, it is important to remember how to use the concatenation symbol. We'll have to repeat this a few times until we're sure you understand the implications. 

In a later lesson, we will learn about multistream pipelines, which complicate the coding in procedures. We'll see that the above style is well suited for these more complex multistream pipelines. 

One of the most frequent coding errors is to forget a continuation character. If you want, you can improve the coding style a bit further, as follows:

Pipeline coding style, type 2.

   /* Example of coding style for PIPE commands */
   address command
   parse upper arg fileid
   'PIPE'                ,
      '|<' fileid        ,                   /* read the file */
      '|SPECS W2-4 1'    ,               /* take words 2 to 4 */
      '|NFIND MAINT'     ,          /* eliminate MAINT record */
      '|> OUTPUT FILE A'              /* write to output file */
   exit rc        

Note the alignment of the continuation characters. 

Some, however, prefer to write the stage separators at the end of the lines, as in the following example:

Pipeline coding style, type 3.

   /* Example of coding style for PIPE commands */
   address command
   parse upper arg fileid
   'PIPE'                ,
      '<' fileid'!'      ,                   /* read the file */
      'SPECS W2-4 1!'    ,               /* take words 2 to 4 */
      'NFIND MAINT!'     ,          /* eliminate MAINT record */
      '> OUTPUT FILE A'               /* write to output file */
   exit rc        

The advantage here is that there are no problems when trailing blanks are significant for some stage, and thus concatenation symbols aren't needed. We find the look less pleasing than the previous styles. Anyway, try to be consistent and choose one coherent style. 

Before VM/ESA Version 1, Release 1.1 and the general availability of CMS Pipelines in VM/ESA, REXX statements were limited to 512 bytes. As pipelines can become quite long and complex, this imposed too stringent a limitation. REXX was therefore modified to allow much longer statements (in practice, there is no limit).

Another advantage of the portrait format is that you can easily remove a stage from the pipeline. A permanent removal is of course a matter of deleting the line, but if, for testing reasons, you want to ignore a stage temporarily, you can use one of the following techniques:

  1. Put the line inside an enclosing comment, but pay attention to the continuation character! This is an example:

 /*   '|SPECS W2-4 1',               /* take words 2 to 4 */ */,

     Note the extra trailing continuation character!

  2. Move the line to a place outside the scope of your program. You can either move the record to a zone enclosed in comment records, like this:

  /*
        '|SPECS W2-4 1',           /* take words 2 to 4 */
  */

     or, more simply, if the procedure is not large, move the record to after the REXX EXIT statement so that it will never be executed.

Return codes

Each stage of the pipeline gives a return code when it ends, but PIPE returns only one return code.  PIPE returns the worst return code from all the stages in the pipeline. (Any negative return code is worse than any positive return code). 

You can specify PIPE options to display a message with the return code from each stage, and to list the stages that end with a nonzero return code, but this gives a flood of information that will not help you much in debugging. 

Look at this example:


  pipe command ACCESS 123 B/B | hole | command ACCESS 124 C/C        

This pipeline tries to access two minidisks, but if one of the virtual disk addresses doesn't exist, the return code of the PIPE will be 100 (invalid address), and you won't be able to tell which of the two commands caused it. We'll discuss ways to circumvent this kind of problem later.
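When the PIPE command is issued from a REXX procedure, you can at least test its aggregate return code, as in this minimal sketch (the ACCESS command is just an example):

   /* A sketch: test the return code of a PIPE command */
   address command
   'PIPE command ACCESS 123 B/B | hole'
   if rc <> 0 then say 'The PIPE ended with return code' rc
   exit rc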


Pipeline Help

There are two ways to get help information about CMS Pipelines. You can use the CMS HELP command, or you can use the HELP stage command of CMS Pipelines. Both ways are described below. 

Using the CMS HELP command

You can use the CMS HELP command to get reference information for the PIPE command, messages, stage commands, and pipeline subcommands. To display information about the PIPE command, enter:

      help pipe
   or
      help cms pipe        

To display a menu of available stage commands and pipeline subcommands, use:

      help pipe menu        

To find the most appropriate stage command for a specific task,

      help pipe task        

can be most useful. 

For a specific stage command, specify the stage command name. For example, to get information about the SPECS stage command, enter:

      help pipe specs        

Using the HELP stage command

CMS Pipelines includes a HELP stage command that displays information about CMS Pipelines messages, stage commands, and pipeline subcommands. 

To use it, enter a one-stage PIPE command:

      pipe help literal        

The above example gets help information for the LITERAL stage command. The word literal is an operand of the HELP stage command. To get help on any pipeline message, stage command, or pipeline subcommand, type its name as an operand of HELP. 

CMS Pipelines remembers messages it has issued. To get help on the message issued most recently, specify 0 as the operand on the HELP stage command:

      pipe help 0        

If you omit the operand, it defaults to 0. The following PIPE command yields the same result:

      pipe help        

CMS Pipelines remembers the ten messages issued before the last one. You can get help for any of these messages without knowing its message number. For instance, to get help for the next-to-last message, enter:

      pipe help 1        

With VM/ESA Version 1, Release 2.2, the help information as written by John Hartmann himself is accessible online. It is not documented in the official VM/ESA manuals. As said in the introduction of this lesson, CMS Pipelines was first available as a Program Offering. As users of that offering became acquainted with the way John explains things, his help is now provided again in VM/ESA (through the PIPELINE HELPLIB file on the Help disk). 

You can see John's HELP by issuing

      PIPE AHELP MENU

or

      PIPE AHELP STAGE_CMD

The way John explains things is sometimes a bit different and strange, but the examples included can differ from those in the standard HELP and be of extra use. 

You'll even discover other stage commands that are not officially documented or supported (sometimes indicated as experimental). Look for example at BROWSE. These may become officially supported in a later release of VM/ESA. 

You are at the end of this chapter. You should now select Chapter 2: Filters to continue the course.


Footnotes:

(1) Before that release of VM/ESA, CMS Pipelines was available as program 5785-RAC.


