

OpenSS7 STREAMS Programmer’s Guide

About This Manual

This is Edition 7.20141001, last updated 2014-10-25, of The OpenSS7 STREAMS Programmer’s Guide, for Version 1.1 release 7.20141001 of the OpenSS7 package.


Preface


Acknowledgements

As with most open source projects, this project would not have been possible without the valiant efforts and productive software of the Free Software Foundation, the Linux Kernel Community, and the open source software movement at large.


Sponsors

Funding for completion of the OpenSS7 package was provided in part by:

Monavacon Limited
OpenSS7 Corporation

Additional funding for The OpenSS7 Project was provided by:

Monavacon Limited
OpenSS7 Corporation
AirNet Communications
Comverse Ltd.
eServGlobal (NZ) Pty Ltd.
Excel Telecommunications
France Telecom
GeoLink SA
HOB International
Lockheed Martin Co.
Motorola
NetCentrex S. A.
Newnet Communications, Inc.
Nortel Networks
Performance Technologies, Inc.
Sonus Networks Inc.
SS8 Networks Inc.
SysMaster Corporation
TECORE
Tumsan Oy
Verisign
Vodare Ltd.

Contributors

The primary contributor to the OpenSS7 package is Brian F. G. Bidulock. The following is a list of notable contributors to The OpenSS7 Project:

- Per Berquist
- Kutluk Testicioglu
- John Boyd
- John Wenker
- Chuck Winters
- Angel Diaz
- Peter Courtney
- Jérémy Compostella
- Tom Chandler
- Sylvain Chouleur
- Gurol Ackman
- Christophe Nolibos
- Pierre Crepieux
- Bryan Shupe
- Christopher Lydick
- D. Milanovic
- Omer Tunali
- Tony Abo
- John Hodgkinson
- Others

Supporters

Over the years a number of organizations have provided continued support in the form of assessment, inspection, testing, validation and certification.


Telecommunications

Integrated Telecom Solutions
AASTRA
Accuris Networks
Aculab
Adax
AEPONA
AirNet Communications
Airwide Solutions
Alacre
Alcatel
Alcatel-Lucent
Altobridge
Anam
Apertio (now Nokia)
Alaska Power & Telephone
Aricent
Artesyn (now Emerson)
Arthus Technologies
Bharti Telesoft
BubbleMotion
Continuous Computing (Trillium)
Cellnext Solutions Limited
Cisco
Codent Networks
Cogeco Cable Inc.
Comverse Ltd.
Condor Networks
Coral Telecom
Corecess
Corelatus
Cosini
Data Connection
Datacraft
Datatek Applications Inc.
Datatronics
Dialogic
Digium
Druid Software
DTAG (Deutsche Telecom AG)
Empirix
Engage Communication Inc.
Ericsson
eServGlobal (NZ) Pty Ltd.
ETSI
Excel Telecommunications
Flextronics (now Aricent)
France Telecom
Gemini Mobile Technologies
Geolink (now SeaMobile)
Global Edge
Huawei
IBSYS Canada
Integral Access (now Telco Systems)
Integrat Mobile Aggregation Services
Kineto Wireless
Lucent
Maestro Communications
MCI
Mindspeed
Mobis
Mobixell
Motivity Telecom, Inc.
Motorola
Mpathix Inc.
m-Wise Inc.
Myriad Group
Net2Phone
NetCentrex S. A.
NetTest A/S (now Anritsu)
NeuvaTel PCS
Newnet Communications, Inc.
NMS (now Dialogic)
Noble Systems Corporation
Nokia
Nortel Networks
j2 Global Communications, Inc.
OnMobile
Orange
Ouroboros
P3 Solutions GmbH
Primal Technologies Inc.
Propolys Pte Ltd.
Performance Technologies, Inc.
Pulse Voice Inc.
Reliance Communications
Roamware Inc.
SONORYS Technology GmbH
Sonus Networks Inc.
Spider Ltd. (now Emerson)
SS8 Networks Inc.
Oasis Systems
Stratus
Stratus Technologies Bermuda Ltd.
Sicap AG
Switchlab Ltd.
Synapse Mobile Networks SA
SysMaster Corporation
Tata Communications
Tecore
Tekno Telecom LLC
Telcordia
Telecom Italia
Teledesign
Telemetrics Inc.
Telnor
TE-Systems
Texas Instruments Inc.
Tumsan Oy
Ulticom
Vanu Inc.
Vecto Communications SRL
Veraz Networks
VeriSign
Vodare Ltd.
VSE NET GmbH
The Software Group Limited
WINGcon GmbH
Wipro Technologies
Xentel Inc.
YCOM SA
ZTE Corporation

Aerospace and Military

Advanced Technologies
Altobridge
BBN (Bolt, Beranek, and Newman)
ARINC
Boldon James
ATOS Origin
Lockheed Martin Co.
Boeing
Northrop Grumman Corporation
QinetiQ
CRNA
SAAB
DSNA-DGAC
Sandia National Laboratories
DLR
Thales
DSNA-DTI
Wright-Patterson Air Force Base
Egis-Avia (Sofreavia)
MetaSlash, Inc.
Sofreavia
FAA WJHTC
Thales ATM/Air Systems

Financial, Business and Security

Alebra
Automated Trading Desk (now Citi)
Boldon James
Banco Credicoop
Fujitsu-Siemens
BeMac
FutureSoft
GSX
CyberSource Corporation
HOB International
HP (Hewlett-Packard)
IBM
Gcom, Inc.
Lightbridge (now CyberSource)
MasterCard
Alert Logic
Network Executive Software Inc.
Apani
Packetware Inc.
ERCOM
Prism Holdings Ltd.
Hitech Systems
S2 Systems (now ACI)
iMETRIK
Symicron Computer Communications Limited
Intrado Inc.

Education, Health Care and Nuclear Power

IEEE Computer Society
Ateb
ENST
Mandexin Systems Corporation
HTW-Saarland
Kansas State University
Areva NP
University of North Carolina Charlotte
European Organization for Nuclear Research

Agencies

It would be difficult for the OpenSS7 Project to attain the conformance and certifications that it has without the free availability of specification documents and standards from standards bodies and industry associations. In particular, the following:

3GPP (Third Generation Partnership Project)
ATM Forum
EIA/TIA (Electronic Industries Alliance)
ETSI (European Telecommunications Standards Institute)
ICAO (International Civil Aviation Organization)
IEEE (Institute of Electrical and Electronic Engineers)
IETF (The Internet Engineering Task Force)
ISO (International Organization for Standardization)
ITU (International Telecommunications Union)
Multiservice Forum
The Open Group

Of these, ICAO, ISO, IEEE and EIA have made at least some documents publicly available. ANSI is notably missing from the list: at one time draft documents were available from ANSI (ATIS), but that was curtailed some years ago. Telcordia does not release any standards publicly. Hopefully these organizations will see the light and realize, as the others have, that to remain current as a standards organization in today’s digital economy requires providing individuals with free access to documents.


Authors

The authors of the OpenSS7 package include:

- Brian Bidulock

Maintainer

The maintainer of the OpenSS7 package is:

- Brian Bidulock

Please send bug reports to bugs@openss7.org using the send-pr script included in the package, only after reading the BUGS file in the release, or see ‘Problem Reports’.

Document Information

Notice

This package is released and distributed under the GNU Affero General Public License (see GNU Affero General Public License). Please note, however, that there are different licensing terms for the manual pages and some of the documentation (derived from OpenGroup6 publications and other sources). Consult the permission notices contained in the documentation for more information.

This document is released under the GNU Free Documentation License (see GNU Free Documentation License) with no sections invariant.

Abstract

This document provides a STREAMS Programmer’s Guide for OpenSS7.

Objective

The objective of this document is to provide a guide for the STREAMS programmer when developing STREAMS modules, drivers and application programs for OpenSS7.

This guide provides information to developers on the use of the STREAMS mechanism at user and kernel levels.

STREAMS was incorporated in UNIX System V Release 3 to augment the character input/output (I/O) mechanism and to support development of communication services.

STREAMS provides developers with integral functions, a set of utility routines, and facilities that expedite software design and implementation.

Intent

The intent of this document is to act as an introductory guide for the STREAMS programmer. It is intended to be read alone and is not intended to replace or supplement the OpenSS7 manual pages. As a reference for writing code, the manual pages (see STREAMS(9)) serve the programmer better. Although this guide describes the features of the OpenSS7 package, OpenSS7 Corporation is under no obligation to provide any software, system or feature listed herein.

Audience

This document is intended for a highly technical audience. The reader should already be familiar with Linux kernel programming, the Linux file system, character devices, driver input and output, interrupts, software interrupt handling, scheduling, process contexts, multiprocessor locks, etc.

The guide is intended for network and systems programmers, who use the STREAMS mechanism at user and kernel levels for Linux and UNIX system communication services.

Readers of the guide are expected to possess prior knowledge of the Linux and UNIX system, programming, networking, and data communication.

Revisions

Take care that you are working with a current version of this document: you will not be notified of updates. To ensure that you are working with a current version, contact the Author, or check The OpenSS7 Project website for a current version.

A current version of this document is normally distributed with the OpenSS7 package.

Version Control

$Log: SPG2.texi,v $
Revision 1.1.2.3  2011-07-27 07:52:12  brian
- work to support Mageia/Mandriva compressed kernel modules and URPMI repo

Revision 1.1.2.2  2011-02-07 02:21:33  brian
- updated manuals

Revision 1.1.2.1  2009-06-21 10:40:06  brian
- added files to new distro

ISO 9000 Compliance

Only the TeX, texinfo, or roff source for this document is controlled. An opaque (printed, postscript or portable document format) version of this document is an UNCONTROLLED VERSION.

Disclaimer

OpenSS7 Corporation disclaims all warranties with regard to this documentation including all implied warranties of merchantability, fitness for a particular purpose, non-infringement, or title; that the contents of the document are suitable for any purpose, or that the implementation of such contents will not infringe on any third party patents, copyrights, trademarks or other rights. In no event shall OpenSS7 Corporation be liable for any direct, indirect, special or consequential damages or any damages whatsoever resulting from loss of use, data or profits, whether in an action of contract, negligence or other tortious action, arising out of or in connection with any use of this document or the performance or implementation of the contents thereof.

OpenSS7 Corporation reserves the right to revise this software and documentation for any reason, including but not limited to, conformity with standards promulgated by various agencies, utilization of advances in the state of the technical arts, or the reflection of changes in the design of any techniques, or procedures embodied, described, or referred to herein. OpenSS7 Corporation is under no obligation to provide any feature listed herein.

U.S. Government Restricted Rights

If you are licensing this Software on behalf of the U.S. Government ("Government"), the following provisions apply to you. If the Software is supplied by the Department of Defense ("DoD"), it is classified as "Commercial Computer Software" under paragraph 252.227-7014 of the DoD Supplement to the Federal Acquisition Regulations ("DFARS") (or any successor regulations) and the Government is acquiring only the license rights granted herein (the license rights customarily provided to non-Government users). If the Software is supplied to any unit or agency of the Government other than DoD, it is classified as "Restricted Computer Software" and the Government’s rights in the Software are defined in paragraph 52.227-19 of the Federal Acquisition Regulations ("FAR") (or any successor regulations) or, in the cases of NASA, in paragraph 18.52.227-86 of the NASA Supplement to the FAR (or any successor regulations).

Organization

This guide has several chapters, each discussing a unique topic. Introduction, Overview, Mechanism and Processing contain introductory information and can be ignored by those already familiar with STREAMS concepts and facilities.

This document is organized as follows:

Preface

Describes the organization and purpose of the guide. It also defines an intended audience and an expected background of the users of the guide.

Introduction

An introduction to STREAMS and the OpenSS7 package. Presents an overview of STREAMS fundamentals and the benefits of STREAMS.

Overview

A brief overview of STREAMS.

Mechanism

A description of the STREAMS framework. Describes the basic operations for constructing, using, and dismantling Streams. These operations are performed using open(2s), close(2s), read(2s), write(2s), and ioctl(2s).

Processing

Processing and procedures within the STREAMS framework. Gives an overview of the STREAMS put and service routines.

Messages

STREAMS Messages, organization, types, priority, queueing, and general handling. Discusses STREAMS messages, their structure, linkage, queueing, and interfacing with other STREAMS components.

Polling

Polling of STREAMS file descriptors and other asynchronous application techniques. Describes how STREAMS allows user processes to monitor, control, and poll Streams to allow an effective utilization of system resources.

Modules and Drivers

An overview of STREAMS modules, drivers and multiplexing drivers. Describes the STREAMS module and driver environment, input-output controls, routines, declarations, flush handling, driver-kernel interface, and also provides general design guidelines for modules and drivers.

Modules

Details of STREAMS modules, including examples. Provides information on module construction and function.

Drivers

Details of STREAMS drivers, including examples. Discusses STREAMS drivers, elements of driver flow control, flush handling, cloning, and processing.

Multiplexing

Details of STREAMS multiplexing drivers, including examples. Describes the STREAMS multiplexing facility.

Pipes and FIFOs

Details of STREAMS-based Pipes and FIFOs. Provides information on creating, writing, reading, and closing of STREAMS-based pipes and FIFOs and unique connections.

Terminal Subsystem

Details of STREAMS-based Terminals and Pseudo-terminals. Discusses STREAMS-based terminal and pseudo-terminal subsystems.

Synchronization

Discusses STREAMS in a symmetrical multi-processor environment.

Reference

Reference section.

Conformance

Conformance of the OpenSS7 package to other UNIX implementations of STREAMS.

Portability

Portability of STREAMS modules and drivers written for other UNIX implementations of STREAMS, and how they can most easily be ported to OpenSS7. For more detail on this topic, see the OpenSS7 - STREAMS Portability Guide.

Development

Development guidelines for developing portable STREAMS modules and drivers.

Data Structures

Primary STREAMS Data Structures, descriptions of their members, flags, constants and use. Summarizes data structures commonly used by STREAMS modules and drivers.

Message Types

STREAMS Message Type reference, with descriptions of each message type. Describes STREAMS messages and their use.

Utilities

STREAMS kernel-level utility functions for the module or driver writer. Describes STREAMS utility routines and their usage.

Debugging

STREAMS debugging facilities and their use. Provides debugging aids for developers.

Configuration

STREAMS configuration, the STREAMS Administrative Driver and the autopush facility. Describes how modules and drivers are configured into the Linux and UNIX system, tunable parameters, and the autopush facility.

Administration

Administration of the STREAMS subsystem.

Examples

Collected examples.

Conventions Used

This guide uses texinfo typographical conventions.

Throughout this guide, the word STREAMS will refer to the mechanism and the word Stream will refer to the path between a user application and a driver. In connection with STREAMS-based pipes, Stream refers to the data transfer path in the kernel between one or more user processes.

Examples are given to highlight the most important and common capabilities of STREAMS. They are not exhaustive and, for simplicity, often reference fictional drivers and modules. Some examples are also present in the OpenSS7 package, both for testing and example purposes.

System calls, STREAMS utility routines, header files, and data structures are given using texinfo filename typesetting, when they are mentioned in the text.

Variable names, pointers, and parameters are given using texinfo variable typesetting conventions. Routine, field, and structure names unique to the examples are also given using texinfo variable typesetting conventions when they are mentioned in the text.

Declarations and short examples are in texinfo ‘sample’ typesetting.

texinfo displays are used to show program source code.

Data structure formats are also shown in texinfo displays.

Other Documentation

Although the STREAMS Programmer’s Guide for OpenSS7 provides a guide to aid in developing STREAMS applications, readers are encouraged to consult the OpenSS7 manual pages. As a reference for writing code, the manual pages (see STREAMS(9)) serve the programmer better. Detailed information on the system calls used by STREAMS is given in section 2 of the manual pages, and on STREAMS utilities in section 8. STREAMS-specific input-output control (ioctl) calls are described in streamio(7). STREAMS modules and drivers are described in section 7. STREAMS is also described to some extent in the System V Interface Definition, Third Edition.

UNIX Edition

This system conforms to UNIX System V Release 4.2 for Linux.

Related Manuals

OpenSS7 Installation and Reference Manual

Copyright

© 1997-2014 Monavacon Limited. All Rights Reserved.


1 Introduction


1.1 Background

STREAMS is a facility first presented in a paper by Dennis M. Ritchie in 1984, originally implemented on 4.1BSD and later part of Bell Laboratories Eighth Edition UNIX, incorporated into UNIX System V Release 3.0 and enhanced in UNIX System V Release 4 and UNIX System V Release 4.2. STREAMS was used in SVR4 for terminal input/output, pseudo-terminals, pipes, named pipes (FIFOs), interprocess communication and networking. Since its release in System V Release 4, STREAMS has been implemented across a wide range of UNIX, UNIX-like, and UNIX-based systems, making its implementation and use an ipso facto standard.

STREAMS is a facility that allows for a reconfigurable full duplex communications path, Stream, between a user process and a driver in the kernel. Kernel protocol modules can be pushed onto and popped from the Stream between the user process and driver. The Stream can be reconfigured in this way by a user process. The user process, neighbouring protocol modules and the driver communicate with each other using a message passing scheme closely related to MOM (Message Oriented Middleware). This permits a loose coupling between protocol modules, drivers and user processes, allowing a third-party and loadable kernel module approach to be taken toward the provisioning of protocol modules on platforms supporting STREAMS.

On UNIX System V Release 4.2, STREAMS was used for terminal input-output, pipes, FIFOs (named pipes), and network communications. Modern UNIX, UNIX-like and UNIX-based systems providing STREAMS normally support some degree of network communications using STREAMS; however, many do not support STREAMS-based pipes and FIFOs or terminal input-output.

Linux has not traditionally implemented a STREAMS subsystem. It is not clear why; however, perceived ideological differences between STREAMS and Sockets, and between the XTI/TLI and Sockets interfaces to Internet Protocol services, are usually at the centre of the debate. For additional details on the debate, see About This Manual in OpenSS7 Frequently Asked Questions.

Linux pipes and FIFOs are SVR3-style, and the Linux terminal subsystem is BSD-like. UNIX 98 pseudo-terminals, ptys, have a specialized implementation that does not follow the STREAMS framework and, therefore, does not support the pushing or popping of STREAMS modules. The internal networking implementation under Linux follows the BSD approach with a native (system call) Sockets interface only.

RedHat at one time provided an Intel Binary Compatibility Suite (iBCS) module for Linux that supported the XTI/TLI interface and socksys system calls and input-output controls, but not the STREAMS framework (and therefore could not push or pop modules).

OpenSS7 is the current open source implementation of STREAMS for Linux and provides all of the capabilities of UNIX System V Release 4.2 MP, plus support for mainstream UNIX implementations based on UNIX System V Release 4.2 MP through compatibility modules.

Although it is intended primarily as documentation for the OpenSS7 implementation of STREAMS, much of the OpenSS7 - STREAMS Programmer’s Guide is generally applicable to all STREAMS implementations.


1.2 What is STREAMS?

STREAMS is a flexible, message oriented framework for the development of GNU/Linux communications facilities and protocols. It provides a set of system calls, kernel resources, and kernel utilities within a framework that is applicable to a wide range of communications facilities including terminal subsystems, interprocess communication, and networking. It provides standard interfaces for communication input and output within the kernel, common facilities for device drivers, and a standard interface between the kernel and the rest of the GNU/Linux system.

The standard interface and mechanism enable modular, portable development and easy integration of high performance network services and their components. Because it is a message passing architecture, STREAMS does not impose a specific network architecture (as does the BSD Sockets kernel architecture). The STREAMS user interface uses the familiar UNIX character special file input and output mechanisms open(2s), read(2s), write(2s), ioctl(2s), close(2s); and provides additional system calls, poll(2s), getmsg(2s), getpmsg(2s), putmsg(2s), putpmsg(2s), to assist in message passing between user-level applications and kernel-resident modules. Also, STREAMS defines a standard set of input-output controls (ioctl(2s)) for manipulation and configuration of STREAMS by a user-space application.

As a message passing architecture, the STREAMS interface between the user process and kernel resident modules can be treated either as fully synchronous exchanges or can be treated asynchronously for maximum performance.
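As a hedged illustration of the message-passing calls named above, the following sketch sends and receives one message on a Stream. It assumes a system with a STREAMS subsystem installed (such as the OpenSS7 package) and its stropts.h header; the device name /dev/echo is hypothetical and will not exist on a stock Linux system:

```c
/* Illustrative sketch only: requires a STREAMS subsystem (e.g. the
 * OpenSS7 package); /dev/echo is a hypothetical STREAMS device. */
#include <fcntl.h>
#include <string.h>
#include <stropts.h>		/* strbuf, putmsg(2s), getmsg(2s) */
#include <unistd.h>

int
send_and_receive(void)
{
	struct strbuf ctrl, data;
	char cbuf[64], dbuf[256];
	int fd, flags = 0;

	if ((fd = open("/dev/echo", O_RDWR)) == -1)
		return -1;
	/* Send a message with a control part and a data part. */
	ctrl.len = strlen("request");
	ctrl.buf = "request";
	data.len = strlen("payload");
	data.buf = "payload";
	if (putmsg(fd, &ctrl, &data, 0) == -1)
		goto fail;
	/* Retrieve the next message from the Stream head read queue. */
	ctrl.maxlen = sizeof(cbuf); ctrl.buf = cbuf;
	data.maxlen = sizeof(dbuf); data.buf = dbuf;
	if (getmsg(fd, &ctrl, &data, &flags) == -1)
		goto fail;
	close(fd);
	return 0;
fail:
	close(fd);
	return -1;
}
```

The same exchange could be treated asynchronously by opening the Stream non-blocking and waiting for readability with poll(2s) before calling getmsg(2s).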

1.2.1 Characteristics

STREAMS has the following characteristics that are not exhibited (or are exhibited in different ways) by other kernel level subsystems:

  • STREAMS is based on the character device special file which is one of the most flexible special files available in the GNU/Linux system.
  • STREAMS is a message passing architecture, similar to Message Oriented Middleware (MOM), that achieves a high degree of functional decoupling between modules. This allows the service interface between modules to correspond to the natural interfaces found or described between protocol layers in a protocol stack without requiring the implementation to conform to any given model.

    As a contrasting example, the BSD Sockets implementation, internal to the kernel, provides strict socket-protocol, protocol-protocol and protocol-device function call interfaces.

  • By using put and service procedures for each module, and scheduling service procedures, STREAMS combines background scheduling of coroutine service procedures with message queueing and flow control to provide a mechanism robust enough for both event-driven subsystems and soft real-time subsystems.

    In contrast, BSD Sockets, internal to the kernel, requires the sending component across the socket-protocol, protocol-protocol, or protocol-device to handle flow control. STREAMS integrates flow control within the STREAMS framework.

  • STREAMS permits user runtime configuration of kernel data structures and modules to provide for a wide range of novel configurations and capabilities in a live GNU/Linux system. The BSD Sockets protocol framework does not provide this capability.
  • STREAMS is as applicable to terminal input-output and interprocess communication as it is to networking protocols.

    BSD Sockets is only applicable to a restricted range of networking protocols.

  • STREAMS provides mechanisms (the pushing and popping of modules, and the linking and unlinking of Streams under multiplexing drivers) for complex configuration of protocol stacks; the precise topology being typically under the control of user space daemon processes.

    No other kernel protocol stack framework provides this flexible capability. Under BSD Sockets it is necessary to define specialized socket types to perform these configuration functions and not in any standard way.

1.2.2 Components

STREAMS provides a full-duplex communications path for data and control information between a kernel-resident driver and a user space process (see Figure 101).

Within the kernel, a Stream is comprised of the following basic components:

  • A Stream head that is inside the Linux kernel, but which sits closest to the user space process. The Stream head is responsible for communicating with user space processes and presents the standard STREAMS I/O interface to user space processes and applications.
  • A Stream end or Driver that is inside the Linux kernel, but which sits farthest from the user space process and interfaces to hardware or other mechanisms within the Linux kernel.
  • A Module that sits between the Stream head and Stream end. The Module provides modular and flexible processing of control and data information passed up and down the Stream.

Figure 101. Simple Stream

1.2.2.1 Stream head

A Stream head is the component of a Stream that is closest to the user space process. The Stream head is responsible for directly communicating with the user space process in user context and for converting system calls to actions performed on the Stream head, or for the conversion of control and data information passed between the user space process and the Stream in response to system calls. All Streams are associated with a Stream head. In the case of STREAMS-based pipes, the Stream may be associated with two (interconnected) Stream heads. Because the Stream head follows the same structure as a Module, it can be viewed as a specialized module.

With STREAMS, pipes and FIFOs are also STREAMS-based. STREAMS-based pipes and FIFOs do not have a Driver component.

STREAMS-based pipes place another Stream head in the position of the Driver. That is, a STREAMS-based pipe is a full-duplex communications path between two otherwise independent Stream heads. Modules may be placed between the Stream heads in the same fashion as they can exist between a Stream head and a Driver in a normal Stream. A STREAMS-based pipe is illustrated in Figure 102.


Figure 102. STREAMS-based Pipe

STREAMS-based FIFOs consist of a single Stream head that has its downstream path connected to its upstream path where the Driver would be located. Modules can be pushed under this single Stream Head. A STREAMS-based FIFO is illustrated in Figure 109.


Figure 109. STREAMS-based FIFO (named pipe)

For more information on STREAMS-based pipes and FIFOs, see Pipes and FIFOs.

1.2.2.2 Module

A STREAMS Module is an optional processing element that is placed between the Stream head and the Stream end. The Module can perform processing functions on the data and control information flowing in either direction on the Stream. It can communicate with neighbouring modules, the Stream head or a Driver using STREAMS messages. Each Module is self-contained in the sense that it does not directly invoke functions provided by, nor access data structures of, neighbouring modules, but rather communicates data, status and control information using messages. This functional isolation provides a loose coupling that permits flexible recombination and reuse of Modules. A Module follows the same framework as the Stream head and Driver, has all of the same entry points and can use all of the same STREAMS and kernel utilities to perform its function.

Modules can be inserted between a Stream head and Stream end (or another Stream head in the case of a STREAMS-based pipe or FIFO). The insertion and deletion of Modules from a Stream is referred to as pushing and popping a Module because modules are inserted or removed from just beneath the Stream head in a push-down stack fashion. Pushing and popping of modules can be performed using standard ioctl(2s) calls and can be performed by user space applications without any need for kernel programming, assembly, or relinking.
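As a sketch of pushing and popping from user space, the standard I_PUSH and I_POP input-output controls might be used as follows. This requires a STREAMS subsystem and its stropts.h header; the device name /dev/xxx is hypothetical, and "timod" stands in for whatever module is configured on the system:

```c
/* Sketch: push and pop a module on a Stream using the standard
 * STREAMS input-output controls.  Requires a STREAMS subsystem;
 * the device name here is hypothetical. */
#include <fcntl.h>
#include <stropts.h>		/* I_PUSH, I_POP */
#include <unistd.h>

int
push_pop_example(void)
{
	int fd;

	if ((fd = open("/dev/xxx", O_RDWR)) == -1)
		return -1;
	/* Insert the named module just beneath the Stream head. */
	if (ioctl(fd, I_PUSH, "timod") == -1)
		goto fail;
	/* ... use the reconfigured Stream ... */
	/* Remove the module nearest the Stream head. */
	if (ioctl(fd, I_POP, 0) == -1)
		goto fail;
	close(fd);
	return 0;
fail:
	close(fd);
	return -1;
}
```

Because the modules stack beneath the Stream head, repeated I_PUSH calls build the Stream top-down, and I_POP always removes the most recently pushed module first.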

For more information on STREAMS modules, see Module Component.

1.2.2.3 Driver

All Streams, with the sole exception of STREAMS-based pipes and FIFOs, contain a Driver at the Stream end. A STREAMS Driver can either be a device driver that directly or indirectly controls hardware, or can be a pseudo-device driver that interfaces with other software subsystems within the kernel. STREAMS drivers normally perform little processing within the STREAMS framework and typically only provide conversion between STREAMS messages and hardware or software events (e.g. interrupts) and conversion between STREAMS framework data structures and device related data structures.

For more information on STREAMS drivers, see Driver Component.

1.2.2.4 Queues

Each component in a Stream (Stream head, Module, Driver) has an associated pair of queues. One queue in each pair is responsible for managing the message flow in the downstream direction from Stream head to Stream end; the other for the upstream direction. The downstream queue is called the write-side queue in the queue pair; the upstream queue, the read-side queue.

Each queue in the pair provides pointers necessary for organizing the temporary storage and management of STREAMS messages on the queue, as well as function pointers to procedures to be invoked when messages are placed on the queue or need to be taken off of the queue, and pointers to auxiliary and module-private data structures. The read-side queue also contains function pointers to procedures used to open and close the Stream head, Module or Driver instance associated with the queue pair. Queue pairs are dynamically allocated when an instance of the driver, module or Stream head is created and deallocated when the instance is destroyed.
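In code, the procedure and limit pointers described above are supplied through the struct qinit and struct module_info declarations of the SVR4 STREAMS DDI/DKI. The following kernel-side sketch shows how a queue pair is declared; the module name, id number and water marks are illustrative, not taken from any real module:

```c
/* Kernel-side sketch of the declarations behind a queue pair.
 * Assumes the SVR4 STREAMS DDI/DKI headers; names and limits
 * here are illustrative. */
#include <sys/stream.h>

static struct module_info minfo = {
	0x5555,			/* mi_idnum: module id number */
	"example",		/* mi_idname: module name */
	0,			/* mi_minpsz: minimum packet size */
	INFPSZ,			/* mi_maxpsz: maximum packet size */
	8192,			/* mi_hiwat: high-water mark */
	1024,			/* mi_lowat: low-water mark */
};

static int example_rput(queue_t *q, mblk_t *mp);
static int example_wput(queue_t *q, mblk_t *mp);
static int example_open(queue_t *q, dev_t *devp, int oflag,
			int sflag, cred_t *crp);
static int example_close(queue_t *q, int oflag, cred_t *crp);

/* The read-side qinit carries the open and close entry points. */
static struct qinit rinit = {
	example_rput, NULL, example_open, example_close, NULL, &minfo, NULL
};
static struct qinit winit = {
	example_wput, NULL, NULL, NULL, NULL, &minfo, NULL
};

/* The streamtab ties the read- and write-side definitions together. */
struct streamtab exampleinfo = {
	&rinit, &winit, NULL, NULL
};
```

The two trailing NULL members of the streamtab are the multiplexing lower-queue definitions, used only by multiplexing drivers.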

For more information on STREAMS queues, see Queue Component.

1.2.2.5 Messages

STREAMS is a message passing architecture. STREAMS messages can contain control information or data, or both. Messages that contain control information are intended to elicit a response from a neighbouring module, Stream head or Stream end. The control information typically uses the message type to invoke a general function and the fields in the control part of the message as arguments to a call to the function. The data portion of a message represents information that is (from the perspective of the STREAMS framework) unstructured. Only cooperating modules, the Stream head or Stream end need know or agree upon the format of control or data messages.

A STREAMS message consists of one or more blocks. Each block is a 3-tuple of a message block, a data block and a data buffer. Each data block has a message type, and the data buffer contains the control information or data associated with each block in the message. STREAMS messages typically consist of one control-type block (M_PROTO) and zero or more data-type blocks (M_DATA), or just a data-type block.
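A kernel-side sketch of building such a two-block message, an M_PROTO control block linked to an M_DATA block, might look like the following, assuming the standard STREAMS utility routines (allocb, freemsg):

```c
/* Sketch: build a message with an M_PROTO control block linked to an
 * M_DATA block, using the standard STREAMS utility routines. */
#include <sys/stream.h>

mblk_t *
build_proto_message(const void *prim, size_t plen,
		    const void *data, size_t dlen)
{
	mblk_t *cp, *dp;

	/* Control block: primitive and its arguments. */
	if ((cp = allocb(plen, BPRI_MED)) == NULL)
		return (NULL);
	cp->b_datap->db_type = M_PROTO;
	bcopy(prim, cp->b_wptr, plen);
	cp->b_wptr += plen;

	/* Data block: db_type is M_DATA by default after allocb(). */
	if ((dp = allocb(dlen, BPRI_MED)) == NULL) {
		freemsg(cp);
		return (NULL);
	}
	bcopy(data, dp->b_wptr, dlen);
	dp->b_wptr += dlen;

	cp->b_cont = dp;	/* link the data block behind the control block */
	return (cp);
}
```

The b_cont pointer is what chains the blocks of one message together; a separate pointer, b_next, chains whole messages on a queue.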

A set of specialized and standard message types define messages that can be sent by a module or driver to control the Stream head. A set of specialized and standard message types define messages that can be sent by the Stream head to control a module or driver, normally in response to a standard input-output control for the Stream.

STREAMS messages are passed between a module, Stream head or Driver using a put procedure associated with the queue in the queue pair for the direction in which the message is being passed. Messages passed toward the Stream head are passed in the upstream direction, and those toward the Stream end, in the downstream direction. The read-side queue in the queue pair associated with the module instance to which a message is passed is responsible for processing or queueing upstream messages; the write-side queue, for processing downstream messages.

STREAMS messages are generated by the Stream head and passed downstream in response to write(2s), putmsg(2s), and putpmsg(2s) system calls; they are also consumed by the Stream head and converted to information passed to user space in response to read(2s), getmsg(2s), and getpmsg(2s) system calls.

STREAMS messages are also generated by the Driver and passed upstream to ultimately be read by the Stream head; they are also consumed when written by the Stream head and ultimately arrive at the Driver.

For more information on STREAMS messages, see Message Component.


1.3 Basic Streams Operations

This section provides a basic description of the user level interface and system calls that are used to manipulate a Stream.

A Stream is similar to, and indeed is implemented as, a character device special file and is associated with a character device within the GNU/Linux system. Each STREAMS character device special file (character device node, see mknod(2)) has associated with it a major and minor device number. In the usual situation, a Stream is associated with each minor character device node in a similar fashion to a minor device instance for regular character device drivers.

STREAMS devices are opened, as are character device drivers, with the open(2s) system call. Opening a minor device node accesses a separate Stream instance between the user level process and the STREAMS device driver. As with normal character devices, the file descriptor returned from the open(2s) call can be used to further access the Stream.

Opening a minor device node for the first time results in the creation of a new instance of a Stream between the Stream head and the driver. Subsequent opens of the same minor device node do not result in the creation of a new Stream, but provide another file descriptor that can be used to access the same Stream instance. Only the first open of a minor device node will result in the creation of a new Stream instance.

Once it has opened a Stream, the user level process can send and receive data to and from the Stream with the usual read(2s) and write(2s) system calls that are compatible with the existing character device interpretations of these system calls. STREAMS also provides the additional system calls, getmsg(2s) and getpmsg(2s), to read control and data information from the Stream, as well as putmsg(2s) and putpmsg(2s) to write control and data information. These additional system calls provide a richer interface to the Stream than is provided by the traditional read(2s) and write(2s) calls.

A Stream is closed using the close(2s) system call (or a call that closes file descriptors such as exit(2)). If a number of processes have the Stream open, only the last close(2s) of a Stream will result in the destruction of the Stream instance.

1.3.1 Basic Operations Example

A basic example of opening, reading from and writing to a Stream driver is shown in Listing 1.1.

#include <sys/types.h>
#include <sys/stat.h>
#include <sys/uio.h>
#include <unistd.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
        char buf[1024];
        int fd, count;

        if ((fd = open("/dev/streams/comm/1", O_RDWR)) < 0) {
                perror("open failed");
                exit(1);
        }

        while ((count = read(fd, buf, 1024)) > 0) {
                if (write(fd, buf, count) != count) {
                        perror("write failed");
                        break;
                }
        }
        exit(0);
}

Listing 1.1: Basic Operations

The example in Listing 1.1 is for a communications device that provides a communications channel for data transfer between two processes or hosts. Data written to the device is communicated over the channel to the remote process or host. Data read from the device is data that was written by the remote process or host.

In the example in Listing 1.1, a simple Stream is opened using the open(2s) call. /dev/streams/comm/1 is the path to the character minor device node in the file system. When the device is opened, the character device node is recognized as a STREAMS special file, and the STREAMS subsystem creates a Stream (if one does not already exist for the minor device node) and associates it with the minor device node. Figure 103 illustrates the state of the Stream at the point after the open(2s) call returns.

Stream to Communications Driver

Figure 103. Stream to Communications Driver

The while loop in Listing 1.1 simply reads data from the device using the read(2s) system call and then writes the data back to the device using the write(2s) system call.

When a Stream is opened for blocking operation (i.e., neither O_NONBLOCK nor O_NDELAY was set), read(2s) will block until some data arrives. The read(2s) call might, however, return less than the requested ‘1024’ bytes. When data is read, the routine simply writes the data back to the device.

STREAMS implements flow control both in the upstream and downstream directions. Flow control limits the amount of normal data that can be queued awaiting processing within the Stream. High and low water marks for flow control are set on a queue pair basis. Flow control is local and specific to a given Stream. High priority control messages are not subject to STREAMS flow control.

When a Stream is opened for blocking operation (i.e., neither O_NONBLOCK nor O_NDELAY was set), write(2s) will block while waiting for flow control to subside. write(2s) will always block awaiting the availability of STREAMS message blocks to satisfy the call, regardless of the setting of O_NONBLOCK or O_NDELAY.

In the example in Listing 1.1, the exit(2) system call is used to exit the program; however, the exit(2) results in the equivalent of a call to close(2s) for all open file descriptors and the Stream is flushed and destroyed before the program is finally exited.


1.4 Components

This section briefly describes each STREAMS component and how they interact within a Stream. Chapters later in this manual describe the components and their interaction in greater detail.


1.4.1 Queues

This subsection provides a brief overview of message queues and their associated procedures.

A queue provides an interface between an instance of a STREAMS driver, module or Stream head, and the other modules and drivers that make up a Stream for a direction of message flow (i.e., upstream or downstream). When an instance of a STREAMS driver, module or Stream head is associated with a Stream, a pair of queues are allocated to represent the driver, module or Stream head within the Stream. Queue data structures are always allocated in pairs. The first queue in the pair is the read-side or upstream queue in the pair; the second queue, the write-side or downstream queue.

Queues are described in greater detail in Queues and Priority.

1.4.1.1 Queue Procedures

This subsection provides a brief overview of queue procedures.

The STREAMS module, driver or Stream head provides five procedures that are associated with each queue in a queue pair: the put, service, open, close and admin procedures. Normally the open and close procedures (and possibly the optional admin procedure) are only associated with the read-side of the queue pair.

Each queue in the pair has a pointer to a put procedure. The put procedure is used by STREAMS to present a new message to an upstream or downstream queue. At the ends of the Stream, the Stream head write-side, or Stream end read-side, queue put procedure is normally invoked using the put(9s) utility. A module within the Stream typically has its put procedure invoked by an adjacent module, driver or Stream head that uses the putnext(9) utility from its own put or service procedure to pass messages to adjacent modules. The put procedure of the queue receiving the message is invoked. The put procedure decides whether to process the message immediately, queue the message on the message queue for later processing by the queue’s service procedure, or pass the message to a subsequent queue using putnext(9).

Each queue in the pair has a pointer to an optional service procedure. The purpose of a service procedure is to process messages that were deferred by the put procedure by being placed on the message queue with utilities such as putq(9). A service procedure typically loops, taking messages off of the queue and processing them. The procedure normally terminates the loop when it cannot process the current message (in which case it places the message back on the queue with putbq(9)), or when there are no messages left on the queue to process. A service procedure is optional in the sense that if the put procedure never places any messages on the queue, a service procedure is unnecessary.

Each queue in the pair also has pointers to an open and a close procedure; however, the qi_qopen and qi_qclose pointers are only significant in the read-side queue of the queue pair.

The queue open procedure for a driver is called each time that a driver (or Stream head) is opened, including the first open that creates a Stream and upon each successive open of the Stream. The queue open procedure for a module is called when the module is first pushed onto (inserted into) a Stream, and for each successive open of a Stream upon which the module has already been pushed (inserted).

The queue close procedure for a module is called whenever the module is popped (removed) from a Stream. Modules are automatically popped from a Stream on the last close of the Stream. The queue close procedure for a driver is called upon the last close of the Stream or when the last reference to the Stream is relinquished. If the Stream is linked under a multiplexing driver (I_LINK(7) (see streamio(7))), or has been named with fattach(3), then the Stream will not be dismantled on the last close and the close procedure is not called until the Stream is eventually unlinked (I_UNLINK(7) (see streamio(7))) or detached (fdetach(3)).

Procedures are described in greater detail in Procedures.


1.4.2 Messages

This subsection provides a brief overview of STREAMS messages.

In fitting with the concept of function decoupling, all control and data information is passed between STREAMS modules, drivers and the Stream head using messages. Utilities are provided to the STREAMS module writer for passing messages using queue and message pointers. STREAMS messages consist of a 3-tuple of a message block structure (msgb(9)), a data block structure (datab(9)) and a data buffer. The message block structure is used to provide an instance of a reference to a data block and pointers into the data buffer. The data block structure is used to provide information about the data buffer, such as message type, separate from the data contained in the buffer. Messages are normally passed between STREAMS modules, drivers and the Stream head using utilities that invoke the target module’s put procedure, such as put(9s), putnext(9) and qreply(9). Messages travel along a Stream with successive invocations of each driver, module and Stream head’s put procedure.

Messages are described in greater detail in Messages Overview and Messages.

1.4.2.1 Message Types

This subsection provides a brief overview of STREAMS message types.

Each data block (datab(9)) is assigned a message type. The message type discriminates the use of the message by drivers, modules and the Stream head. Most of the message types may be assigned by a module or driver when it generates a message, and the message type can be modified as a part of message processing. The Stream head uses a wider set of message types to perform its function of converting the functional interface to the user process into the messaging interface used by STREAMS modules and drivers.

Most of the defined message types (see Message Type Overview, and Message Types) are solely for use within the STREAMS framework. A more limited set of message types (M_PROTO, M_PCPROTO and M_DATA) can be used to pass control and data information to and from the user process via the Stream head. These message types can be generated and consumed using the read(2s), write(2s), getmsg(2s), getpmsg(2s), putmsg(2s) and putpmsg(2s) system calls and some streamio(7) ioctl(2s) commands.

Message types are described in detail in Message Type Overview and Message Types.

1.4.2.2 Message Linkage

Message blocks of differing types can be linked together into composite messages as illustrated in Figure 104.

A Message

Figure 104. A Message

Messages, once allocated, or when removed from a queue, exist standalone (i.e., they are not attached to any queue). Messages normally exist standalone when they have first been allocated by an interrupt service routine, or by the Stream head. They are placed into the Stream by the driver (at the Stream end) or the Stream head by calling put(9s). After being inserted into a Stream, messages normally exist standalone only in a given queue’s put or service procedure. A queue’s put or service procedure normally does one of the following:

  • pass the message along to an adjacent queue with putnext(9) or qreply(9);
  • process and consume the message by deallocating it with freemsg(9);
  • place the message on the queue from the put procedure with putq(9) or from the service procedure using putbq(9).

Once placed on a queue, a message exists only on that queue and all other references to the message are dropped.

Only one reference to a message block (msgb(9)) exists within the STREAMS framework. Additional references to the same data block (datab(9)) and data buffer can be established by duplicating the message block (msgb(9)) without duplicating either the data block (datab(9)) or the data buffer. The STREAMS dupb(9) and dupmsg(9) utilities can be used to duplicate message blocks. Also, the entire 3-tuple of message block, data block and data buffer can be copied using the copyb(9) and copymsg(9) STREAMS utilities.

When a message is first allocated, it is the responsibility of the allocating procedure to either pass the message to a queue put procedure, place the message on its own message queue, or free the message. When a message is removed from a message queue, the reference then becomes the responsibility of the procedure that removed it from the queue. Under special circumstances, it might be necessary to temporarily store a reference to a standalone message in a module private data structure, however, this is usually not necessary.

When a message has been placed on a queue, it is linked into the list of messages already on the queue. Messages that exist on a message queue await processing by the queue’s service procedure. Essentially, queue put procedures are a way of performing immediate message processing, and placing a message on a message queue for later processing by the queue’s service procedure is a way of deferring message processing until a later time: that is, until STREAMS schedules the service procedure for execution.

Two messages linked together on a message queue is illustrated in Figure 105. In the figure, ‘Message 2’ is linked to ‘Message 1’.

Messages on a Message Queue

Figure 105. Messages on a Message Queue

As illustrated in Figure 105, when a message exists on a message queue, the first message block in the message (which can possibly contain a chain of message blocks) is linked into a doubly linked list used by the message queue to order and track messages. The queue structure, queue(9), contains the head and tail pointers for the linked list of messages that reside on the queue. Some of the fields in the first message block, such as the linked list pointers and the message band, are significant only in the first message block of the message and apply to the message as a whole.

Message linkage is described in detail in Message Structure.

1.4.2.3 Message Queueing Priority

This subsection provides a brief overview of message queueing priority.

STREAMS message queues provide the ability to process messages of differing priority. There are three classes of message priority (in order of increasing priority):

  1. Normal messages.
  2. Priority messages.
  3. High-priority messages.

Normal messages are queued in priority band ‘0’. Priority messages are queued in bands greater than zero (‘1’ through ‘255’ inclusive). Messages of a higher ordinal band number are of greater priority. For example, a priority message for band ‘23’ is queued ahead of messages for band ‘22’. Normal and priority messages are subject to flow control within a Stream, and are queued according to priority.

High priority messages are assigned a priority band of ‘0’; however, their message type distinguishes them as high priority messages and they are queued ahead of all other messages. (The priority band for high priority messages is ignored and always set to ‘0’ whenever a high priority message type is queued.) High priority messages are given special treatment within the Stream and are not subjected to flow control; however, only one high priority message can be outstanding for a given transaction or operation within a Stream. The Stream head will discard high priority messages that arrive before a previous high priority message has been acted upon.

Because queue service procedures process messages in the order in which they appear in the queue, messages that are queued toward the head of the queue yield a higher scheduling priority than those toward the tail. High priority messages are queued first, followed by priority messages of descending band numbers, finally followed by normal (band ‘0’) messages.

STREAMS provides independent flow control parameters for normal and priority messages. Normal message flow control parameters are contained in the queue structure itself (queue(9)); priority band parameters, in the auxiliary queue band structure (qband(9)). A set of flow control parameters exists for each band (from ‘0’ to ‘255’).

Because high priority is defined by message type, some message types are available in high-priority/ordinary pairs (e.g., M_PCPROTO/M_PROTO) that perform the same function but differ in priority.

Queueing priority is described in greater detail in Queues and Priority.


1.4.3 Modules

This subsection provides a brief overview of STREAMS modules.

Modules are components of message processing that exist as a unit within a Stream beneath the Stream head. Modules are optional components and zero or more (up to a predefined limit) instances of a module can exist within a given Stream. Each instance of a module has a unique queue pair associated with it that permits the instance to be linked among the other queue pairs in a Stream.

Figure 48 illustrates an instance each of two modules (‘A’ and ‘B’) that are linked within the same Stream. Each module instance consists of a queue pair (‘Ad/Au’ and ‘Bd/Bu’ in the figure). Messages flow from the driver to the Stream head through the upstream queues in each queue pair (‘Au’ and then ‘Bu’ in the figure); and from Stream head to driver through downstream queues (‘Bd’ and then ‘Ad’).

The module provides unique message processing procedures (put and optionally service procedures) for each queue in the queue pair. One set of put and service procedures handles upstream messages; the other set, downstream messages. Each procedure is independent of the others. STREAMS handles the passing of messages, but any other information to be shared between procedures must be passed explicitly by the procedures themselves. Each queue provides a module private pointer that can be used by procedures for maintaining state information or passing other information between procedures.

A Stream in More Detail

Figure 48. A Stream in More Detail

Each procedure can pass messages directly to the adjacent queue in either direction of message flow. This is normally performed with the STREAMS putnext(9) utility. For example, in Figure 48, procedures associated with queue ‘Bd’ can pass messages to queue ‘Ad’; ‘Bu’ to ‘Au’.

Also, procedures can easily locate the other queue in a queue pair and pass messages along the opposite direction of flow. This is normally performed using the STREAMS qreply(9) utility. For example, in Figure 48, procedures associated with queue ‘Ad’ can easily locate queue ‘Au’ and pass messages to ‘Bu’ using qreply(9).

Each queue in a module is associated with messages, processing procedures, and module private data. Typically, each queue in the module has a distinct set of messages, processing procedures and module private data.

Messages

Messages can be inserted into, and removed from, the linked list message queue associated with each queue in the queue pair as they pass through the module. For example, in Figure 48, ‘Message Ad’ exists on the ‘Ad’ queue; ‘Message Bu’, on the ‘Bu’ queue.

Processing Procedures

Each queue in a module queue pair requires that a put procedure be defined for the queue. Upstream or downstream modules, drivers or the Stream head invoke a put procedure of the module when they pass messages to the module along the Stream.

Each queue may optionally provide a service procedure that will be invoked when messages are placed on the queue for later processing by the service procedure. A service procedure is never required if the module put procedure never enqueues a message to either queue in the queue pair.

Either procedure in either queue in the pair can pass messages upstream or downstream and may alter information within the module private data associated with either queue in the pair.

Data

Module processing procedures can make use of a pointer in each queue structure that is reserved for use by the module writer to locate module private data structures. These data structures are typically attached to each queue in the module’s open procedure, and detached in the module’s close procedure. Module private data is useful for maintaining state information associated with the instance of the module and for passing information between procedures.

Modules are described in greater detail in Modules.


1.4.4 Drivers

This subsection provides a brief overview of STREAMS drivers.

The Device component of the Stream is an initial part of the regular Stream (positioned just below the Stream head). Most Streams start out life as a Stream head connected to a driver. The driver is positioned within the Stream at the Stream end. Note that not all Streams require the presence of a driver: a STREAMS-based pipe or FIFO Stream does not contain a driver component.

A driver instance is represented by a queue pair within the Stream, just as for modules. Also, each queue in the queue pair has a message queue, processing procedures, and private data associated with it in the same way as for STREAMS modules. There are three differences that distinguish drivers from modules:

  1. Drivers are responsible for generating and consuming messages at the Stream end.

    Drivers convert STREAMS messages into appropriate software or hardware actions, events and data transfer. As a result, drivers that are associated with a hardware device normally contain an interrupt service procedure that handles the external device specific actions, events and data transfer. Messages are typically consumed at the Stream end in the driver’s downstream put or service procedure, and actions taken or data transferred to the hardware device. Messages are typically generated at the Stream end in the driver’s interrupt service procedure, and inserted upstream using the put(9s) STREAMS utility.

    Software drivers (so-called pseudo-device drivers) are similar to a hardware device driver with the exception that they typically do not contain an interrupt service routine. Pseudo-device drivers are still responsible for consuming messages at the Stream end and converting them into actions and data output (external to STREAMS), as well as generating messages in response to events and data input (external to STREAMS).

    In contrast, modules are intended to operate solely within the STREAMS framework.

  2. Because a driver sits at a Stream end and can support multiplexing, a driver can have multiple Streams connected to it, either upstream (fan-in) or downstream (fan-out) (see Multiplexing of Streams).

    In contrast, an instance of a module is only connected within a single Stream and does not support multiplexing at the module queue pair.

  3. An instance of a driver (queue pair) is created and destroyed using the open(2s) and close(2s) system calls.

    In contrast, an instance of a module (queue pair) is created and destroyed using the I_PUSH and I_POP STREAMS ioctl(2s) commands.

Aside from these differences, the STREAMS driver is similar in most respects to the STREAMS module. Both drivers and modules can pass signals, error codes, return values, and other information to processes in adjacent queue pairs using STREAMS messages of various message types provided for that purpose.

Drivers are described in greater detail in Drivers.


1.4.5 Stream Head

This subsection provides a brief overview of Stream heads.

The Stream head is the first component of a Stream that is allocated when a Stream is created. All Streams have an associated Stream head.

In the case of STREAMS-based pipes, two Stream heads are associated with each other. STREAMS-based FIFOs have one Stream head but no Stream end or Driver. For all other Streams, as illustrated in Figure 48, there exists a Stream head and a Stream end or Driver.

The Stream head has a queue pair associated with it, just as does any other STREAMS module or driver. Also, just as any other module, the Stream head provides the processing procedures and private data for processing of messages passed to queues in the pair.

The difference is that the processing procedures are provided by the GNU/Linux system rather than being written by the module or driver writer. These system-provided processing procedures perform the necessary functions to generate messages to, and consume messages from, the Stream in response to system calls invoked by a user process. Also, a set of specialized behaviours is provided, along with a set of specialized message types that may be exchanged with modules and drivers in the Stream, to provide the standard interface expected by the user application.

Stream heads are described in greater detail in Mechanism, Polling, Pipes and FIFOs, and Terminal Subsystem.


1.5 Multiplexing

This subsection provides a brief overview of Stream Multiplexing.

Basic Streams that can be created with the open(2s) or pipe(2s) system calls are linear arrangements from Stream head to Driver or Stream head to Stream head. Although these linear arrangements satisfy the needs of a large class of STREAMS applications, there exists a class of applications that is more naturally represented by multiplexing: that is, arrangements where one or more upper Streams feed into one or more lower Streams. Network protocol stacks (a significant application area for STREAMS) are typically more easily represented by multiplexed arrangements.

A fan-in multiplexing arrangement is one in which multiple upper Streams feed into a single lower Stream in a many-to-one relationship as illustrated in Figure 49.

Many-to-one Multiplexor

Figure 49. Many-to-one Multiplexor

A fan-out multiplexing arrangement is one in which a single upper Stream feeds into multiple lower Streams in a one-to-many relationship as illustrated in Figure 50. (This is the more typical arrangement for communications protocol stacks.)

One-to-many Multiplexor

Figure 50. One-to-many Multiplexor

A fan-in/fan-out multiplexing arrangement is one in which multiple upper Streams feed into multiple lower Streams in a many-to-many relationship as illustrated in Figure 51.

Many-to-many Multiplexor

Figure 51. Many-to-many Multiplexor

To support these arrangements, STREAMS provides a mechanism that can be used to assemble multiplexing arrangements in a flexible way. An otherwise normal STREAMS pseudo-device driver can be designated a multiplexing driver.

Conceptually, a multiplexing driver can perform upper multiplexing between multiple Streams on its upper side connecting the user process and the multiplexing driver, and lower multiplexing between multiple Streams on its lower side connecting the multiplexing driver and the device driver.

As with normal STREAMS drivers, multiplexing drivers can have multiple Streams created on their upper side using the open(2s) system call. Unlike regular STREAMS drivers, however, multiplexing drivers have the additional capability that other Streams can be linked to the lower side of the driver. The linkage is performed by issuing specialized streamio(7) commands to the driver that are recognized by multiplexing drivers (I_LINK, I_PLINK, I_UNLINK, I_PUNLINK).

Any Stream can be linked under a multiplexing driver (provided that it is not already linked under another multiplexing driver). This includes an upper Stream of a multiplexing driver. In this fashion, complex trees of multiplexing drivers and linear Stream segments containing pushed modules can be assembled. Using these linkage commands, complex arrangements can be assembled, manipulated and dismantled by a user or daemon process to suit application needs.

The fan-in arrangement of Figure 49 performs upper multiplexing; the fan-out arrangement of Figure 50, lower multiplexing; and the fan-in/fan-out arrangement of Figure 51, both upper and lower multiplexing.

1.5.1 Fan-Out Multiplexers

Figure 47 illustrates an example, closely related to the fan-out arrangement of Figure 50, where the Internet Protocol (IP) within a networking stack is implemented as a multiplexing driver and independent Streams to three specific device drivers are linked beneath the IP multiplexing driver.

Internet Multiplexing Stream

Figure 47. Internet Multiplexing Stream

The IP multiplexing driver is capable of routing messages to the lower Streams on the basis of address and the subnet membership of each device driver. Messages received from the lower Streams can be discriminated and sent to the appropriate user process upper Stream (e.g. on the basis of, say, protocol id). Each lower Stream, ‘Driver 1’, ‘Driver 2’, ‘Driver 3’, presents the same service interface to the IP multiplexing driver, regardless of the specific hardware or lower level communications protocol supported by the driver. For example, the lower Streams could all support the Data Link Provider Interface (DLPI).

As depicted in Figure 47, the IP multiplexing driver could have additional multiplexing drivers or modules above it. Also, ‘Driver 1’, ‘Driver 2’ or ‘Driver 3’ could themselves be multiplexing drivers (or replaced by multiplexing drivers). In general, multiplexing drivers are independent in the sense that it is not necessary that a given multiplexing driver be aware of other multiplexing drivers upstream of its upper Stream, nor downstream of its lower Streams.

1.5.2 Fan-In Multiplexers

Figure 52 illustrates an example, more closely related to the fan-in arrangement of Figure 49, where an X.25 Packet Layer Protocol multiplexing driver is used to switch messages between upper Streams supporting Permanent Virtual Circuits (PVCs) or Switched Virtual Circuits (SVCs) and (possibly) a single lower Stream.

Multiplexing Stream

Figure 52. Multiplexing Stream

The ability to multiplex upper Streams to a driver is a characteristic supported by all STREAMS drivers, not just multiplexing drivers. Each open(2s) of a minor device node results in another upper Stream that can be associated with the device driver. What the multiplexing driver adds, beyond the capabilities of a normal STREAMS driver, is the ability to link one or more lower Streams (possibly containing modules and another multiplexing driver) beneath it.

1.5.3 Complex Multiplexers

When constructing multiplexers for applications, even more complicated arrangements are possible. Multiplexing over multiple Streams on both the upper and lower side of a multiplexing driver is possible. Also, a driver that provides lower multiplexing can be linked beneath a driver that provides upper multiplexing, as depicted by the dashed box in Figure 52. Each multiplexing driver can perform upper multiplexing, lower multiplexing, or both, providing flexibility for the designer.

STREAMS provides multiplexing as a general purpose facility that is flexible in that multiplexing drivers can be stacked and linked in a wide array of complex configurations. STREAMS imposes few restrictions on processing within the multiplexing driver, making the mechanism applicable to many classes of applications.

Multiplexing is described in greater detail in Multiplexing.


1.6 Benefits of STREAMS

STREAMS provides a flexible, scalable, portable, and reusable kernel and user level facility for the development of GNU/Linux system communications services. STREAMS allows the creation of kernel resident modules that offer standard message passing facilities and the ability for user level processes to manipulate and configure those modules into complex topologies. STREAMS offers a standard way for user level processes to select and interconnect STREAMS modules and drivers in a wide array of combinations without the need to alter Linux kernel code, recompile or relink the kernel.

STREAMS also assists in simplifying the user interface to device drivers and protocol stacks by providing powerful system calls for the passing of control information from user to driver. With STREAMS it is possible to directly implement asynchronous primitive-based service interfaces to protocol modules.


1.6.1 Standardized Service Interfaces

Many modern communications protocols define a service primitive interface between a service user and a service provider. Examples include the ISO Open Systems Interconnect (OSI) and protocols based on OSI such as Signalling System Number 7 (SS7). Protocols based on OSI can be directly implemented using STREAMS.

In contrast to other approaches, such as BSD Sockets, STREAMS does not impose a structured function call interface on the interaction between a user level process and a kernel resident protocol module. Instead, STREAMS permits the service interface between a service user and service provider (whether the service user is a user level process or kernel resident STREAMS module) to be defined in terms of STREAMS messages that represent standardized service primitives across the interface.

A service interface is defined13 at the boundary between neighbouring modules. The upper module at the boundary is termed the service user and the lower module at the boundary is termed the service provider. Implemented under STREAMS, a service interface is a specified set of messages and the rules that allow passage of these messages across the boundary. A STREAMS module or driver that implements a service interface will exchange messages within the defined set across the boundary and will respond to received messages in accordance with the actions defined for the specific message and the sequence of messages preceding receipt of the message (i.e., in accordance with the state of the module).

Instances of protocol stacks are formed using STREAMS facilities for pushing modules and linking multiplexers. For proper and consistent operation, protocol stacks are assembled so that each neighbouring module, driver and multiplexer implements the same service interface. For example, a module that implements the SS7 MTP protocol layer, as shown in Figure 53, presents a protocol service interface at its input and output sides. Other modules, drivers and multiplexers should only be connected at the input and output sides of the SS7 MTP protocol module if they provide the same interface in the symmetric role (i.e., user or provider).

It is the ability of STREAMS to implement service primitive interfaces between protocol modules that makes it most appropriate for implementation of protocols based on the OSI service primitive interface, such as X.25, Integrated Services Digital Network (ISDN), and Signalling System No. 7 (SS7).


1.6.2 Manipulating Modules

STREAMS provides the ability to manipulate the configuration of drivers, modules and multiplexers from user space, easing configuration of protocol stacks and profiles. Modules, drivers and multiplexers implementing common service interfaces can be substituted with ease. User level processes may access the protocol stack at various levels using the same set of standard system calls, while also permitting the service interface to the user process to match that of the topmost module.

It is this flexibility that makes STREAMS well suited to the implementation of communications protocols based on the OSI service primitive interface model. Additional benefits for communications protocols include:

  • User level programs use a service interface that is independent of underlying protocols, drivers, device implementation, and physical communications media.
  • Communications architecture and upper layer protocols can be independent of underlying protocol, drivers, device implementation, and physical communications media.
  • Communications protocol profiles can be created by selecting and connecting constituent lower layer protocols and services.

The benefits of the STREAMS approach are protocol portability, protocol substitution, protocol migration, and module reuse. Examples provided in the sections that follow are real-world examples taken from the open source Signalling System No. 7 (SS7) stack implemented by the OpenSS7 Project.


1.6.2.1 Protocol Portability

Figure 53 shows how the same SS7 Signalling Link protocol module can be used with different drivers on different machines by implementing compatible service interfaces. The service interfaces used by the SS7 Signalling Link module are the Data Link Provider Interface (DLPI) and the Communications Device Interface (CDI) for High-Level Data Link Control (HDLC).

Protocol Module Portability

Figure 53. Protocol Module Portability

By using standard STREAMS mechanisms for the implementation of the SS7 Signalling Link module, only the driver needs to be ported to port an entire protocol stack from one machine to another. The same SS7 Signalling Link module (and upper layer modules) can be used on both machines.

Because the Driver presents a standardized service interface using STREAMS, porting a driver from the machine architecture of ‘Machine A’ to that of ‘Machine B’ consists of changes internal to the driver and external to the STREAMS environment. Machine dependent issues, such as bus architectures and interrupt handling are kept independent of the primary state machine and service interface. Porting a driver from one major UNIX or UNIX-like operating system and machine architecture supporting STREAMS to another is a straightforward task.

With OpenSS7, STREAMS provides the ability to directly port a large body of existing STREAMS modules to the GNU/Linux operating system.


1.6.2.2 Protocol Substitution

STREAMS permits the easy substitution of protocol modules (or device drivers) within a protocol stack, providing a new protocol profile. When protocol modules are implemented to a compatible service interface they can be recombined and substituted, providing a flexible protocol architecture. In some circumstances, and through proper design, protocol modules can be substituted that implement the same service interface, even if they were not originally intended to be combined in such a fashion.

Protocol Substitution

Figure 300. Protocol Substitution

Figure 300 illustrates how STREAMS can substitute upper layer protocol modules to implement a different protocol stack over the same HDLC driver. As each module and driver support the same service interface at each level, it is conceivable that the resulting modules could be recombined to support, for example, SS7 MTP over an ISDN LAPB channel.14

Another example would be substituting an M2PA signalling link module for a traditional SS7 Signalling Link Module to provide SS7 over IP.


1.6.2.3 Protocol Migration

Figure 54 illustrates how STREAMS can move functions between kernel software and front end firmware. A common downstream service interface allows the transport protocol module to be independent of the number or type of modules below. The same transport module will connect without modification to either an SS7 Signalling Link module or SS7 Signalling Link driver that presents the same service interface.

Protocol Migration

Figure 54. Protocol Migration

The OpenSS7 SS7 Stack uses this capability also to adapt the protocol stack to front-end hardware that supports differing degrees of SS7 Signalling Link support in firmware. Hardware cards that support as much as a transparent bit stream can have SS7 Signalling Data Link, SS7 Signalling Data Terminal and SS7 Signalling Link modules pushed to provide a complete SS7 Signalling Link that might, on another hardware card, be mostly implemented in firmware.

By shifting functions between software and firmware, developers can produce cost effective, functionally equivalent systems over a wide range of configurations. They can rapidly incorporate technological advances. The same upper layer protocol module can be used on a lower capacity machine, where economics may preclude the use of front-end hardware, and also on a larger scale system where a front-end is economically justified.


1.6.2.4 Module Reusability

Figure 55 shows the same canonical module (for example, one that provides delete and kill processing on character strings) reused in two different Streams. This module would typically be implemented as a filter, with no downstream service interface. In both cases, a tty interface is presented to the Stream’s user process since the module is nearest the Stream head.

Module Reusability

Figure 55. Module Reusability


2 Overview


2.1 Definitions


2.2 Concepts


2.3 Application Interface


2.4 Kernel Level Facilities


2.5 Subsystems


3 Mechanism

This chapter describes how applications programs create and interact with a Stream using traditional and standardized STREAMS system calls. General system call and STREAMS-specific system calls provide the interface required by user level processes when implementing user level applications programs.


3.1 Mechanism Overview

The system call interface provided by STREAMS is upward compatible with the traditional character device system calls.

STREAMS devices appear as character device nodes within the file system in the GNU/Linux system. The open(2s) system call recognizes that a character special file is a STREAMS device, creates a Stream and associates it with a device in the same fashion as a character device.

Once open, a user process can send and receive data to and from the STREAMS special file using the traditional write(2s) and read(2s) system calls in the same manner as is performed on a traditional character device special file.

Character device input-output controls using the ioctl(2s) system call can also be performed on a STREAMS special file. STREAMS defines a set of standard input-output control commands (see ioctl(2p) and streamio(7)) specific to STREAMS special files. Input-output controls that are defined for a specific device are also supported as they are for character device drivers.

With support for these general character device input and output system calls, it is possible to implement a STREAMS device driver in such a way that an application is unaware that it has opened and is controlling a STREAMS device driver: the application could treat the device in the identical manner to a character device. This makes it possible to convert an existing character device driver to STREAMS, and makes possible the portability, migration, substitution and reuse benefits of the STREAMS framework.

STREAMS provides STREAMS-specific system calls and ioctl(2s) commands, in addition to support for the traditional character device I/O system calls and ioctl(2s) commands.

The poll(2s) system call15 provides the ability for the application to poll multiple Streams for a wide range of events.

The putmsg(2s) and putpmsg(2s) system calls provide the ability for applications programs to transfer both control and data information to the Stream. The write(2s) system call only supports the transfer of data to the Stream, whereas, putmsg(2s) and putpmsg(2s) permit the transfer of prioritized control information in addition to data.

The getmsg(2s) and getpmsg(2s) system calls provide the ability for applications programs to receive both control and data information from the Stream. The read(2s) system call can only support the transfer of data (and in some cases the inline control information), whereas, getmsg(2s) and getpmsg(2s) permit the transfer of prioritized control information in addition to data.

Implementation of standardized service primitive interfaces is enabled through the use of the putmsg(2s), putpmsg(2s), getmsg(2s) and getpmsg(2s) system calls.

STREAMS also provides kernel level utilities and facilities for the development of kernel resident STREAMS modules and drivers. Within the STREAMS framework, the Stream head is responsible for conversion between STREAMS messages passed up and down a Stream and the system call interface presented to user level applications programs. The Stream head is common to all STREAMS special files, and the conversion between the system call interface and messages passed on the Stream does not have to be reimplemented by the module or device driver writer, as is the case for traditional character device I/O.

3.1.1 STREAMS System Calls

The STREAMS-related system calls are:

open(2s)Open a STREAMS special file and create a new (or access an existing) Stream.
close(2s)Close a STREAMS special file and possibly cause the destruction of a Stream (i.e., on the last close of the Stream).
read(2s)Read data from an open Stream.
write(2s)Write data to an open Stream.
ioctl(2s)Control an open Stream.
getmsg(2s), getpmsg(2s)Receive a (prioritized) message at the Stream head.
putmsg(2s), putpmsg(2s)Send a (prioritized) message from the Stream head.
poll(2s)Receive notification when selected events occur on one or more Streams.
pipe(2s)Create a channel that provides a STREAMS-based bidirectional communication path between multiple processes.

3.2 Stream Construction

STREAMS constructs a Stream as a doubly linked list of kernel data structures. Elements of the linked list are queue pairs that represent the instantiation of a Stream head, modules and drivers. Linear segments of linked queue pairs can be connected to multiplexing drivers to form complex tree topologies. The branches of the tree are closest to the user level process and the roots of the tree are closest to the device driver.

The uppermost queue pair of a Stream represents the Stream head. The lowermost queue pair of a Stream represents the Stream end or device driver, pseudo-device driver, or another Stream head in the case of a STREAMS-based pipe.

The Stream head is responsible for conversion between a user level process using the system call interface and STREAMS messages passed up and down the Stream. The Stream head uses the same set of kernel routines available to module and driver writers to communicate with the Stream via the queue pair associated with the Stream head.

Figure 13 illustrates the queue pairs in the most basic of Streams: one consisting of a Stream head and a Stream end. Depicted are the upstream (read) and downstream (write) paths along the Stream. Of the uppermost queue pair illustrated, ‘H1’ is the upstream (read) half of the Stream head queue pair; ‘H2’, the downstream (write) half. Of the lowermost queue pair illustrated, ‘E2’ is the upstream half of the Stream end queue pair; ‘E1’, the downstream half.

Upstream and Downstream Stream Construction

Figure 13. Upstream and Downstream Stream Construction

Each queue specifies an entry point (that is, a procedure) that will be used to process messages arriving at the queue. The procedures for queues ‘H1’ and ‘H2’ process messages sent to (or that arrive at) the Stream head. These procedures are defined by the STREAMS subsystem and are responsible for the interface between STREAMS related system calls and the Stream. The procedures for queues ‘E1’ and ‘E2’ process messages at the Stream end. These procedures are defined by the device driver, pseudo-device driver, or Stream head at the Stream end (tail). In accordance with the procedures defined for each queue, messages are processed by the queue and typically passed from queue to queue along the linked list segment.

Figure 14 details the data structures involved. The data structures are the queue(9), qband(9), qinit(9), module_info(9) and module_stat(9) structures.

The queue(9) structure is the primary data structure associated with the queue. It contains a doubly linked list (message queue) of messages contained on the queue. It also includes pointers to other queues used in Stream linkage, queue state information and flags, and pointers to the qband(9) and qinit(9) structures associated with the queue.

The qband(9) structure is used as an auxiliary structure to the queue(9) structure that contains state information and pointers into the message list for each priority band within a queue (except for band ‘0’ information, which is contained in the queue(9) structure itself). qband(9) structures are linked into a list and connected to the queue(9) structure to which they belong.

The qinit(9) structure is defined by the module or driver and contains procedure pointers for the procedures associated with the queue, as well as pointers to module or driver information and initialization limits contained in the module_info(9) structure as well as an optional pointer to a module_stat(9) structure that contains collected run-time statistics for the entire module or driver. Normally, a separate qinit(9) structure exists for all of the upstream and downstream instances of a queue associated with a driver or module.

The module_info(9) structure contains information about the module or driver, such as module identifier and module name, as well as minimum and maximum packet size and queue flow control high and low water marks. It is important to note that this structure is used only to initialize the corresponding limit values for an instance of the queue(9) structure. The values contained within a particular queue(9) structure can be changed in a running module or driver without affecting the module_info(9) structure. The module_info(9) structure is considered to be a read-only structure for the purpose of modules and drivers written for STREAMS.

The module_stat(9) structure contains runtime counts of the entry into the various procedures contained in the qinit(9) structure as well as a pointer to any module private statistics that need to be collected. As depicted in Figure 14, there is normally only one module_stat(9) structure per queue pair that collects statistics for the entire module or driver. STREAMS does not peg these counts automatically and will not manipulate this structure, even when one is attached. It is the responsibility of the module or driver writer to peg counts as required. OpenSS7 does, however, provide some user level administrative tools that can be used to examine the statistics contained in this structure. The module_stat(9) structure is opaque to the STREAMS subsystem and can be read from or written to by module or driver procedures.

Stream Queue Relationship

Figure 14. Stream Queue Relationship

Note that it is possible to have a separate qinit(9), module_info(9) and module_stat(9) structure for each queue in the queue pair; however, typically there are two qinit(9) structures and only one module_info(9) and module_stat(9) structure per module or driver. qinit(9), module_info(9) and module_stat(9) structures are statically allocated by the module or driver, and the queue(9) and qband(9) structures are dynamically allocated by STREAMS on demand.

All of these queue related data structures are described in Data Structures (and in the OpenSS7 Manual Pages).

Figure 14 illustrates two adjacent queue pairs with links between them in both directions on the Stream. When a module is opened, STREAMS creates a queue pair for the module and then links the queue pair into the list. Each queue is linked to the next queue in the direction of message flow. The q_next member of the queue(9) data structure is used to perform the linkage. STREAMS allocates queue(9) structures in pairs (that is, as an array containing two queue(9) structures). The read-side queue of the pair is the lower ordinal and the write-side the higher. Nevertheless, STREAMS provides some utility functions (or macros) that assist queue procedures in locating the other queue in the pair. The Stream head and Stream end are known to procedures only as destinations toward which messages are sent.16

There are two ways for the user level process to construct a Stream:

  1. Open a STREAMS device special file using the open(2s) system call. Construction of a Stream with the open(2s) system call is detailed in Opening a STREAMS Device File and Opening a STREAMS-based FIFO and illustrated in Figure 15.
  2. Create a STREAMS-based pipe using the pipe(2s) system call. Construction of a Stream with the pipe(2s) system call is detailed in Creating a STREAMS-based Pipe and illustrated in Figure 16.

3.2.1 Opening a STREAMS Device File

A Stream is constructed when a STREAMS-based driver file is opened using the open(2s) system call. A Stream constructed in this fashion is illustrated in Figure 15.

In the traditional UNIX system, a STREAMS-based driver file is a character device special file within the UNIX file system. In the GNU/Linux system, under OpenSS7, a STREAMS-based driver file is either a character device special file within a GNU/Linux file system, or a character device special file within the mounted Shadow Special File System (specfs). When the specfs is mounted, specfs device nodes can be opened directly. When the specfs is not mounted, specfs device nodes can only be opened indirectly via character device nodes in a GNU/Linux file system external to the specfs.

All STREAMS drivers (and modules) have their entry points defined by the streamtab(9) structure for that driver (or module). The streamtab structure has the following format:

struct streamtab {
    struct qinit *st_rdinit;
    struct qinit *st_wrinit;
    struct qinit *st_muxrinit;
    struct qinit *st_muxwinit;
};

The streamtab structure defines a module or driver. st_rdinit points to the read qinit structure for the driver and st_wrinit points to the driver’s write qinit structure. For a multiplexing driver, the st_muxrinit and st_muxwinit point to the qinit structures for the lower side of the multiplexing driver. For a regular non-multiplexing driver these members are NULL.

Opened STREAMS-based Driver

Figure 15. Opened STREAMS-based Driver

3.2.1.1 First Open of a Stream

When a STREAMS-based file is opened, a new Stream is created if one does not already exist for the file, or if the D_CLONE flag is set for the file indicating that a new Stream is to be created on each open of the file. First, a file descriptor is allocated in the process’ file descriptor table, and a file pointer is allocated to represent the opened file. The file pointer is initialized to point to the inode associated with the character special file in the external file system (see f_inode in Figure 15). This inode is of type character special (S_IFCHR). The Linux kernel recognizes the inode as a character special file and invokes the character device open routine in OpenSS7. This inode is equivalent to the vnode used by UNIX System V Release 4.2.

OpenSS7 uses the major and minor device numbers associated with the character special file to locate an inode within the Shadow Special File System (specfs) that is also provided by OpenSS7, and the f_inode pointer of the file pointer is adjusted to point directly to this specfs inode. This specfs inode is equivalent to the common snode used by UNIX System V Release 4.2.

Next, a Stream header is created from a stdata(9) data structure and a Stream head is created from a pair of queue structures. The content of the stdata data structure is initialized with predetermined STREAMS values applicable to all character special Streams. The content of the queue data structures in the Stream head are initialized with values from the streamtab structure statically defined for Stream heads in the same manner as any STREAMS module or driver.

The inode within the specfs contains STREAMS file system dependent information. This inode corresponds to the common snode of UNIX System V Release 4.2. The sd_inode field of the stdata structure is initialized to point to this inode. The i_pipe field of the inode data structure is initialized to point to the Stream header (stdata structure); thus there is a forward and backward pointer between the Stream header and the inode.

The private_data member of the file pointer is initialized to point to the Stream header and the sd_file member of the stdata structure is initialized to point to the file pointer.

After the Stream header and Stream head queue pair is allocated and initialized, a queue structure pair is allocated and initialized for the driver. Each queue in the queue pair has its q_init pointer initialized to the corresponding qinit structure defined in the driver’s streamtab. Limit values in each queue in the pair are initialized from the queue’s module_info structure, now accessible via the q_init pointer in the queue structure and the qi_minfo pointer in the qinit structure.

The q_next pointers in each queue structure are set so that the Stream head write queue points to the driver write queue and the driver read queue points to the Stream head read queue. The q_next pointers at the ends of the Stream are set to NULL. Finally, the driver open procedure (accessible via the qi_qopen member of the qinit structure for the read-side queue) is called.

3.2.1.2 Subsequent Open of a Stream

When the Stream has already been created by a call to open(2s) and has not yet been destroyed, that is, on a subsequent open of the Stream, and the STREAMS driver is not marked for clone open with the D_CLONE flag in the cdevsw(9) structure, the only actions performed are to call the driver’s open procedure and the open procedures of all pushable modules present on the already existing Stream.


3.2.2 Opening a STREAMS-based FIFO

A STREAMS-based FIFO Stream is also constructed with a call to open(2s). A Stream constructed in this fashion is illustrated in Figure 15b.

A STREAMS-based FIFO appears as a FIFO special file within a GNU/Linux file system, as a character special file within a GNU/Linux file system, or as a FIFO special file within the Shadow Special File System (specfs).17

Figure 15b illustrates a STREAMS-based FIFO that has been opened and a Stream created.

Opened STREAMS-based FIFO

Figure 15b. Opened STREAMS-based FIFO

The sequence of events that cause the creation of a Stream when a STREAMS-based FIFO is opened using the open(2s) system call is the same as that for regular STREAMS device special files, with the following differences:

  1. When the Stream header (stdata structure) is created, it is attached to the external GNU/Linux file system inode instead of an inode within the Shadow Special File System (specfs). This is also true of the file pointer: that is, the file pointer refers to the external file system inode instead of a specfs inode. The result is illustrated in Figure 15b.
  2. The Stream header (stdata structure) is initialized with limits and values appropriate for a STREAMS-based FIFO rather than a regular STREAMS driver. This is because the behaviour of a STREAMS-based FIFO Stream head must be somewhat different from a regular STREAMS driver to be compliant with POSIX.18
  3. No driver queue pair is created or attached to the Stream. The Stream head write-side queue q_next pointer is set to the read-side queue as illustrated in Figure 15b.

Aside from these differences, opening a STREAMS-based FIFO is structurally equivalent to opening a regular STREAMS driver. The similarity makes it possible to also implement STREAMS-based FIFOs as character special files.


3.2.3 Creating a STREAMS-based Pipe

A Stream is also constructed when a STREAMS-based pipe is created using the pipe(2s) system call.19 A Stream constructed in this fashion is illustrated in Figure 16.

Created STREAMS-based Pipe

Figure 16. Created STREAMS-based Pipe

Pipes have no inode in an external GNU/Linux file system that can be opened with the open(2s) system call and, therefore, they must be created with a call to pipe(2s).20 When the pipe(2s) system call is executed, two Streams are created. The construction of each Stream is similar to that when a STREAMS driver is opened, with the following differences:

  • Instead of creating one process file table entry and one file pointer, as was the case for regular STREAMS drivers, pipe(2s) creates two file table entries (file descriptors) and two file pointers, as shown in Figure 16.
  • Because a character special device is not being opened, there is no inode in an external file system, so STREAMS allocates two inodes from the specfs.21 Each inode has a file type of S_IFIFO. The file pointer and stdata structure for each Stream header is attached to one of these inodes.
  • When the Stream header associated with each file descriptor is initialized, the stdata structure is initialized with values appropriate for a STREAMS-based pipe instead of a regular Stream.22
  • Instead of creating a driver queue pair for the Stream, the q_next pointer for the write-side queue of each Stream head is initialized to point to the read-side queue of the other Stream head. This is illustrated in Figure 16.

3.2.4 Adding and Removing Modules

When a Stream has been constructed, modules can be inserted into the Stream between the Stream head and the Stream end (or between the Stream head and the midpoint of a STREAMS-based pipe or FIFO). Addition (or pushing) of modules is accomplished by inserting the module into the Stream immediately below the Stream head. Removal (or popping) of modules is accomplished by deleting the module immediately below the Stream head from the Stream.

When a module is pushed onto a Stream, the module’s open procedure is called for the newly inserted queue pair. When a module is popped from the Stream, the module’s close procedure is called prior to deleting the queue pair from the Stream.

Modules are pushed onto an open Stream by issuing the I_PUSH(7) (see streamio(7)) ioctl(2s) command on the file descriptor associated with the open Stream. Modules are popped from a Stream with the I_POP(7) (see streamio(7)) ioctl(2s) command on the file descriptor associated with the open Stream.

I_PUSH and I_POP allow a user level process to dynamically reconfigure the ordering and type of modules on a Stream to meet any requirement.


3.2.4.1 Pushing Modules

When the Stream head receives an I_PUSH ioctl command, STREAMS locates the module’s streamtab entry and creates a new queue pair to represent the instance of the module. Each queue in the pair is initialized in a similar fashion as for drivers: the q_init pointers are initialized to point to the qinit structures of the module’s streamtab, and the limit values are initialized to the values found in the corresponding module_info structures.

Next, STREAMS positions the module’s queue pair in the Stream immediately beneath the Stream head and above the driver and all existing modules on the Stream. Then the module’s open procedure is called for the queue pair. (The open procedure is located in the qi_qopen member of the qinit structure associated with the read-side queue.)

Each push of a module onto a Stream results in the insertion of a new queue pair representing a new instance of the module. If a module is (successfully) pushed twice on the same Stream, two queue pairs and two instances of the module will exist on the Stream.

To assist in identifying misbehaving application programs that might push the same set of modules in an indefinite loop, swallowing an excessive amount of system resources, STREAMS imposes a practical limit on the number of modules that can be pushed on a given Stream. The limit is determined by the NSTRPUSH kernel parameter (see Configuration), which is set to either ‘16’ or ‘64’ on most systems.

Once an instance of a module is pushed on a Stream, its open procedure will be called each time that the Stream is reopened.


3.2.4.2 Popping Modules

When the Stream head receives an I_POP ioctl command, STREAMS locates the module directly beneath the Stream head and calls its close procedure. (The close procedure is located by the qi_qclose member of the qinit structure associated with the module instance’s read-side queue.) Once the close procedure returns, STREAMS deletes the queue pair from the Stream and deallocates it.


3.2.5 Closing the Stream

Relinquishing the last reference to a Stream dismantles the Stream and deallocates its components. Normally, the last direct or indirect call to close(2s) for a Stream results in the Stream being dismantled in this fashion.23 Calls to close(2s) before the last close of a Stream will not result in the dismantling of the Stream and no module or driver close procedure will be called on closes prior to the last close of a Stream.

Dismantling a Stream consists of the following sequence of actions:

  1. If the Stream is a STREAMS-based pipe and the other end of the pipe is not open by any process, but is named (i.e., mounted by fattach(3)), then the named end of the pipe is detached as with fdetach(3) and then the Stream is dismantled.
  2. If the Stream is a multiplexing driver, dismantling a Stream first consists of unlinking any Streams that remain temporarily linked (by a previous I_LINK command) under the multiplexing driver using the control stream being closed. Unlinking of temporary links consists of issuing an M_IOCTL message to the driver indicating the I_UNLINK operation and entering an uninterrupted wait for an acknowledgement. Waiting for acknowledgement to the M_IOCTL command can cause the close to be delayed. If unlinking any temporary links results in the last reference being released to the now unlinked Stream, that Stream will be dismantled before proceeding.
  3. Each module that is present on the Stream being dismantled will be popped from the Stream by calling the module’s close procedure and then deleting the module instance queue pair from the Stream.
  4. If a driver exists on the Stream being dismantled, the driver’s close procedure is called and then the Stream end queue pairs are deallocated.

    If the Stream invoking the chain of events that resulted in the dismantling of a Stream is open for blocking operation (neither O_NDELAY nor O_NONBLOCK were set), no signal is pending for the process causing dismantling of the Stream, and there are messages on the module or driver’s write-side queue, STREAMS may wait for an interval for the messages to drain before calling the module or driver’s close procedure. The maximum interval to wait is traditionally ‘15’ seconds. If any of these conditions are not met, the module or driver is closed immediately.

    When each module or driver queue pair is deallocated, any messages that remain on the queue are flushed prior to deallocation. Note that STREAMS frees only the messages contained on a message queue: any message or data structures used internally by the driver or module must be freed by the driver or module before it returns from its close procedure.

  5. The queue pair associated with the Stream head is closed24 and the queue pair and Stream header (stdata structure) are deallocated and the associated inode, file pointer, and file descriptors are released.

3.2.6 Stream Construction Example

This Stream construction example builds on the previous example (see Listing 1.1 in Basic Streams Operations), by adding the pushing of a module onto the open Stream.


3.2.6.1 Inserting Modules

This example demonstrates the ability of STREAMS to push modules, which is not available with traditional character devices. The ability to push modules onto a Stream allows the independent processing and manipulation of data passing between the driver and the user level process. The example is of a character conversion module that is given a command and a string of characters by the user. Once this command is received, the character conversion module examines all characters passing through it for an occurrence of the characters in the command string. When an instance of the string is discovered in the data path, the requested command action is performed on the matching characters.

The declarations for the user program are shown in Listing 3.1.

#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdio.h>
#include <sys/uio.h>
#include <string.h>
#include <fcntl.h>
#include <sys/stropts.h>

#define   BUFLEN      1024

/*
 *  These defines would typically be
 *  found in a header file for the module
 */
#define   XCASE       1         /* change alphabetic case of char */
#define   DELETE      2         /* delete char */
#define   DUPLICATE   3         /* duplicate char */

main()
{
    char buf[BUFLEN];
    int fd, count;
    struct strioctl strioctl;

Listing 3.1: Inserting Modules Example

As in the previous example of Listing 1.1, first a Stream is opened using the open(2s) system call. In this example, the STREAMS device driver is /dev/streams/comm/01.

    if ((fd = open("/dev/streams/comm/01", O_RDWR)) < 0) {
        perror("open failed");
        exit(1);
    }

Listing 3.2: Inserting Modules Example (cont’d)

Next, the character conversion module (named chconv) is pushed onto the open Stream using the I_PUSH(7) (see streamio(7)) ioctl(2s) command.

    if (ioctl(fd, I_PUSH, "chconv") < 0) {
        perror("ioctl  I_PUSH failed");
        exit(2);
    }

Listing 3.3: Inserting Modules Example (cont’d)

The differences in creating an instance of a STREAMS driver and module are illustrated in Listing 3.2 and Listing 3.3. An instance of a driver is created with the open(2s) system call, and each driver requires at least one device node in a file system for access. Naming of device nodes follows device naming conventions. On the other hand, an instance of a module is created with the I_PUSH(7) (see streamio(7)) ioctl(2s) command. No file system device node is required. Naming of modules is separate from any file system considerations; names are chosen by the module writer. The only restrictions on a module name are that it be less than FMNAMESZ characters in length, and that it be unique.

When successful, the I_PUSH(7) (see streamio(7)) ioctl(2s) call directs STREAMS to locate and insert the STREAMS module named chconv onto the Stream. If the chconv module has not been loaded into the Linux kernel, OpenSS7 will attempt to demand load the kernel module named streams-chconv. Once the chconv STREAMS module is loaded in the kernel, STREAMS will create a queue pair for the instance of the module, insert it into the Stream beneath the Stream head, and call the module’s open procedure. If the module’s open procedure returns an error (typically only [ENXIO]), that error will be returned to the ioctl(2s) call. If the module’s open procedure is successful, it (and the ioctl(2s) call), return ‘0’. The resulting Stream configuration is illustrated in Figure 17.

Case Converter Module

Figure 17. Case Converter Module

Modules are always pushed and popped from the position immediately beneath the Stream head in the manner of a push-down stack. This results in a Last-In-First-Out (LIFO) order of modules being pushed and popped. For example, if another module were to be pushed on the Stream illustrated in Figure 17, it would be placed between the Stream head and the Character Converter module.


3.2.6.2 Module and Driver Control

The next steps in this example are to pass control information to the module to tell it what command to execute on which string of characters. A sequence that achieves this is shown in Listing 3.4. The sequence makes use of the I_STR(7) (see streamio(7)) ioctl(2s) command for STREAMS special files.

/* change all uppercase vowels to lowercase */
strioctl.ic_cmd = XCASE;
strioctl.ic_timout = 0;         /* default timeout (15 sec) */
strioctl.ic_dp = "AEIOU";
strioctl.ic_len = strlen(strioctl.ic_dp);

if (ioctl(fd, I_STR, &strioctl) < 0) {
    perror("ioctl I_STR failed");
    exit(3);
}

/* delete all instances of the chars 'x' and 'X' */
strioctl.ic_cmd = DELETE;
strioctl.ic_dp = "xX";
strioctl.ic_len = strlen(strioctl.ic_dp);

if (ioctl(fd, I_STR, &strioctl) < 0) {
    perror("ioctl I_STR failed");
    exit(4);
}

Listing 3.4: Module and Driver Control Example

There exist two methods for controlling modules and drivers using the ioctl(2s) system call:

Transparent

In a transparent ioctl(2s) call, the cmd argument to the call is the command issued to the module or device, and the arg argument is specific to the command and defined by the receiver of the command. This is the traditional method of controlling character devices and can also be supported by a STREAMS module and driver.

I_STR

In an I_STR ioctl(2s) call, the cmd argument to the call is I_STR and the arg argument of the call is a pointer to a strioctl structure (defined in sys/stropts.h) describing the particulars of the call. This method is specific to STREAMS special files.

It is this latter method that is illustrated in Listing 3.4.

The strioctl structure, defined in sys/stropts.h, has the following format:

struct strioctl {
    int ic_cmd;                         /* ioctl request */
    int ic_timout;                      /* ACK/NAK timeout */
    int ic_len;                         /* length of data argument */
    char *ic_dp;                        /* ptr to data argument */
};
ic_cmd      identifies the command intended for a module or driver,
ic_timout   specifies the number of seconds an I_STR request should wait
            for an acknowledgement before timing out,
ic_len      is the number of bytes of data to accompany the request, and
ic_dp       points to that data.

In the Listing 3.4, two commands are issued to the character conversion module, XCASE and DELETE.25

To issue the example XCASE command, ic_cmd is set to the command, XCASE, and ic_dp and ic_len are set to describe the string ‘AEIOU’. Upon receiving this command, the example module will convert uppercase vowels to lowercase in the data subsequently passing through the module. ic_timout is set to zero to indicate that the default timeout (‘15’ seconds) should be used if no response is received.

To issue the example DELETE command, ic_cmd is set to the command, DELETE, and ic_dp and ic_len are set to describe the string ‘xX’. Upon receiving this command, the example module will delete all occurrences of the characters ‘X’ and ‘x’ from data subsequently passing through the module. Again, ic_timout is set to zero to indicate that the default timeout (‘15’ seconds) should be used if no response is received.

Once issued, the Stream head takes an I_STR ioctl(2s) command and packages its contents into a STREAMS message consisting of an M_IOCTL block and an M_DATA block and passes it downstream to be considered by the modules and driver on the Stream. The ic_cmd and ic_len values are stored in the M_IOCTL block and the data described by ic_dp and ic_len are copied into the M_DATA block. Each module, and ultimately the driver, examines the ioc_cmd field in the M_IOCTL message to see if the command is known to it. If the command is unknown to a module, it is passed downstream for consideration by other modules on the Stream or by the driver. If the command is unknown to the driver, it is negatively acknowledged and an error is returned from the ioctl(2s) call.

The user level process calling ioctl(2s) with the I_STR(7) (see streamio(7)) command will block awaiting an acknowledgement. The calling process will block up to ic_timout seconds waiting for a response. If ic_timout is ‘0’, it indicates that the default timeout value (typically ‘15’ seconds) should be used. If ic_timout is ‘-1’, it indicates that an infinite timeout should be used. If the timeout occurs, the ioctl(2s) command will fail with the error [ETIME]. Only one process (thread) can be executing an I_STR(7) (see streamio(7)) ioctl(2s) call on a given Stream at a time. If an I_STR is being executed when another process (or thread) issues an I_STR of its own, the second process (or thread) will block until the previous I_STR operation completes. However, the process (or thread) will not block indefinitely if ic_timout is set to a finite timeout value.

When successful, the I_STR command returns the value defined by the command operation itself, and also returns any information to be returned in the area pointed to by ic_dp on the call. The ic_len member is ignored for the purposes of returning data, and it is the caller’s responsibility to ensure that the buffer pointed to by ic_dp is large enough to hold the returned data.


3.2.6.3 Stream Dismantling with Modules

As shown in Listing 3.5, the remainder of this example follows the example in Listing 1.1 in Basic Streams Operations: data is read from the Stream and then echoed back to the Stream.

    while ((count = read(fd, buf, BUFLEN)) > 0) {
        if (write(fd, buf, count) != count) {
            perror("write failed");
            break;
        }
    }
    exit(0);
}

Listing 3.5: Module and Driver Control Example (cont’d)

The exit(2) system call in Listing 3.5 will result in the dismantling of the Stream as it is closed. However, in this example, when the Stream is closed with the chconv module still present on the Stream, the module is automatically popped as the Stream is dismantled.

Alternatively, it is possible to explicitly pop the module from the Stream using the I_POP(7) (see streamio(7)) ioctl(2s) command. The I_POP command removes the module that exists immediately below the Stream head. It is not necessary to specify the module to be popped by name: whatever module exists just beneath the Stream head will be popped.


3.2.6.4 Stream Construction Example Summary

This example provided illustration of the ability of STREAMS to modify the behaviour of a driver without the need to modify driver code. A STREAMS module was pushed that provided the extended behaviour independent of the underlying driver. The I_PUSH and I_POP commands used to push and pop STREAMS modules were also illustrated by the example.

Many other streamio(7) ioctl commands are available to the applications programmer to manipulate and interrogate configuration and other characteristics of a Stream. See streamio(7) for details.


4 Processing

Each module or driver queue pair has associated with it open, close and, optionally, admin procedures. These procedures are specified by the qi_qopen, qi_qclose and qi_qadmin function pointers in the qinit(9) structure associated with the read-side queue(9) of the queue pair. The open and close procedures were the focus of previous chapters.

Each queue(9) in a module or driver queue pair has associated with it a put and an optional service procedure. These procedures are specified by the qi_putp and qi_srvp function pointers in the qinit(9) structure associated with each queue(9) in the queue pair. The put and service procedures are responsible for the processing of messages and the implementation of flow control, and are the focus of this chapter.


4.1 Procedures

The put and service procedures associated with a given queue(9) in a module queue pair are responsible for the processing of messages entering and leaving the queue. Processing within these procedures is performed according to the message type of the message being processed. Messages can be modified, queued, passed in either direction on a Stream, freed, copied, duplicated, or otherwise manipulated. In a typical filter module, a resulting message is normally passed along the Stream in the same direction it was travelling when it was received.

A queue must always have a put procedure. The put procedure will be invoked when messages are passed to the queue from an upstream or downstream module. A put procedure will either process the message immediately, or place the message on its queue awaiting later processing by the module or driver’s service procedure.

Optionally, a queue can also have an associated service procedure. The service procedure is responsible for processing the backlog of any queued messages from the message queue.

With both a put and service procedure it is possible to tune performance of a module or driver by performing actions required immediately from the put procedure while performing actions that can be deferred from the service procedure. The service procedure provides for the implementation of flow control and can also be used to promote bulk processing of messages.

The put and particularly the service procedures are not directly associated with any user level process. They are kernel level coroutines that normally run under the context of the STREAMS scheduler kernel thread.26

4.1.1 Put Procedure

The put procedure is invoked whenever a message is passed to a queue. A message can be passed to a queue using the put(9s), putnext(9), putctl(9), putctl1(9), putctl2(9), putnextctl(9), putnextctl1(9), putnextctl2(9), qreply(9) STREAMS utilities. The Stream head, modules and drivers use these utilities to deliver messages to a queue.27 Invoking the put procedure of a queue with one of these utilities is the only accepted way of passing a message to a queue.28

A queue’s put procedure is specified by the qi_putp member of the qinit(9) structure associated with the queue(9). This is illustrated in Figure 18a. In general, the read- and write-side queues of a module or driver have different qinit(9) structures associated with them as there are differences in upstream and downstream message processing; however, it is possible for read- and write-side queues to share the same qinit(9) structure.

Put Procedure Example

Figure 18a. Put Procedure Example

The put procedure processes a message immediately or places it onto the message queue for later processing (generally by the service procedure). Because the put procedure is invoked before any queueing takes place, it provides a processing point at which the module or driver can take action on time critical messages. put procedures are executed at a higher priority than service procedures. put procedures in the upstream direction may even be executed with interrupts disabled.

As illustrated in Figure 18a, when a queue’s put procedure is invoked by an adjacent queue’s put procedure (e.g. using putnext(9)), the qi_putp member of the queue’s associated qinit(9) structure is invoked by STREAMS as subroutine call.

When a number of modules are present in a Stream, as illustrated in Figure 18a, each successive direct invocation of a put procedure is nested inside the others. For example, if the put procedure on the read-side of the driver is invoked by calling put(9s) from the driver’s interrupt service routine, and then each successive put procedure calls putnext(9), by the time that the message reaches the Stream head, the driver, ‘ModA’, ‘ModB’, ‘ModC’, and the Stream head put procedures will be nested within one another.

The advantage of this approach is that put processing is invoked sequentially and immediately. A disadvantage of this approach is that, if there are additional stack frames nested in each put procedure, the interrupt service routine stack limits can be exceeded, causing a kernel crash. This is also the case for normal (non-ISR) operation and the kernel stack limits might be exceeded if excessive nesting of put procedures occurs.29

The driver and module writers need to be cognisant of the fact that a limited stack might exist at the time that the put procedure is invoked. However, STREAMS also provides the service procedure as a way to defer processing to a ‘!in_irq()’ context.

4.1.2 Service Procedure

Each queue in module or driver queue pair can also have a service procedure associated with it.

A queue’s service procedure is specified by the qi_srvp member of the qinit(9) structure associated with the queue(9). If a queue does not have a service procedure, the associated qi_srvp member is set to NULL. If the queue has a service procedure, the associated qi_srvp member points to the service procedure function. As with put procedures, in general, the read- and write-side queues of a module or driver have different qinit(9) structures associated with them, as there are normally differences between upstream and downstream message processing; however, it is possible for read- and write-side queues to share the same qinit(9) structure.

A queue’s service procedure is never invoked directly by an adjacent module or driver. Adjacent modules or drivers invoke a queue’s put procedure. The put procedure can then defer processing to the service procedure in a number of ways. The most direct way that a put procedure can invoke a service procedure for a message is to place that message on the message queue using putq(9). Once the message is placed on the message queue in this manner, the put procedure can return, freeing the associated stack frame. Also, placing a message on the message queue with putq(9) will normally result in the queue’s service procedure being scheduled for later execution by the STREAMS scheduler.

Note that the STREAMS scheduler is separate and distinct from the Linux scheduler. The Linux scheduler is responsible for scheduling tasks, whereas the STREAMS scheduler is only responsible for scheduling the execution of queue service procedures (and a few other deferrable STREAMS housekeeping chores). The STREAMS scheduler executes pending queue service procedures on a First-Come-First-Served (FCFS) basis. When a queue’s service procedure is scheduled, its queue(9) structure is linked onto the tail of the list of queues awaiting service procedure execution for the STREAMS scheduler. When the STREAMS scheduler runs queues, each queue on the list is unlinked, starting at the head of the list, and its service procedure executed.

To provide responsive scheduling of service procedures without necessarily requiring a task switch (to the STREAMS kernel thread), the STREAMS scheduler is invoked and queue service procedures executed within user context before returning to user level from any STREAMS system call.

Processing of messages within a queue service procedure is performed by taking messages off of the message queue and processing them in order. Because messages are queued on the message queue with consideration to the priority class of the message, messages of higher priority are processed by the service procedure first. However, provided that no other condition impedes further processing of messages (e.g. flow control, inability to obtain a message block), service procedures process all of the messages on the message queue available to them and then return. Because service procedures are invoked by the STREAMS scheduler on a FCFS basis, a priority message on a queue does not increase the scheduling priority of a queue’s service procedure with respect to other queue service procedures: it only affects the priority of processing one message on the message queue with respect to other messages on the same queue. As a result, higher priority messages will experience a shorter processing latency than lower priority messages.

In general, because drivers run at a software priority higher than the STREAMS scheduler, drivers calling put(9s) can cause multiple messages to be queued for service before the service procedure runs. On the other hand, because the STREAMS scheduler is always invoked before return to user level at the end of a system call, it is unlikely that the Stream head calling put(9s) will result in multiple messages being accumulated before the corresponding service procedure runs.

4.1.3 Put and Service Procedure Summary

Processing of messages can be divided between put and service procedures to meet the requirements for STREAMS processing, and to meet the demands of the module or driver. Some message types might be processed entirely within the put procedure. Others might be processed only with the service procedure. A third class of messages might have processing split between put and service procedures. Processing of upstream and downstream messages can be independent, giving consideration to the needs of each message flow. The mechanism allows a flexible arrangement for the module and driver writer.

put and service procedures are addressed in more detail under Modules and Drivers. Design guidelines for put and service processing are given in ‘Design Guidelines’, ‘Module Design Guidelines’, and ‘Driver Design Guidelines’.


4.2 Asynchronous Example


5 Messages


5.1 Messages Overview

All communication between the Stream head, modules and drivers within the STREAMS framework is based on message passing. Control and data information is passed along the Stream rather than by direct function calls between modules. Adjacent modules and drivers are invoked by passing pointers to messages to the target queue’s put procedure. This permits processing to be deferred (i.e. to a service procedure) and to be subjected to flow control and scheduling within the STREAMS framework.

At the Stream head, conversion between function-call-based system calls and the message-oriented STREAMS framework is performed. Some system calls retrieve upstream messages or information about upstream messages at the Stream head queue pair; others create messages and pass them downstream from the Stream head.

At the Stream end (driver), conversion between device or pseudo-device actions and events and STREAMS messages is performed in a similar manner to that at the Stream head. Downstream control messages are consumed and converted into corresponding device actions; device events generate appropriate control messages, which the driver sends upstream. Downstream messages containing data are transferred to the device, and data received from the device is converted to upstream data messages.

Within a linear segment from Stream head to Stream end, messages are modified, created, destroyed and passed along the Stream as required by each module in the Stream.

Messages consist of a 3-tuple of a message block structure (msgb(9)), a data block structure (datab(9)) and a data buffer. The message block structure is used to provide an instance of a reference to a data block and pointers into the data buffer. The data block structure is used to provide information about the data buffer, such as message type, separate from the data contained in the buffer. Messages are normally passed between STREAMS modules, drivers and the Stream head using utilities that invoke the target module’s put procedure, such as put(9s), putnext(9), qreply(9). Messages travel along a Stream with successive invocations of each driver, module and Stream head’s put procedure.


5.1.1 Message Types

Each data block (datab(9)) is assigned a message type. The message type discriminates the use of the message by drivers, modules and the Stream head. Message types are defined in sys/stream.h. Most of the message types may be assigned by a module or driver when it generates a message, and the message type can be modified as a part of message processing. The Stream head uses a wider set of message types to perform its function of converting the functional interface to the user process into the messaging interface used by STREAMS modules and drivers.

Most of the defined message types are solely for use within the STREAMS framework. A more limited set of message types (M_PROTO, M_PCPROTO and M_DATA) can be used to pass control and data information to and from the user process via the Stream head. These message types can be generated and consumed using the read(2s), write(2s), getmsg(2s), getpmsg(2s), putmsg(2s) and putpmsg(2s) system calls and some streamio(7) STREAMS ioctl(2s) commands.

Below, the message types are classified by queueing priority and direction of normal travel (downstream or upstream), and briefly described:

5.1.1.1 Ordinary Messages

Ordinary Messages (also called normal messages) are listed in the table below. Messages with a ‘D’ beside them can normally travel in the downstream direction; with a ‘U’, upstream. Messages with an ‘H’ beside them can be generated by the Stream head; an ‘M’, a module; an ‘E’, the Stream end or driver. Messages with an ‘h’ beside them are consumed and interpreted by the Stream head; an ‘m’, interpreted by a module; an ‘e’, consumed and interpreted by the Stream end or driver.

The following message types are defined by SVR 4.2:

M_DATA      DU  HME  hme  User data message for I/O system calls
M_PROTO     DU  HME  hme  Protocol control information
M_BREAK     D-  ME   me   Request to a Stream driver to send a "break"
M_PASSFP    -U  H    h    File pointer passing message30
M_SIG       -U  ME   h    Signal sent from a module/driver to a user
M_DELAY     D-  ME   me   Request a real-time delay on output
M_CTL       DU  ME   me   Control/status request used for inter-module communication
M_IOCTL     D-  H    me   Control/status request generated by a Stream head
M_SETOPTS   -U  ME   h    Set options at the Stream head, sent upstream
M_RSE       DU  ME   me   Reserved for internal use

The following message types are not defined by SVR 4.2 and are OpenSS7 specific, or are specific to another SVR 4.2-based implementation:

M_EVENT
M_TRAIL
M_BACKWASH                AIX specific message for driver direct I/O.

Ordinary messages are described in detail throughout this chapter and in Message Types.

5.1.1.2 High Priority Messages

High priority messages are listed in the table below. Messages with a ‘D’ beside them can normally travel in the downstream direction; with a ‘U’, upstream. Messages with an ‘H’ beside them can be generated by the Stream head; an ‘M’, a module; an ‘E’, the Stream end or driver. Messages with an ‘h’ beside them are consumed and interpreted by the Stream head; an ‘m’, interpreted by a module; an ‘e’, consumed and interpreted by the Stream end or driver.

The following message types are defined by SVR 4.2:

M_IOCACK    -U  ME   h    Positive ioctl(2s) acknowledgement
M_IOCNAK    -U  ME   h    Negative ioctl(2s) acknowledgement
M_PCPROTO   DU  HME  hme  Protocol control information
M_PCSIG     -U  ME   h    Signal sent from a module/driver to a user
M_READ      D-  H    me   Read notification, sent downstream
M_FLUSH     DU  HME  hme  Flush module queue
M_STOP      D-  ME   me   Suspend output
M_START     D-  ME   me   Restart stopped device output
M_HANGUP    -U  ME   h    Set a Stream head hangup condition, sent upstream
M_ERROR     -U  ME   h    Report downstream error condition, sent upstream
M_COPYIN    -U  ME   h    Copy in data for transparent31 ioctls, sent upstream
M_COPYOUT   -U  ME   h    Copy out data for transparent32 ioctls, sent upstream
M_IOCDATA   D-  H    me   Data for transparent33 ioctls, sent downstream
M_PCRSE     DU  ME   hme  Reserved for internal use
M_STOPI     D-  ME   me   Suspend input
M_STARTI    D-  ME   me   Restart stopped device input

The following message types are not defined by SVR 4.2 and are OpenSS7 specific, or are specific to another SVR 4.2-based implementation:

M_PCCTL     DU  ME   me   Same as M_CTL, but high priority.
M_PCSETOPTS -U  ME   h    Same as M_SETOPTS, but high priority.
M_PCEVENT                 Same as M_EVENT, but high priority.
M_UNHANGUP  -U  ME   h    Reverses a previous M_HANGUP message.
M_NOTIFY
M_HPDATA    DU  HME  hme  Same as M_DATA, but high priority.
M_LETSPLAY                AIX specific message for driver direct I/O.
M_DONTPLAY                AIX specific message for driver direct I/O.
M_BACKDONE                AIX specific message for driver direct I/O.
M_PCTTY

High Priority messages are described in detail throughout this chapter and in Message Types.


5.1.2 Expedited Data


5.2 Message Structure

STREAMS messages consist of a chain of one or more message blocks. A message block is a triplet of a msgb(9) structure, a datab(9) structure, and a variable length data buffer. A message block (msgb(9) structure) is an instance of a reference to the data contained in the data buffer. Many message block structures can refer to the same data block and data buffer. A data block (datab(9) structure) contains information not contained in the data buffer, but directly associated with the data buffer (e.g., the size of the data buffer). One and only one data block is normally associated with each data buffer. Data buffers can be internal to the message block, data block, data buffer triplet, automatically allocated using kmem_alloc(9), or allocated by the module or driver and associated with a data block (i.e., using esballoc(9)).

The msgb(9) structure is defined in sys/stream.h and has the following format and members:

typedef struct msgb {
        struct msgb *b_next;            /* next msgb on queue */
        struct msgb *b_prev;            /* prev msgb on queue */
        struct msgb *b_cont;            /* next msgb in message */
        unsigned char *b_rptr;          /* rd pointer into datab */
        unsigned char *b_wptr;          /* wr pointer into datab */
        struct datab *b_datap;          /* pointer to datab */
        unsigned char b_band;           /* band of this message */
        unsigned char b_pad1;           /* padding */
        unsigned short b_flag;          /* message flags */
        long b_pad2;                    /* padding */
} mblk_t;

The members of the msgb(9) structure are described as follows:

b_next    points to the next message block on a message queue;
b_prev    points to the previous message block on a message queue;
b_cont    points to the next message block in the same message chain;
b_rptr    points to the beginning of the data (the point from which to read);
b_wptr    points to the end of the data (the point at which to write);
b_datap   points to the associated data block (datab(9));
b_band    indicates the priority band;
b_pad1    provides padding;
b_flag    holds flags for this message block. Flags are normally set only on the first block of a message. Valid flags are discussed below; and,
b_pad2    reserved.34

The b_band member determines the priority band of the message. This member determines the queueing priority (placement) in a message queue when the message type is an ordinary message type. High priority message types are always queued ahead of ordinary message types, and the b_band member is always set to ‘0’ whenever a high priority message is queued by a STREAMS utility function. When allocb(9) or esballoc(9) are used to allocate a message block, the b_band member is initially set to ‘0’. This member may be modified by a module or driver.
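The chaining of blocks through b_cont and the b_rptr/b_wptr conventions can be illustrated with a small sketch. The structures below are simplified stand-ins for the real msgb(9) (only the members used here), and the msg_size helper is an illustrative analogue of the msgdsize(9) utility, not the kernel's implementation:

```c
#include <stddef.h>

/* Simplified stand-in for msgb(9); only the members used here. */
typedef struct msgb {
        struct msgb *b_cont;            /* next block in the same message */
        unsigned char *b_rptr;          /* start of valid data in this block */
        unsigned char *b_wptr;          /* end of valid data in this block */
} mblk_t;

/* Total number of data bytes in a message chain, in the style of
 * msgdsize(9): walk the b_cont chain, summing b_wptr - b_rptr. */
static size_t msg_size(const mblk_t *mp)
{
        size_t bytes = 0;

        for (; mp != NULL; mp = mp->b_cont)
                bytes += (size_t)(mp->b_wptr - mp->b_rptr);
        return bytes;
}
```

For example, a two-block message whose blocks hold 4 and 6 bytes of data has a total size of 10 bytes.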

Note that in System V Release 4.0, certain data structures fundamental to the kernel (for example, device numbers, user IDs) were enlarged to enable them to hold more information. This feature was referred to as Expanded Fundamental Types (EFT). Since some of this information was passed in STREAMS messages, there was a binary compatibility issue for pre-System V Release 4 drivers and modules. #ifdef’s were added to the kernel to provide a transition period for these drivers and modules to be recompiled, and to allow it to be built to use the pre-System V Release 4 short data types or the System V Release 4 long data types. Support for short data types will be dropped in some future releases.35

The values that can be used in b_flag are exposed when sys/stream.h is included:

#define MSGMARK         (1<<0)  /* last byte of message is marked */
#define MSGNOLOOP       (1<<1)  /* don't loop message at stream head */
#define MSGDELIM        (1<<2)  /* message is delimited */
#define MSGNOGET        (1<<3)  /* UnixWare/Solaris/Mac OT/UXP/V getq does not
                                   return message */
#define MSGATTEN        (1<<4)  /* UXP/V attention on read side */
#define MSGMARKNEXT     (1<<4)  /* Solaris */
#define MSGLOG          (1<<4)  /* UnixWare */
#define MSGNOTMARKNEXT  (1<<5)  /* Solaris */
#define MSGCOMPRESS     (1<<8)  /* OSF: compress like messages as space allows */
#define MSGNOTIFY       (1<<9)  /* OSF: notify when message consumed */

The following flags are defined by SVR 4.2:

MSGMARK         last byte of message is marked
MSGNOLOOP       don’t loop message at Stream head
MSGDELIM        message is delimited

The following flags are not defined by SVR 4.2 and are OpenSS7 specific, or are specific to another SVR 4.2-based implementation:

MSGNOGET        UnixWare/Solaris/Mac OT/UXP/V getq does not return message
MSGATTEN        UXP/V attention on read side
MSGMARKNEXT     Solaris
MSGLOG          UnixWare
MSGNOTMARKNEXT  Solaris
MSGCOMPRESS     OSF: compress like messages as space allows
MSGNOTIFY       OSF: notify when message consumed
The datab(9) structure and its associated free_rtn(9) structure are defined in sys/stream.h and have the following format and members:

typedef struct free_rtn {
        void (*free_func) (caddr_t);
        caddr_t free_arg;
} frtn_t;

typedef struct datab {
        union {
                struct datab *freep;
                struct free_rtn *frtnp;
        } db_f;
        unsigned char *db_base;
        unsigned char *db_lim;
        unsigned char db_ref;
        unsigned char db_type;
        unsigned char db_class;
        unsigned char db_pad;
        unsigned int db_size;
#if 0
        unsigned char db_cache[DB_CACHESIZE];
#endif
#if 0
        unsigned char *db_msgaddr;
        long db_filler;
#endif
        /* Linux Fast-STREAMS specific members */
        atomic_t db_users;
} dblk_t;

#define db_freep db_f.freep
#define db_frtnp db_f.frtnp

The following members are defined by SVR 4.2:

db_freep     pointer to an external data buffer to be freed;
db_frtnp     pointer to a routine to free an extended buffer;
db_base      base of the buffer (first usable byte);
db_lim       limit of the buffer (last usable byte plus 1);
db_ref       number of references to this data block by message blocks;
db_type      the data block type (i.e., STREAMS message type);
db_class     the class of the message (normal or high priority);
db_iswhat    another name for db_class;
db_pad       padding;
db_filler2   another name for db_pad;
db_size      size of the buffer;
db_cache     SVR 3.1 internal buffer;36
db_msgaddr   pointer to the msgb(9) structure allocated with this data block in a 3-tuple;37 and,
db_filler    filler.38

The following members are not defined by SVR 4.2 and are OpenSS7 specific:

db_users     same as db_ref, but atomic.

5.2.1 Message Linkage

The message block (msgb(9) structure) provides an instance of a reference to the data buffer associated with the message block. Multiple message blocks can be chained together (with b_cont pointers) into a composite message. When multiple message blocks are chained, the type of the first message block (its db_type) determines the type of the overall message. For example, a message consisting of an M_IOCTL message block followed by an M_DATA message block is considered to be an M_IOCTL message. Other message block members of the first message block, such as b_band, also apply to the entire message. The initial message block of a message block chain can be queued onto a message queue (with the b_next and b_prev pointers). The chaining of message blocks into messages using the b_cont pointer, and linkage onto message queues using the b_next and b_prev pointers, are illustrated in Figure 21.

Message Form and Linkage

Figure 21. Message Form and Linkage

A message can occur stand-alone (that is, it is not present on any message queue as it is in a module or driver’s put procedure) or can be queued on a message queue awaiting processing by the queue’s service procedure. The b_next and b_prev pointers are not significant for a stand-alone message and are initialized to NULL by STREAMS when the message is not queued on a message queue.

A message block is an instance of a reference to a data block (and therefore a data buffer). Multiple message blocks can refer to the same data block. This is illustrated in Figure 21. In the figure, the second message block of ‘Message 1’ shares a data block with the second message block of ‘Message 2’. Message blocks that share data blocks result from use of the dupb(9) and dupmsg(9) STREAMS utilities. The first of these utilities, dupb(9), will duplicate a message block, obtaining a new reference to the data block. The db_ref member of the associated data block will be incremented to reflect the number of message blocks that refer to this data block. The second of these utilities, dupmsg(9), duplicates all of the message blocks in a message, following the b_cont pointers, resulting in a duplicated message.

Duplication of message blocks provides an excellent way of obtaining a new reference to a data buffer without the overhead of copying each byte of the buffer. A common use of duplication is to obtain a duplicate of a message to be held for retransmission, while another duplicate is passed to the next module for transmission.
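The reference-counting behaviour behind dupb(9) can be sketched in miniature. The structures and the dup_blk/free_blk helper names below are illustrative simplifications (the real dupb(9) and freeb(9) allocate from the STREAMS buffer pools and also handle esballoc(9) buffers):

```c
#include <stdlib.h>

/* Simplified data block: only the reference count and buffer matter here. */
typedef struct datab {
        unsigned char db_ref;           /* message blocks referring to us */
        unsigned char *db_base;         /* the shared data buffer */
} dblk_t;

typedef struct msgb {
        struct datab *b_datap;
        unsigned char *b_rptr;
        unsigned char *b_wptr;
} mblk_t;

/* dupb-style duplication: a new msgb referring to the same datab,
 * with db_ref bumped; the data buffer itself is not copied. */
static mblk_t *dup_blk(const mblk_t *mp)
{
        mblk_t *np = malloc(sizeof(*np));

        if (np == NULL)
                return NULL;
        *np = *mp;                      /* copy rptr/wptr and datab pointer */
        np->b_datap->db_ref++;          /* one more reference to the data */
        return np;
}

/* freeb-style release: free the data buffer only on the last reference. */
static int free_blk(mblk_t *mp)
{
        int last = (--mp->b_datap->db_ref == 0);

        if (last)
                free(mp->b_datap->db_base);
        free(mp);
        return last;                    /* nonzero if the buffer was freed */
}
```

In the retransmission pattern described above, one duplicate would be held on a queue while the other is passed downstream; the shared buffer survives until the last reference is released.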

Despite the advantages of duplication, copying a message block or message chain is also possible with the copyb(9) and copymsg(9) STREAMS utilities. These utilities copy the message block, data block, and data buffer for one message block (copyb(9)) or each message block in a chain (copymsg(9)).

Data Buffer References

Figure 21b. Data Buffer References

Being a reference to a data buffer, the message block has two pointers into the data buffer that define the range of data used by the reference. The b_rptr indicates the beginning of the range of data in the data buffer, and represents the position at which a module or driver would begin reading data; the b_wptr, the end of the range of data, where a module or driver would begin writing data. The data block, on the other hand, has two pointers representing the absolute limits of the data buffer. The db_base indicates the beginning of the data buffer; db_lim, the end. This relationship between pointers into the data buffer is illustrated in Figure 21b.
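These pointer relationships imply the invariant db_base <= b_rptr <= b_wptr <= db_lim for any well-formed block. A small consistency check in that spirit, using simplified stand-in structures rather than the real msgb(9)/datab(9):

```c
/* Simplified stand-ins; only the buffer-limit members are used here. */
typedef struct datab {
        unsigned char *db_base;         /* first usable byte */
        unsigned char *db_lim;          /* last usable byte plus 1 */
} dblk_t;

typedef struct msgb {
        struct datab *b_datap;
        unsigned char *b_rptr;          /* start of valid data */
        unsigned char *b_wptr;          /* end of valid data */
} mblk_t;

/* Nonzero if the block's read/write pointers lie within its buffer. */
static int blk_ok(const mblk_t *mp)
{
        const dblk_t *db = mp->b_datap;

        return db->db_base <= mp->b_rptr
            && mp->b_rptr <= mp->b_wptr
            && mp->b_wptr <= db->db_lim;
}
```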

STREAMS provides a library of utility functions used to manipulate message blocks, data blocks and data buffers. The members of a message block or data block should not be manipulated directly by the module or driver writer; an appropriate STREAMS message utility should be used instead. See Utilities.


5.2.2 Sending and Receiving Messages

As shown in the message lists of Messages Overview, a large subset of the available message types can be generated and consumed by modules and drivers. Another subset is dedicated to generation and consumption by the Stream head.

Message types that are dedicated to passing control and data information between the Stream and a user level process are the M_PROTO, M_PCPROTO, and M_DATA messages.39 STREAMS-specific system calls are provided to user level processes so that they may exchange M_PROTO, M_PCPROTO and M_DATA messages with a Stream. This permits a user level process to interact with the Stream in a similar fashion as a module on the Stream, allowing user level processes to also present a service interface.40

In general, all system calls interact directly (by subroutine interface) with the Stream head. An exception is the open(2s) and close(2s) system calls, which directly invoke the module or driver qi_qopen and qi_qclose procedures. All other system calls invoke subroutines provided by the Stream head that can result in the generation and transmission of a message on the Stream from the Stream head, or consumption of a message at the Stream head.

The traditional write(2s) system call is capable of directly generating M_DATA messages and having them passed downstream. The traditional read(2s) system call can collect M_DATA messages (and in some read modes, M_PROTO and M_PCPROTO messages) that have arrived at the Stream head. These system calls provide a backward compatible interface for character device drivers implemented under STREAMS.41

The STREAMS-specific putmsg(2s) and putpmsg(2s) system calls provide the user level process with the ability to directly generate M_PROTO, M_PCPROTO or M_DATA messages and have them sent downstream on the Stream from the Stream head. The getmsg(2s) and getpmsg(2s) system calls provide the ability to collect M_PROTO, M_PCPROTO and M_DATA messages from the Stream head. These system calls are superior to the write(2s) and read(2s) system calls in that they provide finer control over the composition of the generated message, and more information concerning the composition of a consumed message. Whereas write(2s) and read(2s) pass only one buffer from the user, putmsg(2s), putpmsg(2s), getmsg(2s) and getpmsg(2s) provide two buffers: one for the control part of the message, to transfer M_PROTO or M_PCPROTO message blocks with preservation of boundaries; another for the data part, to transfer M_DATA message blocks – all in a single call. Also, data transfer with write(2s) and read(2s) is by nature byte-stream oriented, whereas control and data transfer with putmsg(2s) and getmsg(2s) is by nature message oriented. write(2s) and read(2s) provide no mechanism for assigning priority to generated messages or indicating the priority of received messages: putpmsg(2s) and getpmsg(2s) provide the ability to specify criteria for the band (b_band) of the generated or consumed message.


5.2.2.1 putmsg(2s)

putmsg(2s) provides the ability for a user level process to generate M_PROTO, M_PCPROTO and M_DATA messages and have them sent downstream on a Stream. The user specifies a control part of the message that is used to fill the M_PROTO or M_PCPROTO message block in the resulting message, and a data part of the message that is used to fill the M_DATA message block in the resulting message.

The prototype for the putmsg(2s) system call is exposed by including the sys/stropts.h system header file. The prototype for the putmsg(2s) system call is as follows:

int putmsg(int fildes, const struct strbuf *ctlptr,
           const struct strbuf *dataptr, int flags);

Where the arguments are interpreted as follows:

fildes

specifies the Stream upon which to generate messages and is a file descriptor that was returned by the corresponding call to open(2s) or pipe(2s) that created the Stream.

ctlptr

is a pointer to a read-only strbuf(5) structure that is used to specify the control part of the message.

dataptr

is a pointer to a read-only strbuf(5) structure that is used to specify the data part of the message.

flags

specifies whether the control part of the message is to be of type M_PROTO or of type M_PCPROTO. It can have values ‘0’ (specifying that an M_PROTO message be generated) or ‘RS_HIPRI’ (specifying that an M_PCPROTO message be generated).

The ctlptr and dataptr point to a strbuf(5) structure that is used to specify the control and data parts of the message. The strbuf(5) structure has the format and members as follows:

struct strbuf {
        int maxlen;     /* maximum buffer length */
        int len;        /* length of data */
        char *buf;      /* pointer to buffer */
};

The members of the strbuf(5) structure are interpreted by putmsg(2s) as follows:

maxlen   specifies the maximum length of the buffer and is ignored by putmsg(2s);
len      specifies the length of the data for transfer in the control or data part of the message; and,
buf      specifies the location of the data buffer containing the data for transfer in the control or data part of the message.

If ctlptr is set to ‘NULL’ on call, or the len member of the strbuf(5) structure pointed to by ctlptr is set to ‘-1’, then no control part (M_PROTO or M_PCPROTO message block) will be placed in the resulting message.

If dataptr is set to ‘NULL’ on call, or the len member of the strbuf(5) structure pointed to by dataptr is set to ‘-1’, then no data part (M_DATA message block) will be placed in the resulting message.
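The two rules above (a NULL strbuf pointer, or a len of -1, suppresses the corresponding message part) can be captured in a small predicate. The strbuf layout is the one documented above; the part_present helper name is illustrative:

```c
#include <stddef.h>

/* strbuf layout as documented in sys/stropts.h. */
struct strbuf {
        int maxlen;     /* maximum buffer length (ignored by putmsg) */
        int len;        /* length of data */
        char *buf;      /* pointer to buffer */
};

/* Nonzero if this strbuf would contribute a control or data part to
 * the generated message: a NULL pointer or len == -1 suppresses it. */
static int part_present(const struct strbuf *p)
{
        return p != NULL && p->len != -1;
}
```

A caller wishing to send a control-only message would therefore pass a filled ctlptr and either a NULL dataptr or one whose len is -1.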

For additional details, see the putmsg(2s) or putmsg(2p) reference page.


5.2.2.2 getmsg(2s)

getmsg(2s) provides the ability for a user level process to retrieve M_PROTO, M_PCPROTO and M_DATA messages that have arrived at the Stream head. The user specifies an area into which to receive any control part of the message (from M_PROTO or M_PCPROTO message blocks in the message), and an area into which to receive any data part of the message (from M_DATA message blocks in the message).

The prototype for the getmsg(2s) system call is exposed by including the sys/stropts.h system header file. The prototype for the getmsg(2s) system call is as follows:

int getmsg(int fildes, struct strbuf *ctlptr, struct strbuf *dataptr,
           int *flagsp);

Where the arguments are interpreted as follows:

fildes

specifies the Stream from which to retrieve messages and is a file descriptor that was returned by the corresponding call to open(2s) or pipe(2s) that created the Stream.

ctlptr

is a pointer to an strbuf(5) structure that is used to specify the area to accept the control part of the message.

dataptr

is a pointer to an strbuf(5) structure that is used to specify the area to accept the data part of the message.

flagsp

is a pointer to an integer flags word that is used both to specify the criteria for the type of message to be retrieved, on call, as well as indicating the type of the message retrieved, on return.

On call, the integer pointed to by flagsp can contain ‘0’ indicating that the first available message is to be retrieved regardless of priority; or, ‘RS_HIPRI’, indicating that only the first high priority message is to be retrieved and no low priority message. On successful return, the integer pointed to by flagsp will contain ‘0’ to indicate that the message retrieved was an ordinary message (M_PROTO or just M_DATA), or ‘RS_HIPRI’ to indicate that the message retrieved was of high priority (M_PCPROTO or just M_HPDATA).

The members of the strbuf(5) structure are interpreted by getmsg(2s) as follows:

maxlenspecifies the maximum length of the buffer into which the message part is to be written;
lenignored by getmsg(2s) on call, but set on return to indicate the length of the data that was actually written to the buffer by getmsg(2s); and,
bufspecifies the location of the data buffer to contain the data retrieved for the control or data part of the message.

If ctlptr or dataptr are ‘NULL’ on call, or the maxlen field of the corresponding strbuf(5) structure is set to ‘-1’, then getmsg(2s) will not retrieve the corresponding control or data part of the message.

For additional details, see the getmsg(2s) or getmsg(2p) reference page.


5.2.2.3 putpmsg(2s)

putpmsg(2s) is similar to putmsg(2s), but provides the additional ability to specify the queue priority band (b_band) of the resulting message. The prototype for the putpmsg(2s) system call is exposed by including the sys/stropts.h system header file. The prototype for the putpmsg(2s) system call is as follows:

int putpmsg(int fildes, const struct strbuf *ctlptr,
            const struct strbuf *dataptr, int band, int flags);

The arguments to putpmsg(2s) are interpreted the same as those for putmsg(2s) as described in putmsg(2s) with the exception of the band and flags arguments.

The band argument provides a band number to be placed in the b_band member of the first message block of the resulting message. band can only be non-zero if the message to be generated is a normal message.

The flags argument is interpreted differently by putpmsg(2s): it can have values ‘MSG_BAND’ or ‘MSG_HIPRI’, but these are equivalent to the ‘0’ and ‘RS_HIPRI’ flags for putmsg(2s).

Under OpenSS7, putmsg(2s) is implemented as a library call to putpmsg(2s). This is possible because the call:

putmsg(fildes, ctlptr, dataptr, flags);

is equivalent to:

putpmsg(fildes, ctlptr, dataptr, 0, flags);

For additional details, see the putpmsg(2s) or putpmsg(2p) reference page.


5.2.2.4 getpmsg(2s)

getpmsg(2s) is similar to getmsg(2s), but provides the additional ability to specify the queue priority band (b_band) of the retrieved message. The prototype for the getpmsg(2s) system call is exposed by including the sys/stropts.h system header file. The prototype for the getpmsg(2s) system call is as follows:

int getpmsg(int fildes, struct strbuf *ctlptr, struct strbuf *dataptr,
            int *bandp, int *flagsp);

The arguments to getpmsg(2s) are interpreted the same as those for getmsg(2s) as described in getmsg(2s), with the exception of the bandp argument and the interpretation of the flags word pointed to by flagsp.

The bandp argument points to a band number that, on call, specifies a criterion for selecting the band of the retrieved message and, on successful return, contains the band number of the retrieved message. The integer pointed to by flagsp can take on values as follows:

MSG_ANY

Only specified on call. Specifies that the first available message is to be retrieved, regardless of priority or band.

MSG_BAND

On call, specifies that an ordinary message of the band pointed to by bandp or greater is to be retrieved. On return, indicates that an ordinary message was retrieved of the band returned in bandp.

MSG_HIPRI

On call, specifies that a high priority message is to be retrieved. On return, indicates that a high priority message was retrieved.

On call, bandp is ignored unless flagsp specifies ‘MSG_BAND’. When ‘MSG_BAND’ is specified, bandp specifies the minimum band number of the message to be retrieved. On return, bandp indicates the band number (b_band) of the retrieved message, or ‘0’ if the retrieved message was a high priority message.

Under OpenSS7, getmsg(2s) is implemented as a library call to getpmsg(2s). This is possible because the calls:

int flags = 0;
getmsg(fildes, ctlptr, dataptr, &flags);

int flags = RS_HIPRI;
getmsg(fildes, ctlptr, dataptr, &flags);

are equivalent to:

int band = 0;
int flags = MSG_ANY;
getpmsg(fildes, ctlptr, dataptr, &band, &flags);

int band = 0;
int flags = MSG_HIPRI;
getpmsg(fildes, ctlptr, dataptr, &band, &flags);
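The equivalence above amounts to a small translation of the getmsg(2s) flags word into a getpmsg(2s) band and flags pair, which can be sketched as follows. The constant values here are illustrative placeholders only; the real values come from sys/stropts.h:

```c
/* Illustrative placeholder values; real ones come from <sys/stropts.h>. */
enum { RS_HIPRI = 1, MSG_ANY = 0, MSG_HIPRI = 1 };

/* Translate a getmsg(2s) flags word into getpmsg(2s) arguments:
 * 0 becomes (band 0, MSG_ANY); RS_HIPRI becomes (band 0, MSG_HIPRI). */
static void getmsg_to_getpmsg(int flags, int *bandp, int *flagsp)
{
        *bandp = 0;                     /* band is unused for these calls */
        *flagsp = (flags == RS_HIPRI) ? MSG_HIPRI : MSG_ANY;
}
```

A library implementation of getmsg(2s) in terms of getpmsg(2s) would perform this translation before issuing the call, and translate the returned flags back.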

For additional details, see the getpmsg(2s) or getpmsg(2p) reference page.


5.2.3 Control of Stream Head Processing

Stream head message processing can be controlled by the user level process, or by a module or driver within the Stream.

Modules and drivers can control Stream head processing using the M_SETOPTS message. At any time, a module or driver can issue an M_SETOPTS message upstream. The M_SETOPTS message contains a stroptions(9) structure (see Data Structures) specifying which Stream head characteristics to alter in the read-side queue of the Stream head (including q_hiwat, q_lowat, q_minpsz and q_maxpsz); however, of interest to the current discussion are the read and write options associated with the Stream head.

User level processes can also alter the read and write options associated with the Stream head. User level processes use the I_SRDOPT(7) (see streamio(7)), I_GRDOPT(7) (see streamio(7)), I_SWROPT(7) (see streamio(7)) and I_GWROPT(7) (see streamio(7)) ioctl(2s) commands to achieve the same purpose as the M_SETOPTS message used by modules and drivers.


5.2.3.1 Read Options

Read options are altered by a user level process using the I_SRDOPT(7) (see streamio(7)) and I_GRDOPT(7) (see streamio(7)) ioctl(2s) commands; or altered by a module or driver using the SO_READOPT flag and so_readopt member of the stroptions(9) data structure contained in an M_SETOPTS message passed upstream.

Two flags, each selected from two sets of flags, can be set in this manner. The two sets of flags are as follows:


5.2.3.2 Read Mode

The read mode affects how the read(2s) and readv(2s) system calls treat message boundaries. One read mode can be selected from the following modes:

RNORM

byte-stream mode. This is the default read mode. This is the normal byte-stream mode where message boundaries are ignored. read(2s) and readv(2s) return data until the read count has been satisfied or a zero length message is received.

RMSGD

message non-discard mode. The read(2s) and readv(2s) system calls will return when either the count is satisfied, a zero length message is received, or a message boundary is encountered. If there is any data left in a message after the read count has been satisfied, the message is placed back on the Stream head read queue. The data will be read on a subsequent read(2s) or readv(2s) call.

RMSGN

message discard mode. Similar to RMSGD mode, above, but data that remains in a message after the read count has been satisfied is discarded.

RFILL

message fill mode. Similar to RNORM but requests that the Stream head fill a buffer completely before returning to the application. This is used in conjunction with a cooperating module and M_READ messages.42


5.2.3.3 Read Protocol

The read protocol affects how the read(2s) and readv(2s) system calls treat the control part of a message. One read protocol can be selected from the following protocols:43

RPROTNORM

fail read when control part present. Fail read(2s) with [EBADMSG] if a message containing a control part is at the front of the Stream head read queue. Otherwise, the message is read as normal. This is the default setting for new Stream heads.44

RPROTDAT

deliver control part of a message as data. The control part of a message is prepended to the data part and delivered.45

RPROTDIS

discard control part of message, delivering only any data part. The control part of the message is discarded and the data part is processed.46

RPROTCOMPRESS

compress like data.47

Note that, although all modes terminate the read on a zero-length message, POSIX requires that zero only be returned from read(2s) when the requested length is zero or an end of file (M_HANGUP) has occurred. Therefore, OpenSS7 only returns on a zero-length message if some data has been read already.


5.2.3.4 Write Options

No mechanism is provided to permit a write(2s) system call to generate either a M_PROTO or M_PCPROTO message. The write(2s) system call will only generate one or more M_DATA messages.

Write options are altered by a user level process using the I_SWROPT(7) (see streamio(7)) and I_GWROPT(7) (see streamio(7)) ioctl(2s) commands. It is not possible for a module or driver to affect these options with the M_SETOPTS message.

SNDZERO

Permits the sending of a zero-length message downstream when a write(2s) of zero length is issued. Without this option being set, write(2s) will succeed and return ‘0’ if a zero-length write(2s) is issued, but no zero-length message will be generated or sent. This option is the default for regular Streams, but is not set by default for STREAMS-based pipes.

SNDPIPE

Issues a {SIGPIPE} signal to the caller of write(2s) if the caller attempts to write to a Stream that has received a hangup (M_HANGUP) or an error (M_ERROR). When not set, {SIGPIPE} will not be signalled. This option is the default for STREAMS-based pipes but is not set by default for regular Streams.

SNDHOLD

Requests that the Stream head hold messages temporarily in an attempt to coalesce smaller messages into larger ones for efficiency. This feature is largely deprecated, but is supported by OpenSS7. When not set (as is the default), messages are sent immediately. This option is not set by default for any Stream.


5.2.3.5 Write Offset

A write offset is provided as an option to allow for the reservation of bytes at the beginning of the M_DATA message resulting from a call to the write(2s) system call.

The write offset can be altered by a module or driver using the SO_WROFF flag and so_wroff member of the stroptions(9) data structure contained in an M_SETOPTS message passed upstream. It is not possible for a user level process to alter the write offset using any streamio(7) command.

The write offset associated with a Stream head determines the amount of space that the Stream head will attempt to reserve at the beginning of the initial M_DATA message generated in response to the write(2s) system call. The purpose of a write offset is to permit modules and drivers to request that bytes at the beginning of downstream messages be reserved to permit, for example, the addition of protocol headers to a message as it passes, without the need to allocate additional message blocks and prepend them.

The write offset, however, is advisory to the Stream head, and if it cannot include the offset, an M_DATA message with no offset may still be generated. It is the responsibility of the module or driver to ensure that sufficient bytes are reserved at the start of a message before attempting to use them.
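A module in the sketch below checks for reserved headroom before prepending a header in place. The structures are simplified stand-ins for the real msgb(9)/datab(9), and the fallback path (allocating a new block with allocb(9) and linking it with b_cont) is only indicated in a comment:

```c
#include <string.h>

/* Simplified stand-ins; only the members used here. */
typedef struct datab {
        unsigned char *db_base;         /* first usable byte of the buffer */
} dblk_t;

typedef struct msgb {
        struct datab *b_datap;
        unsigned char *b_rptr;
        unsigned char *b_wptr;
} mblk_t;

/* Prepend hdr in place when the write offset left enough headroom
 * between db_base and b_rptr. Returns nonzero on success; a real
 * module would otherwise allocate a fresh block with allocb(9)
 * and link the original to it with b_cont. */
static int prepend_hdr(mblk_t *mp, const unsigned char *hdr, size_t hlen)
{
        if ((size_t)(mp->b_rptr - mp->b_datap->db_base) < hlen)
                return 0;               /* not enough headroom reserved */
        mp->b_rptr -= hlen;
        memcpy(mp->b_rptr, hdr, hlen);
        return 1;
}
```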


5.3 Queues and Priority

Each queue in a Stream has associated with it a message queue that consists of a doubly linked list of message blocks. Messages are normally placed onto a message queue by the queue’s put procedure, and removed by the service procedure. Messages will accumulate in the message queue whenever the rate at which messages are placed onto the message queue by the put procedure exceeds the rate at which they are removed by the service procedure. The service procedure can become blocked for a number of reasons: the STREAMS scheduler is delayed in invoking the service procedure due to higher priority system tasks; the service procedure is awaiting a message block necessary to complete its processing of a message; or the service procedure is blocked by flow control forward in the Stream.

When a queue service procedure runs, it takes messages off the head of the message queue in the order in which they appear in the queue. Messages are queued according to their priority: high priority messages appear first, followed by priority messages of descending band number, followed by normal messages in band zero. Within a band, messages are processed in the order in which they arrived at the queue (that is, on a First-In-First-Out (FIFO) basis). High priority messages are also processed in the order in which they arrived at the queue. This ordering within the queue is illustrated in Figure 22.

Message Ordering on a Queue

Figure 22. Message Ordering on a Queue

When a message is placed on a queue (e.g., by putq(9)), it is placed on the queue behind messages of the same priority. High priority messages are not subjected to flow control. Priority messages will affect the flow control parameters in the qband(9) structure associated with the band. Normal messages will affect the flow control parameters in the queue(9) structure. Message priorities range from ‘0’ to ‘255’, where ‘0’ is the lowest queueing priority and ‘255’ the highest. High priority messages are considered to be of greater priority than all other messages.
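The queueing rule (high priority first, then descending band number, FIFO within a band) can be sketched with a simple singly linked list. This illustrates the ordering only; it is not the real putq(9), and the hipri flag stands in for a high-priority db_type:

```c
#include <stddef.h>

/* Simplified message: queue linkage, band, and a high-priority flag. */
typedef struct msgb {
        struct msgb *b_next;
        unsigned char b_band;
        int hipri;                      /* stands in for a high-priority db_type */
} mblk_t;

/* Ordering rank: high priority outranks every band (bands are 0..255). */
static int rank(const mblk_t *mp)
{
        return mp->hipri ? 256 : mp->b_band;
}

/* putq-style insertion: behind all messages of equal or higher rank,
 * which preserves FIFO order within a band. Returns the new head. */
static mblk_t *enqueue(mblk_t *head, mblk_t *mp)
{
        mblk_t **pp = &head;

        while (*pp != NULL && rank(*pp) >= rank(mp))
                pp = &(*pp)->b_next;
        mp->b_next = *pp;
        *pp = mp;
        return head;
}
```

Enqueuing a band-0 message, a band-1 message, a high priority message, and a second band-1 message (in that arrival order) yields the queue: high priority, band 1 (first arrival), band 1 (second arrival), band 0.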

Bands can be used for any purpose required by a service interface. For example, simple Expedited Data implementation can be accomplished by using one band in addition to normal messages, band ‘1’. This is illustrated in Figure 23.

Message Ordering with One Priority Band

Figure 23. Message Ordering with One Priority Band

High priority messages are considered to be of greatest priority and are not subjected to flow control. High priority messages are a rare occurrence on the typical Stream, and the Stream head only permits one high priority message (M_PCPROTO) to be outstanding for a user. If a high priority message arrives at the Stream head while one is already waiting to be read by the user, the message is discarded. High priority messages are typically handled directly from a queue’s put procedure, but they may also be queued to the message queue. When queued, a high priority message will always cause the service procedure of the queue (if any) to be scheduled for execution by the STREAMS scheduler. When a service procedure runs, and a message is retrieved from the message queue (e.g., with getq(9)), high priority messages will always be retrieved first. High priority messages must be acted upon immediately by a service procedure; it is not possible to place a high priority message back on a queue with putbq(9).


5.3.1 Queue Priority Utilities

The following STREAMS utilities are provided to module and driver writers for use in put and service procedures. These utilities assist with handling flow control within a Stream.

flushq(9)
flushband(9)

These utilities provide the ability to flush specific messages from a message queue. They are discussed under Flush Handling, and under Utilities. These utilities are also described in the corresponding manual page.

canput(9)
bcanput(9)
canputnext(9)
bcanputnext(9)

These utilities provide the ability to test the current or next queue for a flow control condition for normal (band zero) messages or priority messages within a message band. They are discussed under Flow Control, and under Utilities. These utilities are also described in the corresponding manual page.

strqset(9)
strqget(9)

These utilities provide the ability to examine and modify flow control parameters associated with a queue (queue(9)) or queue band (qband(9)). They are discussed below, and under Utilities. These utilities are also described in the corresponding manual page.

The strqget(9) and strqset(9) STREAMS utilities are provided to access and modify members of the queue(9) and qband(9) data structures. In general, the specific members of these data structures should not be accessed directly by the module writer. This restriction is necessary for several reasons:

  • The size and format of the queue(9) and qband(9) structures might change, breaking binary modules compiled against the older definitions. strqget(9) and strqset(9) provide structure independent access to these members.
  • On Symmetric Multi-Processing (SMP) architectures, it may be necessary to protect access to a member of these structures to guarantee atomicity of operations. strqget(9) and strqset(9) provide necessary locking on SMP architectures.

5.3.1.1 strqget(9)

A declaration for the strqget(9) utility is exposed by including the sys/stream.h kernel header file. The prototype is as follows:

int strqget(queue_t *q, qfields_t what, unsigned char band, long *val);

Where the arguments are interpreted as follows:

q

Specifies the queue(9) structure (and indirectly the qband(9) structure) from which to retrieve a member.

what

Specifies which member to retrieve. Specific values for various members are described below.

band

When zero, specifies that the member is to be retrieved from the queue(9) structure specified by q; when non-zero, the band number of the qband(9) structure from which to retrieve the member.

val

Points to a long value into which the result is to be placed. All results are converted to a long before being written to this location.

The qfields_t(9) enumeration is defined as follows:

typedef enum qfields {
        QHIWAT,         /* hi water mark */
        QLOWAT,         /* lo water mark */
        QMAXPSZ,        /* max packet size */
        QMINPSZ,        /* min packet size */
        QCOUNT,         /* count */
        QFIRST,         /* first message in queue */
        QLAST,          /* last message in queue */
        QFLAG,          /* state */
        QBAD,           /* last (AIX and SUPER-UX) */
} qfields_t;

Each value of the qfields_t enumeration specifies a different member to be set by strqset(9) or retrieved by strqget(9). When band is zero, the member to be set or retrieved is the corresponding member of the queue(9) structure indicated by q. When band is non-zero, the member to be set or retrieved is the corresponding member of the qband(9) structure, associated with q, of band number band.

QHIWAT     Set or return the high water mark (q_hiwat or qb_hiwat).
QLOWAT     Set or return the low water mark (q_lowat or qb_lowat).
QMAXPSZ    Set or return the maximum packet size (q_maxpsz or qb_maxpsz).
QMINPSZ    Set or return the minimum packet size (q_minpsz or qb_minpsz).
QCOUNT     Return the count of bytes queued (q_count or qb_count). This field is only valid for strqget(9).
QFIRST     Return a pointer to the first message queued (q_first or qb_first). This field is only valid for strqget(9).
QLAST      Return a pointer to the last message queued (q_last or qb_last). This field is only valid for strqget(9).
QFLAG      Return the flags word (q_flag or qb_flag). This field is only valid for strqget(9).

Additional information is given under Utilities, and provided in the strqget(9) manual page.


5.3.1.2 strqset(9)

A declaration for the strqset(9) utility is exposed by including the sys/stream.h kernel header file. The prototype is as follows:

int strqset(queue_t *q, qfields_t what, unsigned char band, long val);

Where the arguments are interpreted as follows:

q

Specifies the queue(9) structure (and indirectly the qband(9) structure) to which to write a member.

what

Specifies which member to write. Specific values for various members are described above under strqget(9).

band

When zero, specifies that the member is to be written to the queue(9) structure specified by q; when non-zero, the band number of the qband(9) structure to which to write the member.

val

Specifies the long value to write to the member. All values are converted to a long to be passed in this argument.

Additional information is given under Utilities, and provided in the strqset(9) manual page.


5.3.2 Queue Priority Commands

Aside from the putpmsg(2s) and getpmsg(2s) system calls, a number of streamio(7) commands associated with queueing and priorities can be issued by a user level process using the ioctl(2s) system call. The input output controls that accept a queue band or indicate a queue band event are as follows:

I_FLUSHBAND(7) (see streamio(7))

Flushes the Stream for a specified band. This ioctl(2s) command is equivalent to the flushq(9) and flushband(9) utilities available to modules and drivers. It is discussed under Flush Handling.

I_CKBAND(7) (see streamio(7))

Checks whether a message is available to be read from a specified queue band. It is discussed below.

I_GETBAND(7) (see streamio(7))

Gets the priority band associated with the next message on the Stream head read queue. It is discussed below.

I_CANPUT(7) (see streamio(7))

Checks whether messages can be written to a specified queue band. This ioctl(2s) command is equivalent to the canput(9) and bcanput(9) utilities available to modules and drivers. It is discussed under Flow Control.

I_ATMARK(7) (see streamio(7))

This ioctl(2s) command supports Transmission Control Protocol (TCP) urgent data in a byte-stream. It indicates when a marked message has arrived at the Stream head. It is discussed below.

I_GETSIG(7) (see streamio(7))
I_SETSIG(7) (see streamio(7))

Sets the mask of events for which the Stream head will send a calling process a {SIGPOLL} or {SIGURG} signal. Events include S_RDBAND, S_WRBAND and S_BANDURG. This ioctl(2s) command is discussed under Input and Output Polling.

The streamio(7) input output controls in the following sections are all of the form:

int ioctl(int fildes, int cmd, long arg);

5.3.2.1 I_FLUSHBAND

Flushes the Stream for a specified band. This ioctl(2s) command is equivalent to the flushq(9) and flushband(9) utilities available to modules and drivers. It is discussed under Flush Handling.

fildes    the Stream for which the command is issued;
cmd       is ‘I_FLUSHBAND’; and,
arg       is a pointer to a bandinfo(9) structure.

The bandinfo(9) structure is exposed by including the sys/stropts.h system header file. Its format and members are as follows:

struct bandinfo {
        unsigned char bi_pri;
        int bi_flag;
};

where,

bi_pri     the priority band to flush;
bi_flag    how to flush: one of FLUSHR, FLUSHW or FLUSHRW.

5.3.2.2 I_CKBAND

Checks whether a message is available to be read from a specified queue band.

fildes    the Stream for which the command is issued;
cmd       is ‘I_CKBAND’; and,
arg       contains the band number for which to check for an available message.

5.3.2.3 I_GETBAND

Gets the priority band associated with the next message on the Stream head read queue.

fildes    the Stream for which the command is issued;
cmd       is ‘I_GETBAND’; and,
arg       is a pointer to an int into which to receive the band number.

5.3.2.4 I_CANPUT

The I_CANPUT(7) (see streamio(7)) ioctl(2s) command has the following form:

int ioctl(int fildes, int cmd, long arg);

where,

fildes    the Stream for which the command is issued;
cmd       is ‘I_CANPUT’; and,
arg       contains the band number for which to check for flow control.

Checks whether messages can be written to the queue band specified by arg. arg is an integer which contains the queue band to test for flow control. arg can also have the following value:

ANYBAND

When this value is specified, instead of testing a specified band, I_CANPUT(7) (see streamio(7)) tests whether any (existing) band is writable.

Upon success, the I_CANPUT(7) (see streamio(7)) ioctl(2s) command returns zero (‘0’) or a positive integer. The I_CANPUT(7) (see streamio(7)) command returns false (‘0’) if the band cannot be written to (due to flow control), and returns true (‘1’) if the band is writable. Upon failure, the ioctl(2s) call returns ‘-1’ and sets errno(3) to an appropriate error number.

When the I_CANPUT(7) (see streamio(7)) ioctl(2s) command fails, it returns ‘-1’ and sets errno(3) to one of the following errors:

[EINVAL]

arg is outside the range ‘0’ to ‘255’ and is not ANYBAND, and so does not represent a valid priority band.

[EIO]

fildes refers to a Stream that is closing.

[ENXIO]

fildes refers to a Stream that has received a hangup.

[EPIPE]

fildes refers to a STREAMS-based pipe and the other end of the pipe is closed.

[ESTRPIPE]

fildes refers to a STREAMS-based pipe and a write operation was attempted with no readers at the other end, or a read operation was attempted, the pipe is empty, and there are no writers at the other end.

[EINVAL]

fildes refers to a Stream that is linked under a multiplexing driver. If a Stream is linked under a multiplexing driver, all ioctl(2s) commands other than I_UNLINK(7) (see streamio(7)) or I_PUNLINK(7) (see streamio(7)) will return [EINVAL].

Any error received in an M_ERROR message indicating a persistent write error for the Stream will cause I_CANPUT(7) (see streamio(7)) to fail, and the write error will be returned in errno(3).

Any error number returned in errno(3) in response to a general ioctl(2s) failure can also be returned in response to I_CANPUT(7) (see streamio(7)). See also ioctl(2p).

OpenSS7 implements the special flag, ANYBAND, that can be used as an arg value instead of a band number to check whether any existing band is writable. This is similar to the POLLWRBAND flag to poll(2s). ANYBAND uses the otherwise invalid band number ‘-1’. Portable STREAMS application programs will not use the ANYBAND flag and will not rely upon I_CANPUT(7) (see streamio(7)) to generate an error if passed ‘-1’ as an invalid argument.


5.3.2.5 I_ATMARK

The I_ATMARK(7) (see streamio(7)) ioctl(2s) command has the following form:

int ioctl(int fildes, int cmd, long arg);

where,

fildes    the Stream for which the command is issued;
cmd       is ‘I_ATMARK’; and,
arg       specifies a criterion for checking for a mark.

The I_ATMARK(7) (see streamio(7)) command informs the user if the current message on the Stream head read queue is marked by a downstream module or driver. The arg argument determines how the checking is done when there are multiple marked messages on the Stream head read queue. The possible values of the arg argument are as follows:

ANYMARK

Determine if the message at the head of the Stream head read queue is marked by a downstream module or driver.

LASTMARK

Determine if the message at the head of the Stream head read queue is the last message that is marked on the queue by a downstream module or driver.

The bitwise inclusive OR of the flags ANYMARK and LASTMARK is permitted.

STREAMS message blocks that have the MSGMARK flag set in the b_flag member of the msgb(9) structure are marked messages. Solaris also provides the MSGMARKNET and MSGNOTMARKNET flags. The use of these flags is not very clear, but OpenSS7 could use them in the read(2s) logic to determine whether the next message is marked without removing the message from the queue.

When read(2s) encounters a marked message and data has already been read, the read terminates with the amount of data read. The resulting short read is an indication to the user that a marked message could exist on the read queue. (Short reads can also result from zero-byte data, or from a delimited message: one with the MSGDELIM flag set in b_flag.) When a short read occurs, the user should test for a marked message using the ANYMARK flag to the I_ATMARK(7) (see streamio(7)) ioctl(2s) command. A subsequent read(2s) will consume the marked message. Whether further marked messages follow can be determined by using the LASTMARK flag to the I_ATMARK(7) (see streamio(7)) ioctl(2s) command.

The b_flag member of the msgb(9) structure can have the flag, MSGMARK, set that allows a module or driver to mark a message sent to the Stream head. This is used to support tcp(4)’s ability to indicate the last byte of out-of-band data. Once marked, a message sent to the Stream head causes the Stream head to remember the message. A user may check to see if the message on the front of the Stream head read queue is marked, and whether it is the last marked message on the queue, with the I_ATMARK(7) (see streamio(7)) ioctl(2s) command. If a user is reading data from the Stream head and there are multiple messages on the Stream head read queue, and one of those messages is marked, read(2s) terminates when it reaches the marked message and returns the data only up to that marked message. The rest of the data may be obtained with successive reads. ANYMARK indicates that the user merely wants to check if the message at the head of the Stream head read queue is marked. LASTMARK indicates that the user wants to see if the message is the last one marked on the queue.

Upon success, the I_ATMARK(7) (see streamio(7)) ioctl(2s) command returns zero (‘0’) or a positive integer. The I_ATMARK(7) (see streamio(7)) operation returns a value of true (‘1’) if the marking criteria is met. It returns false (‘0’) if the marking criteria is not met. Upon failure, the I_ATMARK(7) (see streamio(7)) ioctl(2s) command returns ‘-1’ and sets errno(3) to an appropriate error number.

When the I_ATMARK(7) (see streamio(7)) ioctl(2s) command fails, it returns ‘-1’ and sets errno(3) to one of the following errors:

[EINVAL]

arg was other than ANYMARK or LASTMARK, or a bitwise-OR of the two.

Any error number returned in errno(3) in response to a general ioctl(2s) failure can also be returned in response to I_ATMARK(7) (see streamio(7)). See also ioctl(2p).


5.3.2.6 I_GETSIG

Returns the mask of events for which the Stream head will send a calling process a {SIGPOLL} or {SIGURG} signal, as previously set with I_SETSIG(7). Events include S_RDBAND, S_WRBAND and S_BANDURG. This ioctl(2s) command is discussed under Input and Output Polling.

fildes    the Stream for which the command is issued;
cmd       is ‘I_GETSIG’; and,
arg       is a pointer to an int to contain the retrieved event flags.

Event flags can include the following band related events:

S_RDBAND     a message of non-zero priority band has been placed on the Stream head read queue.
S_WRBAND     a priority band that was previously flow controlled has become available for writing (i.e., is no longer flow controlled).
S_BANDURG    a modifier to S_RDBAND to generate {SIGURG} instead of {SIGPOLL} in response to the event.

5.3.2.7 I_SETSIG

Sets the mask of events for which the Stream head will send a calling process a {SIGPOLL} or {SIGURG} signal. Events include S_RDBAND, S_WRBAND and S_BANDURG. This ioctl(2s) command is discussed under Input and Output Polling.

fildes    the Stream for which the command is issued;
cmd       is ‘I_SETSIG’; and,
arg       is an integer value that contains the event flags.

Event flags can include the following band related events:

S_RDBAND     a message of non-zero priority band has been placed on the Stream head read queue.
S_WRBAND     a priority band that was previously flow controlled has become available for writing (i.e., is no longer flow controlled).
S_BANDURG    a modifier to S_RDBAND to generate {SIGURG} instead of {SIGPOLL} in response to the event.

5.3.3 The queue Structure

The queue(9) structure is exposed by including sys/stream.h.

typedef struct queue {
        struct qinit *q_qinfo;          /* info structure for the queue */
        struct msgb *q_first;           /* head of queued messages */
        struct msgb *q_last;            /* tail of queued messages */
        struct queue *q_next;           /* next queue in this stream */
        struct queue *q_link;           /* next queue for scheduling */
        void *q_ptr;                    /* private data pointer */
        size_t q_count;                 /* number of bytes in queue */
        unsigned long q_flag;           /* queue state */
        ssize_t q_minpsz;               /* min packet size accepted */
        ssize_t q_maxpsz;               /* max packet size accepted */
        size_t q_hiwat;                 /* hi water mark for flow control */
        size_t q_lowat;                 /* lo water mark for flow control */
        struct qband *q_bandp;          /* band's flow-control information */
        unsigned char q_nband;          /* number of priority bands */
        unsigned char q_blocked;        /* number of bands flow controlled */
        unsigned char qpad1[2];         /* reserved for future use */
        /* Linux fast-STREAMS specific members */
        ssize_t q_msgs;                 /* messages on queue, Solaris counts
                                           mblks, we count msgs */
        rwlock_t q_lock;                /* lock for this queue structure */
        int (*q_ftmsg) (mblk_t *);      /* message filter ala AIX */
} queue_t;

The following members are defined in SVR 4.2:

q_qinfo      points to the qinit(9) structure associated with this queue;
q_first      first message on the message queue (NULL if message queue is empty);
q_last       last message on the message queue (NULL if message queue is empty);
q_next       next queue in the Stream;
q_link       next queue in the STREAMS scheduler list;
q_ptr        pointer to module/driver private data;
q_count      number of bytes of messages on the queue;
q_flag       queue flag bits (current state of the queue);
q_minpsz     minimum packet size accepted;
q_maxpsz     maximum packet size accepted;
q_hiwat      high water mark (queued bytes) for flow control;
q_lowat      low water mark (queued bytes) for flow control;
q_bandp      pointer to qband(9) structures associated with this queue;
q_nband      the number of qband(9) structures associated with this queue;
q_blocked    the number of currently blocked (flow controlled) queue bands;
qpad1        reserved for future use.

The following members are not defined in SVR 4.2 and are OpenSS7 specific:

q_msgs     number of messages on the queue;
q_lock     queue structure lock; and,
q_ftmsg    message filter ala AIX.

5.3.3.1 Using queue Information


5.3.3.2 queue Flags

#define QENAB           (1<< 0) /* queue is enabled to run */
#define QWANTR          (1<< 1) /* flow controlled forward */
#define QWANTW          (1<< 2) /* back-enable necessary */
#define QFULL           (1<< 3) /* queue is flow controlled */
#define QREADR          (1<< 4) /* this is the read queue */
#define QUSE            (1<< 5) /* queue being allocated */
#define QNOENB          (1<< 6) /* do not enable with putq */
#define QUP             (1<< 7) /* uni-processor emulation */
#define QBACK           (1<< 8) /* the queue has been back enabled */
#define QOLD            (1<< 9) /* module supports old style open/close */
#define QHLIST          (1<<10) /* stream head is on scan list */
#define QTOENAB         (1<<11) /* to be enabled */
#define QSYNCH          (1<<12) /* flag for queue sync */
#define QSAFE           (1<<13) /* safe callbacks needed */
#define QWELDED         (1<<14) /* flags for welded queues */
#define QSVCBUSY        (1<<15) /* service procedure running */
#define QWCLOSE         (1<<16) /* q in close wait */
#define QPROCS          (1<<17) /* putp, srvp disabled */

The following queue(9) flags are defined by SVR 4.2:

QENAB      queue is enabled to run
QWANTR     flow controlled forward
QWANTW     back-enable necessary
QFULL      queue is flow controlled
QREADR     this is the read queue
QUSE       queue being allocated
QNOENB     do not enable with putq
QBACK      the queue has been back enabled
QOLD       module supports old style open/close
QHLIST     stream head is on scan list

The following are not defined by SVR 4.2, but are used by OpenSS7 and other SVR 4.2-based implementations:

QUP         uni-processor emulation
QTOENAB     to be enabled
QSYNCH      flag for queue sync
QSAFE       safe callbacks needed
QWELDED     flags for welded queues
QSVCBUSY    service procedure running
QWCLOSE     q in close wait
QPROCS      putp, srvp disabled

5.3.4 The qband Structure

The qband(9) structure and qband_t(9) type are exposed by including sys/stream.h. The structure has the following format and members:

typedef struct qband {
        struct qband *qb_next;          /* next (lower) priority band */
        size_t qb_count;                /* number of bytes queued */
        struct msgb *qb_first;          /* first queue message in this band */
        struct msgb *qb_last;           /* last queued message in this band */
        size_t qb_hiwat;                /* hi water mark for flow control */
        size_t qb_lowat;                /* lo water mark for flow control */
        unsigned long qb_flag;          /* flags */
        long qb_pad1;                   /* OSF: reserved */
} qband_t;

#define qb_msgs qb_pad1

Where the members are interpreted as follows:

qb_next     points to the next (lower) priority band;
qb_count    number of bytes queued to this band in the message queue;
qb_first    the first message queued in this band (NULL if band is empty);
qb_last     the last message queued in this band (NULL if band is empty);
qb_hiwat    high water mark (in bytes queued) for this band;
qb_lowat    low water mark (in bytes queued) for this band;
qb_flag     queue band flags (see below);
qb_pad1     reserved for future use; and,
qb_msgs     same as qb_pad1: contains the number of messages queued to the band.

Including sys/stream.h also exposes the following constants for use with the qb_flag member of the qband(9) structure:

QB_FULL     when set, indicates that the band is considered full;
QB_WANTW    when set, indicates that a preceding queue wants to write to this band; and,
QB_BACK     when set, indicates that the queue needs to be back-enabled.

5.3.4.1 Using qband Information


5.3.5 Message Processing


5.3.5.1 Flow Control


5.3.6 Scheduling


5.3.6.1 Flow Control Variables


5.3.6.2 Flow Control Procedures


5.3.6.3 The STREAMS Scheduler


5.4 Service Interfaces


5.4.1 Service Interface Benefits


5.4.2 Service Interface Library Example


5.4.2.1 Accessing the Service Provider


5.4.2.2 Closing the Service Provider


5.4.2.3 Sending Data to the Service Provider


5.4.2.4 Receiving Data


5.4.2.5 Module Service Interface Example


5.5 Message Allocation


5.5.1 Recovering From No Buffers


5.6 Extended Buffers


6 Polling


6.1 Input and Output Polling


6.2 Controlling Terminal


7 Modules and Drivers


7.1 Environment


7.2 Input-Output Control


7.3 Flush Handling


7.4 Driver-Kernel Interface


7.5 Design Guidelines


8 Modules


8.1 Module


8.2 Module Flow Control


8.3 Module Design Guidelines


9 Drivers


9.1 External Device Numbers


9.2 Internal Device Numbers


9.3 spec File System


9.4 Clone Device


9.5 Named STREAMS Device


9.6 Driver


9.7 Cloning


9.8 Loop-Around Driver


9.9 Driver Design Guidelines


10 Multiplexing


10.1 Multiplexors


10.2 Connecting and Disconnecting Lower Stream


10.3 Multiplexor Construction Example


10.4 Multiplexing Driver


10.5 Persistent Links


10.6 Multiplexing Driver Design Guidelines


11 Pipes and FIFOs


11.1 Pipes and FIFOs


11.2 Flushing Pipes and FIFOs


11.3 Named Streams


11.4 Unique Connections


12 Terminal Subsystem


12.1 Terminal Subsystem


12.2 Pseudo-Terminal Subsystem


13 Synchronization

This chapter describes how to multi-thread a STREAMS driver or module. It covers the necessary conversion topics so that new and existing STREAMS modules and drivers will run in a symmetric multi-processor kernel. This chapter primarily covers STREAMS-specific multiprocessor issues and techniques.

Linux is a fully SMP capable operating system able to make effective use of the available parallelism of the symmetric shared-memory multiprocessor computer. All kernel subsystems are multiprocessor safe: scheduler, virtual memory, file systems, block, character, STREAMS input and output, networking protocols and device drivers.

STREAMS in an MP environment introduces some new concepts and terminology as follows:

Thread

sequence of instructions executed within the context of a process

Lock

mechanism for restricting access to data structures

Single Threaded

restricting access to a single thread

Multi-Threaded

allowing two or more threads access

Multiprocessing

two or more CPUs concurrently executing the operating system

The Linux 2.6 and 3.x kernel is multi-threaded to make effective use of symmetric shared-memory multiprocessor computers. All parts of the kernel, including STREAMS modules and drivers, must ensure data integrity in a multiprocessing environment. For the most part, developers must ensure that concurrently running kernel threads do not attempt to manipulate the same data at the same time. The STREAMS framework provides multiprocessing Synchronization Levels, which allow the developer control over the level of concurrency permitted in a module. The SVR 4.2 MP DDI/DKI also provides locking mechanisms for protecting data.

Entry points (callouts) and callbacks in the OpenSS7 subsystem are of two types:

  1. Synchronous. These entry points (callouts) and callbacks are referenced against a STREAMS queue structure. That is, they are invoked using a STREAMS queue structure as an argument. These procedures are as follows:
    put(9s)
    srv(9s)
    qopen(9)
    qclose(9)
    qbufcall(9)
    qtimeout(9)
    mi_bufcall(9)
    putq(9)
    putbq(9)
    putnext(9)
    qreply(9)
  2. Asynchronous. These callbacks are not referenced against a STREAMS queue structure. That is, they are invoked without a specific STREAMS queue structure as an argument (known to STREAMS). These procedures are as follows:
    bufcall(9)
    esbbcall(9)
    timeout(9)
    esballoc(9) (free routine)

13.1 MT Configuration

SVR 4.2 MP specifies a synchronization mechanism that can be used during configuration of a STREAMS driver or module to specify the level of synchronization required by a module. The SVR 4 synchronization levels are as follows:

SQLVL_DEFAULT

Default level synchronization. Specifies that the module uses the default synchronization scheme. This is the same as specifying SQLVL_MODULE.

SQLVL_GLOBAL

Global (STREAMS scheduler) level synchronization. Specifies that all of STREAMS can be accessed by only one thread at the same time. The module is run with global synchronization. This means that only one STREAMS executive thread will be permitted to enter any module. This makes the entire STREAMS executive single-threaded and is useful primarily for debugging. This is the same as "Uniprocessor Emulation" on some systems, and reduces the STREAMS executive to running on a single processor at a time. This option should normally be used only for debugging.

SQLVL_ELSEWHERE

Module group level synchronization. Specifies that the module is run with synchronization within a group of modules. Only one thread of execution will be within the group of modules at a time. The group is separately specified as a character string name. This permits a group of modules to run single threaded as though they are running on a single processor, without interfering with the concurrency of other modules outside the group. This can be important for testing and for modules that implicitly share unprotected data structures.

SQLVL_MODULE

Module level synchronization. Specifies that all instances of a module can be accessed by only one thread at the same time. The module is run with synchronization at the module. Only one thread of execution will be permitted within the module. Where the module does not share data structures with other modules, this has a similar effect to running on a uniprocessor system. This is the default and works best for non-multiprocessor-safe modules written in accordance with STREAMS guidelines. This level is roughly equivalent to Solaris D_MTPERMOD perimeters.

SQLVL_QUEUEPAIR

Queue pair level synchronization. Specifies that each queue pair can be accessed by only one thread at the same time. Only one thread will be permitted to enter a given queue’s procedures within a given queue pair. Where the read and write side of the queue pair share the same private structure (‘q->q_ptr’), this provides multiprocessor protection of the common data structure for all synchronous entry points without an external lock. This level is roughly equivalent to Solaris D_MTQPAIR perimeters.

SQLVL_QUEUE

Queue level synchronization. Specifies that each queue can be accessed by only one thread at the same time. The module is run with synchronization at the queue. Only one thread of execution will be permitted to enter a given queue’s procedures, however, another thread will be permitted to enter procedures of the other queue in the queue pair. This is useful when the read and write side of a module are largely independent and do not require synchronization between sides of the queue pair. This level is roughly equivalent to Solaris D_MTPERQ perimeters.

SQLVL_NOP

No synchronization. Specifies that each queue can be accessed by more than one thread at the same time. The protection of internal data and of put(9s) and srv(9s) procedures against timeout(9) or bufcall(9) is done by the module or driver itself. This synchronization level should be used only for multiprocessor-efficient modules. This level is roughly equivalent to the Solaris D_MP flag.


13.2 Synchronous Entry Points

Synchronous Entry Points are those entry points into the STREAMS driver or module that will be synchronized according to the specified synchronization level.

put(9s)

Queue put procedure. If the module has any synchronization level other than SQLVL_NOP, the put procedure will be exclusive. Attempts to enter the put procedure while another thread is running within the synchronization level will result in the call being postponed until the thread currently in the synchronization level exits.

srv(9s)

Queue service procedure. If the module has any synchronization level other than SQLVL_NOP, the service procedure will be exclusive. Attempts to enter the service procedure while another thread is running within the synchronization level will result in the service procedure being postponed until the thread currently in the synchronization level exits.

qopen(9)

Queue open procedure. The queue open procedure is synchronous and exclusive before the call to qprocson(9) or, in any event, until return from the procedure. If the module has a synchronization level of global, elsewhere or per-module, the call to the qopen(9) procedure is exclusive.

qclose(9)

Queue close procedure. The queue close procedure is synchronous and exclusive after the call to qprocsoff(9) or, in any event, after return from the procedure. If the module has a synchronization level of global, elsewhere or per-module, the call to the qclose(9) procedure is exclusive.

qprocson(9)

Queue procedures on.

qprocsoff(9)

Queue procedures off.

freezestr(9)

Freeze stream.

unfreezestr(9)

Thaw stream.

qwriter(9)

Queue writer.


13.3 Synchronous Callbacks

Synchronous Callbacks are those callbacks into the STREAMS driver or module that will be synchronized according to the specified synchronization level. Synchronous callbacks are an extension to the UNIX System V Release 4.2 specifications of STREAMS. Synchronous callback extensions include Solaris extensions and AIX extensions.

These include:

qbufcall(9) – queue referenced buffer call
qtimeout(9) – queue referenced timeout
qunbufcall(9) – queue referenced buffer call cancel
quntimeout(9) – queue referenced timeout cancel
mi_bufcall(9) – queue referenced buffer call
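
A minimal sketch of how the queue-referenced callbacks might be used from within a module follows; the xx_priv structure, the xx_timeout function and the field names are illustrative only, not part of any real API:

```c
/* Hypothetical module private structure; xx_* names are illustrative. */
struct xx_priv {
	queue_t *rq;		/* read queue for this Stream */
	toid_t timer;		/* outstanding qtimeout(9) id, or 0 */
};

/* Runs inside the synchronization barrier of the queue passed to
 * qtimeout(9), so private data can be touched without extra locks. */
static void
xx_timeout(caddr_t arg)
{
	struct xx_priv *priv = (struct xx_priv *) arg;

	priv->timer = 0;
	qenable(priv->rq);
}

/* In a put or service procedure: schedule a 10 millisecond timer. */
priv->timer = qtimeout(q, &xx_timeout, (caddr_t) priv, drv_usectohz(10000));

/* In the close routine, before freeing priv: */
if (priv->timer != 0)
	quntimeout(q, priv->timer);
```

Because quntimeout(9) guarantees that the callback has run or been cancelled by the time it returns, the private structure can safely be freed immediately afterward.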

13.4 Synchronous Callouts

putnext(9)
qreply(9)

13.5 Asynchronous Entry Points


13.6 Asynchronous Callbacks

Asynchronous Callbacks are those callbacks into the STREAMS driver or module that will not be synchronized according to the specified synchronization level. Asynchronous callbacks are the basic UNIX System V Release 4.2 callbacks.


13.7 Asynchronous Callouts


13.8 STREAMS Framework Integrity

The STREAMS framework guarantees the integrity of the STREAMS scheduler and related data structures, such as the queue(9), msgb(9), and datab(9) structures, assuming that the module properly accesses global operating system data structures, utilities and facilities.

The q_next and q_ptr members of the queue(9) structure will not be modified by the system while a thread is actively executing within a synchronous entry point. The q_next member of the queue(9) structure could change while a thread is executing within an asynchronous entry point.

A STREAMS module or driver must not call another module’s put or service procedure directly. The STREAMS utilities putnext(9), put(9s) and others described in Utilities must be used to pass messages to another queue. Calling another STREAMS module or driver directly circumvents the MP-STREAMS framework.48

To make a STREAMS module or driver MP-SAFE requires that the integrity of private module data structures be protected by the module itself. The integrity of private module data structures can be maintained either by using the MP-STREAMS framework to control concurrency and synchronize access to private data structures, or by the use of private locks within the module, or a combination of the two.
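
As a sketch of the private-lock approach, a module that is otherwise fully concurrent might guard a module-global counter with its own spin lock; the xx_* names are illustrative, not part of any real API:

```c
/* Hypothetical private lock guarding module-global data. */
static spinlock_t xx_lock;	/* protects xx_hits */
static unsigned long xx_hits;

static streamscall int
xx_wput(queue_t *q, mblk_t *mp)
{
	unsigned long flags;

	spin_lock_irqsave(&xx_lock, flags);
	xx_hits++;		/* update shared private data under the lock */
	spin_unlock_irqrestore(&xx_lock, flags);

	putnext(q, mp);		/* lock released before leaving the module */
	return (0);
}
```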


13.9 MP Message Ordering

STREAMS guarantees the ordering of messages along a Stream if all the modules in the Stream preserve message ordering internally. This ordering guarantee applies only to messages that are sent along the same Stream and produced by the same source.

STREAMS does not guarantee that a message has been seen by the next put procedure by the time that putnext(9) or qreply(9) return. Under some circumstances, invocation of the next module’s put procedure might be deferred until after an exclusive thread leaves a synchronization boundary.

Regardless of STREAMS integrity protection or the presence of synchronization barriers, at most one thread will be executing a given module’s service procedure at any given time.


13.10 MP-UNSAFE Modules

STREAMS supports modules that are not MP-SAFE and that expect to run in a uniprocessor environment.

By default, all STREAMS modules and drivers are considered MP-UNSAFE unless configured into the system as MP-SAFE.

Unsafe drivers run with only minimal modification. Unsafe drivers are synchronized, by default, at the level SQLVL_MODULE, which implies that, at any time, only one processor in the entire system is executing the module’s STREAMS code. MP-UNSAFE modules might not gain any performance advantage from being run in a multiprocessor environment.

MP-UNSAFE modules that access data structures private to other STREAMS modules must be synchronized at a broader level of synchronization. All such cooperating modules must be run with synchronization at the level SQLVL_ELSEWHERE, with a synchronization queue that is shared across all the pertinent modules.

MP-UNSAFE modules that do not share data between Stream instances but do share Stream private data between the read and write put and service procedures can be synchronized at level SQLVL_QUEUEPAIR and will gain some advantage in the multiprocessor environment.

MP-UNSAFE modules that do not share data between Stream instances and do not share data between read-side and write-side put and service procedures, but do share data between the put and service procedures on the same side, can be synchronized at level SQLVL_QUEUE and will gain some advantage in the multiprocessor environment.

MP-UNSAFE modules that share data between Stream instances, but only in the open and close routines, can still assign SQLVL_QUEUEPAIR or SQLVL_QUEUE, provided that an outer barrier is also established using the Solaris®-style outer perimeter (with the D_MTOCEXCL flag).

13.10.1 MP-UNSAFE Open and Close Routines

MP-UNSAFE modules are still responsible for cancelling all outstanding callbacks in their qi_qclose procedure.

MP-UNSAFE modules that are synchronized at SQLVL_QUEUEPAIR or SQLVL_QUEUE, and that do not have an exclusive outer perimeter established with D_MTOCEXCL, must call qprocsoff(9) in the qi_qclose routine, in addition to cancelling all outstanding callbacks, before deallocating Stream private structures or altering q_ptr pointers.
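
The teardown ordering described above might be sketched as follows; xx_priv and its fields are illustrative names only:

```c
/* Sketch of a qi_qclose routine for a module at SQLVL_QUEUEPAIR
 * without D_MTOCEXCL; xx_* names are illustrative. */
static streamscall int
xx_qclose(queue_t *q, int oflag, cred_t *crp)
{
	struct xx_priv *priv = (struct xx_priv *) q->q_ptr;

	qprocsoff(q);			/* disable put/srv procedures first */
	if (priv->timer != 0)
		quntimeout(q, priv->timer);
	if (priv->bufcid != 0)
		qunbufcall(q, priv->bufcid);
	q->q_ptr = WR(q)->q_ptr = NULL;	/* invalidate only after the above */
	kmem_free(priv, sizeof(*priv));
	return (0);
}
```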

13.10.2 MP-UNSAFE Put and Service Procedures

13.10.3 MP-UNSAFE Interrupt Service Routines

MP-UNSAFE modules synchronized at synchronization level SQLVL_MODULE, SQLVL_ELSEWHERE, or SQLVL_GLOBAL are singly threaded within the STREAMS framework. However, interrupt service routines exist outside the STREAMS framework. Interrupt service routines that invoke STREAMS utilities will have execution of those utilities deferred until after all threads have left the synchronization barrier.

13.10.4 MP-UNSAFE Shared Data Structures

Modules that share data structure(s), and that are to be protected by STREAMS synchronization, must be configured at the same level of synchronization.

13.10.5 MP-UNSAFE Sleeping

An MP-UNSAFE module that must wait in its open or close procedure for a message from another STREAMS module must wait outside of all synchronization barriers; otherwise the responding thread might never be allowed to enter the synchronization barrier to invoke the module’s put or service procedure. Sleeping outside the synchronization barriers is accomplished by using qwait(9) or qwait_sig(9).

Modules using STREAMS synchronization barriers, either explicitly by configuration, or by default, must use qwait(9) and qwait_sig(9) instead of CV_WAIT(9) or CV_WAIT_SIG(9) from within qi_qopen and qi_qclose procedures.49
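
For example, an open routine that must wait for a protocol acknowledgement from downstream might loop as sketched below, assuming Solaris-style qwait_sig(9) semantics in which a zero return indicates interruption by a signal; priv, its state field and XX_ACKED are purely illustrative:

```c
/* Sleep outside the synchronization barrier while waiting for a
 * downstream reply; xx_* names are illustrative. */
while (priv->state != XX_ACKED) {
	if (qwait_sig(q) == 0) {
		/* interrupted by a signal: tear down and fail the open */
		qprocsoff(q);
		q->q_ptr = WR(q)->q_ptr = NULL;
		kmem_free(priv, sizeof(*priv));
		return (EINTR);
	}
}
```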


13.11 MP-SAFE Modules


13.11.1 MP Put and Service Procedures

The STREAMS utilities qprocson(9) and qprocsoff(9) enable and disable the put and service procedures of a queue pair. Prior to a call to qprocson(9) and after a call to qprocsoff(9), the module’s put and service procedures are disabled. Messages flow around the module as if it were not present in the Stream.

qprocson(9) must be called by the first open(2s) of a module, but only after allocation and initialization of any module resources or private data structures upon which the put and service procedures depend. qprocsoff(9) must be called by the close(2s) routine of a module before deallocating any resources on which the put and service procedures depend.

For example, it is typical for a module’s qi_qopen procedure to allocate a private data structure and associate it with the read- and write-queue q_ptr pointer for use by both the put and service procedure. It is typical for a module’s qi_qclose procedure to free the private data structure. In this case, qprocson(9) should not be called until after the private data structure has been allocated, initialized and attached to the q_ptr pointers. qprocsoff(9) should be called before deallocating the private data structure and invalidating the q_ptr pointers.
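
The ordering described above might be sketched as follows, with xx_priv a hypothetical private structure:

```c
/* Sketch of a typical qi_qopen following the ordering above. */
static streamscall int
xx_qopen(queue_t *q, dev_t *devp, int oflag, int sflag, cred_t *crp)
{
	struct xx_priv *priv;

	if (q->q_ptr != NULL)
		return (0);	/* already open */
	if ((priv = kmem_zalloc(sizeof(*priv), KM_SLEEP)) == NULL)
		return (ENOMEM);
	priv->rq = q;
	priv->wq = WR(q);
	/* attach to both queues before enabling the procedures */
	q->q_ptr = WR(q)->q_ptr = priv;
	qprocson(q);		/* put/srv may run from this point on */
	return (0);
}
```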


13.11.2 MP Timeout and Buffer Callbacks

The timeout(9), bufcall(9) and esbbcall(9) callbacks are asynchronous when invoked from outside the STREAMS framework. This means that the timeout(9), bufcall(9), or esbbcall(9) callback functions might execute concurrently with module procedures.

In contrast, under OpenSS7, when timeout(9), bufcall(9), and esbbcall(9) are invoked from within the STREAMS framework,50 they are equivalent to a call to qtimeout(9) or qbufcall(9) with the current synchronization queue used as the q argument. This is possible because STREAMS always knows which queue’s synchronous procedures or callbacks it is running.

To provide for synchronous callbacks that can be invoked from outside the STREAMS framework, the qtimeout(9), quntimeout(9), qbufcall(9), and qunbufcall(9) STREAMS utilities are provided. When using these utilities, the callback function is executed inside any synchronization barrier associated with the queue that is passed to the function.

There are some restrictions on which queue pointer can be passed to qtimeout(9) and qbufcall(9) when called from a module’s open or close procedure, or when called from outside STREAMS (at soft or hard interrupt). The caller is responsible for the validity of the queue pointer: the queue must be allocated and have its procedures enabled across the call. The queue pointer argument of a module’s open, close, put, or service procedure can always be passed as an argument to these functions without any special consideration. These functions should not be passed a q->q_next pointer unless the Stream is first frozen by the caller with freezestr(9). They may be passed a driver’s read-side queue pointer, or a lower multiplexed Stream’s write-side queue pointer, provided that the caller can ensure that the driver is not closed and the multiplexed Stream is not unlinked across the call. References to interior queue pairs must not be made unless the Stream has first been frozen by the caller with freezestr(9).


13.11.3 MP Open and Close Procedures

STREAMS modules are permitted to sleep in their qi_qopen and qi_qclose procedures. However, MP-UNSAFE modules that use synchronization of these procedures against put and service procedures must leave the synchronization barrier before sleeping. This is accomplished by using the qwait(9) and qwait_sig(9) STREAMS utilities. These utilities are similar to CV_WAIT(9) and CV_WAIT_SIG(9); however, they release the synchronization barrier before sleeping. These utilities may also be used by MP-SAFE modules; however, MP-SAFE modules may also use CV_WAIT(9) or CV_WAIT_SIG(9).

Because callback functions can be asynchronous with respect to the STREAMS framework, they might execute concurrently with a module’s close procedure. It is the responsibility of the module to cancel all outstanding callbacks before deallocating or invalidating references to data structures upon which those callbacks depend, and before returning from the close procedure.

A callback function scheduled with timeout(9) or bufcall(9) is guaranteed to have been cancelled by the time the corresponding untimeout(9) or unbufcall(9) utility returns. The same is true of qtimeout(9), qbufcall(9), quntimeout(9) and qunbufcall(9).

The Mentat Portable Streams (MPS®) framework provided by the STREAMS Compatibility Modules package for Linux Fast-STREAMS also provides an mi_bufcall(9) function and an mi_timer(9) function that can be used to manage buffer callbacks and timeouts, as well as to convert these asynchronous events into STREAMS synchronous events.


13.11.4 MP Module Unloading

STREAMS tracks kernel module references and prohibits a kernel module from unloading while there is a reference to a statically allocated data structure contained within the kernel module. If a STREAMS module does not cancel all callbacks in the module close procedure, the associated kernel module must not be permitted to be unloaded. STREAMS handles all references with the exception of references to the free routine provided to esballoc(9).

STREAMS loadable kernel modules that pass free routines to esballoc(9) are responsible for incrementing their own module counts upon the call to esballoc(9) and decrementing them when the free_rtn function exits.51
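
Under Linux, this might be sketched as follows; xx_free_func, xx_frtn and XX_BUFSIZE are illustrative names, and the module-count calls shown are the ordinary Linux try_module_get/module_put pair rather than a STREAMS facility:

```c
/* Sketch: hold a module reference for the lifetime of an
 * esballoc(9)ed message; xx_* names are illustrative. */
static void
xx_free_func(caddr_t arg)
{
	kmem_free((void *) arg, XX_BUFSIZE);
	module_put(THIS_MODULE);	/* drop reference taken at allocation */
}

static frtn_t xx_frtn = { &xx_free_func, NULL };

/* At allocation time: */
xx_frtn.free_arg = (caddr_t) buf;
if ((mp = esballoc(buf, XX_BUFSIZE, BPRI_MED, &xx_frtn)) != NULL)
	(void) try_module_get(THIS_MODULE);	/* held until xx_free_func runs */
```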


13.11.5 MP Locking

Basic spin locks or reader/writer locks can be used by MP-SAFE modules to protect module private data structures. When using locks, however, the following guidelines should be followed:

  • Avoid holding module private locks across calls to putnext(9), qreply(9), or other STREAMS utilities that invoke a put procedure, unless re-entrancy is provided. Otherwise, the calling thread might re-enter the same queue procedure and attempt to take the same lock twice, causing a single-party deadlock.
  • Do not hold module private locks, acquired in put or service procedures, across calls to qprocson(9) or qprocsoff(9). These utilities spin waiting for all put and service procedures to exit, causing a single-party deadlock.
  • Do not hold locks, acquired in timeout(9) or bufcall(9) callback functions, across calls to untimeout(9) or unbufcall(9). These utilities spin waiting for the callback function to exit, causing a single-party deadlock.
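
The first guideline might be illustrated as follows; xx_priv, its lock and its counter are illustrative names:

```c
/* Sketch: release the private lock before calling putnext(9). */
static streamscall int
xx_rput(queue_t *q, mblk_t *mp)
{
	struct xx_priv *priv = (struct xx_priv *) q->q_ptr;
	unsigned long flags;

	spin_lock_irqsave(&priv->lock, flags);
	priv->rcount++;			/* update private state under the lock */
	spin_unlock_irqrestore(&priv->lock, flags);

	putnext(q, mp);			/* lock no longer held here */
	return (0);
}
```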

13.11.6 MP Asynchronous Callbacks

Interrupt service routines and other asynchronous callback functions require special care by the STREAMS driver writer, because they can execute asynchronously with respect to threads executing within the STREAMS framework.

MP-SAFE modules, or modules using synchronization barriers, can use the qtimeout(9) and qbufcall(9) callbacks that are synchronous with respect to the STREAMS framework. Under OpenSS7, even the timeout(9) and bufcall(9) utilities are synchronous with respect to the STREAMS framework when invoked from within a qi_putp procedure, qi_srvp procedure, or a synchronous callback. However, when invoked from outside a STREAMS module procedure (or from within qi_qopen or qi_qclose procedures), these functions generate asynchronous callbacks.

Because an asynchronous thread from outside of STREAMS can enter the driver at any time, the driver writer is responsible for ensuring that the asynchronous callback function acquires the necessary private locks before accessing private module data structures and releases those locks before returning. It is also the responsibility of the module to cancel any outstanding callback functions (see untimeout(9) and unbufcall(9)) before the data structures upon which they depend are deallocated and the module closed.

The following guidelines must be followed:

  • Interrupts must be disabled by the callback function where the interrupt service routine accesses data structures shared with the callback function.
  • Outstanding callbacks from timeout(9) and bufcall(9) must be cancelled with a call to untimeout(9) or unbufcall(9).
  • Outstanding callbacks from esballoc(9) must be allowed to complete before the kernel module is permitted to be unloaded.

13.12 Stream Integrity

The q_next field of the queue(9) structure can be dereferenced in that queue’s qi_qopen, qi_qclose, qi_putp, and qi_srvp procedures as well as within any other synchronous procedure or callback (such as qtimeout(9), qbufcall(9), qwriter(9)) predicated on a queue in the same Stream.

All code executing outside the STREAMS framework, such as interrupt service routines, tasklets, network bottom halves, and asynchronous timeout(9), bufcall(9), and esballoc(9) callback routines, is not permitted to dereference q_next for any queue pair in any Stream. Asynchronous procedures must use the ‘next’ version of all functions (e.g., ‘canputnext(q)’ instead of ‘canput(q->q_next)’).
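
A service procedure honouring this rule might look like the following sketch; xx_rsrv is an illustrative name:

```c
/* Sketch: use canputnext(q), never canput(q->q_next), from a
 * service procedure. */
static streamscall int
xx_rsrv(queue_t *q)
{
	mblk_t *mp;

	while ((mp = getq(q)) != NULL) {
		if (!canputnext(q)) {	/* safe test of downstream flow control */
			putbq(q, mp);	/* requeue and stop until back-enabled */
			break;
		}
		putnext(q, mp);
	}
	return (0);
}
```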


14 Reference


14.1 Files


14.2 System Modules


14.3 System Drivers


14.4 System Calls


14.5 Input-Output Controls


14.6 Module Entry Points


14.7 Structures


14.8 Registration


14.9 Message Handling


14.10 Queue Handling


14.11 Miscellaneous Functions


14.12 Extensions


14.13 Compatibility


15 Conformance


15.1 SVR 4.2 Compatibility


15.2 AIX Compatibility


15.3 HP-UX Compatibility


15.4 OSF/1 Compatibility


15.5 UnixWare Compatibility


15.6 Solaris Compatibility


15.7 SUX Compatibility


15.8 UXP Compatibility


16 Portability


16.1 Core Function Support


16.1.1 Core Message Functions

adjmsg(9) – trim bytes from the front or back of a STREAMS message
allocb(9) – allocate a STREAMS message and data block
bufcall(9) – install a buffer callback
copyb(9) – copy a STREAMS message block
copymsg(9) – copy a STREAMS message
datamsg(9) – test a STREAMS message type for data
dupb(9) – duplicate a STREAMS message block
dupmsg(9) – duplicate a STREAMS message
esballoc(9) – allocate a STREAMS message and data block with a caller-supplied data buffer
freeb(9) – free a STREAMS message block
freemsg(9) – free a STREAMS message
linkb(9) – link a message block to a STREAMS message
msgdsize(9) – calculate the size of the data in a STREAMS message
msgpullup(9) – pull up the bytes in a STREAMS message
pcmsg(9) – test a data block message type for priority control
pullupmsg(9) – pull up the bytes in a STREAMS message
rmvb(9) – remove a message block from a STREAMS message
testb(9) – test whether a STREAMS message can be allocated
unbufcall(9) – remove a STREAMS buffer callback
unlinkb(9) – unlink a message block from a STREAMS message

16.1.2 Core UP Queue Functions

backq(9) – find the upstream or downstream queue
bcanput(9) – test flow control on a STREAMS message queue
canenable(9) – test whether a STREAMS message queue can be scheduled
enableok(9) – allow a STREAMS message queue to be scheduled
flushband(9) – flush banded STREAMS messages from a message queue
flushq(9) – flush messages from a STREAMS message queue
getq(9) – get a message from a STREAMS message queue
insq(9) – insert a message into a STREAMS message queue
noenable(9) – prevent a STREAMS message queue from being scheduled
OTHERQ(9) – return the other queue of a STREAMS queue pair
putbq(9) – put a message back on a STREAMS message queue
putctl(9) – put a control message on a STREAMS message queue
putctl1(9) – put a 1-byte control message on a STREAMS message queue
putq(9) – put a message on a STREAMS message queue
qenable(9) – schedule a STREAMS message queue service routine
qreply(9) – reply to a message from a STREAMS message queue
qsize(9) – return the number of messages on a queue
RD(9) – return the read queue of a STREAMS queue pair
rmvq(9) – remove a message from a STREAMS message queue
SAMESTR(9) – test for a STREAMS pipe or FIFO
WR(9) – return the write queue of a STREAMS queue pair

16.1.3 Core MP Queue Functions

canputnext(9) – test flow control on a message queue
freezestr(9) – freeze the state of a stream queue
put(9s) – invoke the put procedure for a STREAMS module or driver with a STREAMS message
putnext(9) – put a message on the downstream STREAMS message queue
putnextctl1(9) – put a 1-byte control message on the downstream STREAMS message queue
putnextctl(9) – put a control message on the downstream STREAMS message queue
qprocsoff(9) – disable STREAMS message queue processing for multi-processing
qprocson(9) – enable STREAMS message queue processing for multi-processing
strqget(9) – get information about a STREAMS message queue
strqset(9) – set attributes of a STREAMS message queue
unfreezestr(9) – thaw the state of a stream queue

16.1.4 Core DDI/DKI Functions

kmem_alloc(9) – allocate kernel memory
kmem_free(9) – deallocate kernel memory
kmem_zalloc(9) – allocate and zero kernel memory
cmn_err(9) – print a kernel error message
bcopy(9) – copy byte strings
bzero(9) – zero a byte string
copyin(9) – copy user data in from user space to kernel space
copyout(9) – copy user data out from kernel space to user space
delay(9) – postpone the calling process for a number of clock ticks
drv_getparm(9) – retrieve a kernel parameter
drv_hztomsec(9) – convert kernel tick time between microseconds or milliseconds
drv_hztousec(9) – convert kernel tick time between microseconds or milliseconds
drv_msectohz(9) – convert kernel tick time between microseconds or milliseconds
drv_priv(9) – check whether the current process is privileged
drv_usectohz(9) – convert kernel tick time between microseconds or milliseconds
drv_usecwait(9) – delay for a number of microseconds
min(9) – determine the minimum of two integers
max(9) – determine the maximum of two integers
getmajor(9) – get the internal major device number for a device
getminor(9) – get the extended minor device number for a device
makedevice(9) – create a device number from major and minor device numbers
strlog(9) – pass a message to the STREAMS logger
timeout(9) – start a timer
untimeout(9) – stop a timer
mknod(9) – make a block or character special file
mount(9) – mount and unmount file systems
umount(9) – mount and unmount file systems
unlink(9) – remove a file

16.1.5 Some Common Extension Functions

linkmsg(9) – link a message block to a STREAMS message
putctl2(9) – put a 2-byte control message on a STREAMS message queue
putnextctl2(9) – put a 2-byte control message on the downstream STREAMS message queue
weldq(9) – weld two (or four) queues together
unweldq(9) – unweld two (or four) queues

16.1.6 Some Internal Functions

allocq(9) – allocate a STREAMS queue pair
bcanget(9) – test for message arrival in a band on a stream