<?xml version="1.0" encoding="euc-kr"?> <!-- <!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN" "http://www.docbook.org/xml/4.1.2/docbookx.dtd"> --> <article lang="ko"> <articleinfo> <title>GNU Pth - The GNU Portable Threads</title> <author lang="en"> <firstname>Ralf</firstname> <othername>S.</othername> <surname>Engelschall</surname> <affiliation> <address> <email>rse(at)gnu.org.no.spam</email> </address> </affiliation> </author> <othercredit role="translator"> <surname>왕</surname> <firstname>태경</firstname> <contrib>번역</contrib> <affiliation> <address> <email>bugiii(at)opengl.pe.kr.no.spam</email> </address> </affiliation> </othercredit> <revhistory> <revision> <revnumber>0.1</revnumber> <date>2003-12-29</date> <authorinitials>bugiii</authorinitials> <revremark> 영문 원본을 한글로 최초 번역. 1,2,3,4,6장 번역 완료 </revremark> </revision> </revhistory> </articleinfo> <abstract> <para> Pth는 전통적인 프로세스 기반 서버의 단점을 극복하고 일반 쓰레드 기법의 장점을 취해서, 이벤트로 구동되는 어플리케이션에서 요구되는 입출력 다중화를 동기화의 어려움 없이 일련의 흐름으로 쉽게 프로그래밍하게 해주는 쓰레드 라이브러리입니다. </para> </abstract> <section> <title><?NAME?>이름</title> <para> pth - GNU Portable Threads </para> </section> <section> <title><?VERSION?>버전</title> <para> GNU Pth 2.0.0 (17-Feb-2003) </para> </section> <section> <title><?SYNOPSIS?>요약</title> <formalpara><title><?Global Library Management?>전체 라이브러리 관리(Global Library Management)</title></formalpara> <para><function> pth_init, pth_kill, pth_ctrl, pth_version. </function></para> <formalpara><title><?Thread Attribute Handling?>쓰레드 속성 처리(Thread Attribute Handling)</title></formalpara> <para><function> pth_attr_of, pth_attr_new, pth_attr_init, pth_attr_set, pth_attr_get, pth_attr_destroy. </function></para> <formalpara><title><?Thread Control?>쓰레드 제어(Thread Control)</title></formalpara> <para><function> pth_spawn, pth_once, pth_self, pth_suspend, pth_resume, pth_yield, pth_nap, pth_wait, pth_cancel, pth_abort, pth_raise, pth_join, pth_exit.
</function></para> <formalpara><title><?Utilities?>유틸리티(Utilities)</title></formalpara> <para><function> pth_fdmode, pth_time, pth_timeout, pth_sfiodisc. </function></para> <formalpara><title><?Cancellation Management?>취소 관리(Cancellation Management)</title></formalpara> <para><function> pth_cancel_point, pth_cancel_state. </function></para> <formalpara><title><?Event Handling?>이벤트 처리(Event Handling)</title></formalpara> <para><function> pth_event, pth_event_typeof, pth_event_extract, pth_event_concat, pth_event_isolate, pth_event_walk, pth_event_status, pth_event_free. </function></para> <formalpara><title><?Key-Based Storage?>키-기반 저장(Key-Based Storage)</title></formalpara> <para><function> pth_key_create, pth_key_delete, pth_key_setdata, pth_key_getdata. </function></para> <formalpara><title><?Message Port Communication?>메시지 포트 통신(Message Port Communication)</title></formalpara> <para><function> pth_msgport_create, pth_msgport_destroy, pth_msgport_find, pth_msgport_pending, pth_msgport_put, pth_msgport_get, pth_msgport_reply. </function></para> <formalpara><title><?Thread Cleanups?>쓰레드 청소(Thread Cleanups)</title></formalpara> <para><function> pth_cleanup_push, pth_cleanup_pop. </function></para> <formalpara><title><?Process Forking?>프로세스 포킹(Process Forking)</title></formalpara> <para><function> pth_atfork_push, pth_atfork_pop, pth_fork. </function></para> <formalpara><title><?Synchronization?>동기화(Synchronization)</title></formalpara> <para><function> pth_mutex_init, pth_mutex_acquire, pth_mutex_release, pth_rwlock_init, pth_rwlock_acquire, pth_rwlock_release, pth_cond_init, pth_cond_await, pth_cond_notify, pth_barrier_init, pth_barrier_reach. </function></para> <formalpara><title><?User-Space Context?>사용자-공간 컨텍스트(User-Space Context)</title></formalpara> <para><function> pth_uctx_create, pth_uctx_make, pth_uctx_save, pth_uctx_restore, pth_uctx_switch, pth_uctx_destroy. 
</function></para> <formalpara><title><?Generalized POSIX Replacement API?>일반화된 POSIX 대체 API(Generalized POSIX Replacement API)</title></formalpara> <para><function> pth_sigwait_ev, pth_accept_ev, pth_connect_ev, pth_select_ev, pth_poll_ev, pth_read_ev, pth_readv_ev, pth_write_ev, pth_writev_ev, pth_recv_ev, pth_recvfrom_ev, pth_send_ev, pth_sendto_ev. </function></para> <formalpara><title><?Standard POSIX Replacement API?>표준 POSIX 대체 API(Standard POSIX Replacement API)</title></formalpara> <para><function> pth_nanosleep, pth_usleep, pth_sleep, pth_waitpid, pth_system, pth_sigmask, pth_sigwait, pth_accept, pth_connect, pth_select, pth_pselect, pth_poll, pth_read, pth_readv, pth_write, pth_writev, pth_pread, pth_pwrite, pth_recv, pth_recvfrom, pth_send, pth_sendto. </function></para> </section> <section> <title>설명</title> <blockquote lang="en"> <screen> ____ _ _ | _ \| |_| |__ | |_) | __| '_ \ ``Only those who attempt | __/| |_| | | | the absurd can achieve |_| \__|_| |_| the impossible.'' </screen> </blockquote> <para> <?Pth is a very portable POSIX/ANSI-C based library for Unix platforms which provides non-preemptive priority-based scheduling for multiple threads of execution (aka `multithreading') inside event-driven applications. All threads run in the same address space of the application process, but each thread has its own individual program counter, run-time stack, signal mask and errno variable.?> Pth는 이벤트로 구동되는(event-driven) 어플리케이션에 멀티쓰레드를 비선점형(non-preemtive) 우선순위 기준으로(priority-based) 스케쥴링을 제공하는 이식성이 매우 높은 POSIX/ANSI-C 기반의 Unix 플랫폼용 라이브러리입니다. 모든 쓰레드는 어플리케이션 프로세스의 동일한 주소 공간에서 실행되지만, 각 쓰레드는 자신의 독립된 프로그램 카운터, 런타임 스택, 시그널 마스크와 <varname>errno</varname> 변수를 가집니다. </para> <para> <?The thread scheduling itself is done in a cooperative way, i.e., the threads are managed and dispatched by a priority- and event-driven non-preemptive scheduler. The intention is that this way both better portability and run-time performance is achieved than with preemptive scheduling. 
The event facility allows threads to wait until various types of internal and external events occur, including pending I/O on file descriptors, asynchronous signals, elapsed timers, pending I/O on message ports, thread and process termination, and even results of customized callback functions.?> Pth의 쓰레드 스케쥴링은 상호협력적인(cooperative) 방법으로 동작하는데, 우선순위와 이벤트로 구동되는 비선점형 스케쥴러에 의해서 관리되고 처리(dispatched)됩니다. Pth의 목표는 선점형 스케쥴링보다 더 나은 이식성과 런타임 성능을 얻는 것입니다. 쓰레드는 이벤트 기능(facility)을 이용해서 파일 디스크립터상의 보류된(pending) I/O, 비동기 시그널, 시간 만료 타이머(elapsed timers), 메시지 포트상의 보류된 I/O, 쓰레드와 프로세스의 종료, 그리고 사용자 정의 콜백 함수의 결과 같은 다양한 형태의 내부와 외부적인 이벤트가 발생할 때까지 대기할 수 있습니다. </para> <para> <?Pth also provides an optional emulation API for POSIX.1c threads (`Pthreads') which can be used for backward compatibility to existing multithreaded applications. See Pth's pthread(3) manual page for details.?> Pth는 또한 기존의 멀티쓰레드 어플리케이션의 하위 호환성에 사용할 수 있도록 추가적인 POSIX.1c 쓰레드(Pthread) 에뮬레이션 API를 제공합니다. 자세한 것은 pthread(3) 메뉴얼 페이지를 참고하십시오. </para> <section> <title><?Threading Background?>쓰레딩 기초 지식</title> <para> <?When programming event-driven applications, usually servers, lots of regular jobs and one-shot requests have to be processed in parallel. To efficiently simulate this parallel processing on uniprocessor machines, we use `multitasking' -- that is, we have the application ask the operating system to spawn multiple instances of itself. On Unix, typically the kernel implements multitasking in a preemptive and priority-based way through heavy-weight processes spawned with fork(2). These processes usually do not share a common address space. Instead they are clearly separated from each other, and are created by direct cloning a process address space (although modern kernels use memory segment mapping and copy-on-write semantics to avoid unnecessary copying of physical memory).?> 이벤트로 구동되는 어플리케이션(대부분 서버들)을 프로그래밍할 때, 많은 일상적인 작업(regular jobs)과 1회성의 요청(one-shot requests)을 병렬적으로(in parallel) 처리해야 합니다. 
이 병렬 처리를 단일 프로세서(uniprocessor) 머신에서 효과적으로 흉내내기(simulate) 위해서, 멀티태스킹을 사용하는데, 어플리케이션은 자신의 다중 인스턴스를 스폰하도록 운영체제에게 요청합니다. Unix상에서 커널은 전형적으로 fork(2)로 스폰되는 무거운(heavy-weight) 프로세스를 통해서 선점형이고 우선순위 기반의 멀티태스킹을 구현합니다. 대개 이들 프로세스들은 공통의 주소 공간을 공유하지 않습니다. 대신 프로세스들은 완전히 다른 프로세스들과 분리되며, 프로세스 주소 공간을 직접 클론해서 생성합니다. (물론 현대적인 커널은 메모리 세그먼트 매핑과 쓰기시 복사 기법(copy-on-write semantics)으로 불필요한 물리적 메모리의 복사를 피합니다.) </para> <para> <?The drawbacks are obvious: Sharing data between the processes is complicated, and can usually only be done efficiently through shared memory (but which itself is not very portable). Synchronization is complicated because of the preemptive nature of the Unix scheduler (one has to use atomic locks, etc). The machine's resources can be exhausted very quickly when the server application has to serve too many long-running requests (heavy-weight processes cost memory). And when each request spawns a sub-process to handle it, the server performance and responsiveness is horrible (heavy-weight processes cost time to spawn). Finally, the server application doesn't scale very well with the load because of these resource problems. In practice, lots of tricks are usually used to overcome these problems - ranging from pre-forked sub-process pools to semi-serialized processing, etc.?> 이것의 단점은 명백합니다. 프로세스간에 데이터를 공유하는 것이 복잡하고, 대개 공유 메모리를 통해서만 효율적으로 수행됩니다. (하지만, 공유 메모리는 이식성이 많이 부족합니다.) 프로세스간 동기화(synchronization)도 Unix 스케쥴러의 선점형 특성 때문에 처리하기가 까다롭습니다. (원자적 잠금(atomic locks)을 사용해야 합니다.) 서버 어플리케이션이 아주 많은 오래 실행되는(long-running) 요청을 처리해야 할 때에는 기계의 자원을 아주 빠르게 소모할 수 있습니다. (무거운 프로세스는 메모리를 소비합니다.) 그리고 각 요청을 처리하기 위해서 서브 프로세스를 하나씩 스폰한다면, 서버의 성능과 응답성은 형편없어집니다. (무거운 프로세스는 스폰하는 데 시간이 듭니다.) 마지막으로, 서버 어플리케이션은 이런 자원 문제 때문에 부하에 대해서 잘 대처할(scale) 수 없습니다. 실제로 이런 문제를 극복하기 위해서 미리 포크된 서브 프로세스 풀에서 반직렬화된 처리(semi-serialized processing)에 이르기까지 많은 기법들이 사용됩니다. </para> <para> <?One of the most elegant ways to solve these resource- and data-sharing problems is to have multiple light-weight threads of execution inside a single (heavy-weight) process, i.e., to use multithreading.
Those threads usually improve responsiveness and performance of the application, often improve and simplify the internal program structure, and most important, require less system resources than heavy-weight processes. Threads are neither the optimal run-time facility for all types of applications, nor can all applications benefit from them. But at least event-driven server applications usually benefit greatly from using threads.?> 이런 자원과 데이터 공유 문제를 해결할 수 있는 가장 세련된 방법 중에 하나는 하나의 (무거운) 프로세스 안에서 여러 개의 가벼운 실행 쓰레드를 가지는 것, 즉 멀티쓰레딩을 사용하는 것입니다. 이들 쓰레드들은 대개 어플리케이션의 응답성(responsiveness)과 성능을 향상시키고, 종종 내부적인 프로그램 구조를 향상시키고 단순화시키며, 특히 무거운 프로세스보다 시스템 자원을 덜 요구합니다. 쓰레드는 모든 형태의 어플리케이션에 최적화된 런타임 설비(facility)는 아니며, 모든 어플리케이션이 이득을 볼 수 있는 것도 아닙니다. 그렇지만 최소한 이벤트로 구동되는 서버 어플리케이션에서는 일반적으로 쓰레드를 사용하면 크게 이득을 볼 수 있습니다. </para> </section> <section> <title><?The World of Threading?>쓰레딩의 세계</title> <para> <?Even though lots of documents exists which describe and define the world of threading, to understand Pth, you need only basic knowledge about threading. The following definitions of thread-related terms should at least help you understand thread programming enough to allow you to use Pth.?> 쓰레딩의 세계를 설명하고 정의하는 문서들은 아주 많지만, Pth를 이해하는 데에는 쓰레딩에 대한 기본적인 지식만 있으면 됩니다. 최소한 다음의 쓰레드에 관련된 용어의 정의는 쓰레드 프로그래밍을 이해하는 데 도움을 줄 것이고 Pth를 사용하는 데 충분할 것입니다. <itemizedlist> <listitem> <para> <?process vs. thread?> 프로세스 vs. 쓰레드 </para> <para> <?A process on Unix systems consists of at least the following fundamental ingredients: virtual memory table, program code, program counter, heap memory, stack memory, stack pointer, file descriptor set, signal table. On every process switch, the kernel saves and restores these ingredients for the individual processes. On the other hand, a thread consists of only a private program counter, stack memory, stack pointer and signal table. All other ingredients, in particular the virtual memory, it shares with the other threads of the same process.
?> Unix 시스템상의 프로세스는 최소한 다음과 같은 기본적인 요소로 구성됩니다: 가상 메모리 테이블, 프로그램 코드, 프로그램 카운터, 힙 메모리, 스택 메모리, 스택 포인터, 파일 디스크립터 세트, 시그널 테이블. 프로세스 전환이 일어날 때마다, 커널은 이들 요소를 독립된 프로세스들을 위해서 저장하고 복구합니다. 한편, 쓰레드는 단지 자신만의 프로그램 카운터, 스택 메모리, 스택 포인터와 시그널 테이블로 구성됩니다. 모든 다른 요소는 (특히 가상 메모리) 같은 프로세스의 다른 쓰레드들과 공유하게 됩니다. </para> </listitem> <listitem> <para> <?kernel-space vs. user-space threading?> 커널-공간(kernel-space) vs. 사용자-공간(user-space) 쓰레딩 </para> <para> <?Threads on a Unix platform traditionally can be implemented either inside kernel-space or user-space. When threads are implemented by the kernel, the thread context switches are performed by the kernel without the application's knowledge. Similarly, when threads are implemented in user-space, the thread context switches are performed by an application library, without the kernel's knowledge. There also are hybrid threading approaches where, typically, a user-space library binds one or more user-space threads to one or more kernel-space threads (there usually called light-weight processes - or in short LWPs). ?> Unix 플랫폼상의 쓰레드들은 전통적으로 커널-공간 또는 사용자-공간에서 구현될 수 있습니다. 쓰레드가 커널에 의해서 구현될 때, 쓰레드 컨텍스트 전환은 어플리케이션과 상관없이 커널에 의해서 수행됩니다. 비슷하게 쓰레드가 사용자 공간에서 구현될 때, 쓰레드 컨텍스트 전환은 커널과 상관없이 어플리케이션 라이브러리에 의해서 수행됩니다. 또한 하이브리드 쓰레딩 접근법도 있는데, 이것은 전형적으로 사용자-공간 라이브러리가 하나 이상의 사용자-공간 쓰레드를 하나 이상의 커널-공간 쓰레드와 결합시킵니다. (이들을 대개 가벼운(light-weight) 프로세스, 줄여서 LWP 라고 부릅니다.) </para> <para> <?User-space threads are usually more portable and can perform faster and cheaper context switches (for instance via swapcontext(2) or setjmp(3)/longjmp(3)) than kernel based threads. On the other hand, kernel-space threads can take advantage of multiprocessor machines and don't have any inherent I/O blocking problems. Kernel-space threads are usually scheduled in preemptive way side-by-side with the underlying processes. User-space threads on the other hand use either preemptive or non-preemptive scheduling.?> 사용자 공간 쓰레드는 대개 좀 더 이식성이 있으며, 커널 기반 쓰레드보다 (예를 들면 swapcontext(2)나 setjmp(3)/longjmp(3)을 통해서) 컨텍스트 전환을 좀 더 빠르고 저렴하게 수행할 수 있습니다.
한편, 커널 공간 쓰레드는 다중 프로세서(multiprocessor) 기기의 장점을 취할 수 있고, 고유의 I/O 블러킹 문제도 가지지 않습니다. 커널-공간 쓰레드는 대개 하부의 프로세스들과 나란히 선점형으로 스케쥴됩니다. 한편 사용자-공간 쓰레드는 선점형 또는 비선점형 스케쥴링을 사용합니다. </para> </listitem> <listitem> <para> <?preemptive vs. non-preemptive thread scheduling?> 선점형 vs. 비선점형 쓰레드 스케쥴링 </para> <para> <?In preemptive scheduling, the scheduler lets a thread execute until a blocking situation occurs (usually a function call which would block) or the assigned timeslice elapses. Then it detracts control from the thread without a chance for the thread to object. This is usually realized by interrupting the thread through a hardware interrupt signal (for kernel-space threads) or a software interrupt signal (for user-space threads), like SIGALRM or SIGVTALRM. In non-preemptive scheduling, once a thread received control from the scheduler it keeps it until either a blocking situation occurs (again a function call which would block and instead switches back to the scheduler) or the thread explicitly yields control back to the scheduler in a cooperative way. ?> 선점형 스케쥴링에서, 스케쥴러는 쓰레드가 블러킹 상태가 발생(대개 블럭되는 함수 호출)하거나 지정된 시간이 지날 때까지 실행하도록 합니다. 그리고 나서 스케쥴러는 쓰레드가 거부할 기회도 주지 않고 제어권을 빼앗습니다. 이것은 대개 하드웨어 인터럽트 신호(커널-공간 쓰레드일 경우)나 SIGALRM이나 SIGVTALRM 같은 소프트웨어 인터럽트 신호(사용자-공간 쓰레드)를 통해서 쓰레드를 중지시켜서(by interrupting) 실현합니다. 비선점형 스케쥴링에서, 쓰레드가 스케쥴러에서 제어권을 넘겨받으면, 블러킹 상황이 발생(블럭되는 함수 호출은 스케쥴러로 전환)하거나 쓰레드가 상호협력적인 방법으로 명시적으로 스케쥴러에게 제어권을 되돌려 줄 때까지 실행됩니다. </para> </listitem> <listitem> <para> <?concurrency vs. parallelism?> 병행성(concurrency) vs. 병렬성(parallelism) </para> <para> <?Concurrency exists when at least two threads are in progress at the same time. Parallelism arises when at least two threads are executing simultaneously. Real parallelism can be only achieved on multiprocessor machines, of course. But one also usually speaks of parallelism or high concurrency in the context of preemptive thread scheduling and of low concurrency in the context of non-preemptive thread scheduling.
?> 병행성은 두개 이상의 쓰레드가 똑같은 시각에 진행 중일 때(in progress) 일어납니다. 병렬성은 최소한 두개 이상의 쓰레드가 동시에 실행될 때(executing) 일어납니다. 물론, 진정한 병렬성은 다중 프로세서 기계에서만 달성될 수 있습니다. 하지만 병렬성 또는 높은 병행성을 선점형 쓰레드 스케쥴링, 낮은 병행성을 비선점형 쓰레드 스케쥴링으로 언급되기도 합니다. </para> </listitem> <listitem> <para> <?responsiveness?> 응답성(responsiveness) </para> <para> <?The responsiveness of a system can be described by the user visible delay until the system responses to an external request. When this delay is small enough and the user doesn't recognize a noticeable delay, the responsiveness of the system is considered good. When the user recognizes or is even annoyed by the delay, the responsiveness of the system is considered bad. ?> 시스템의 응답성은 외부의 요청에 대해 시스템이 응답할 때까지 사용자가 볼 수 있는 지연에 의해서 설명할 수 있습니다. 이 지연이 충분히 작거나 사용자가 뚜렷하게 인식하지 못한다면, 시스템의 응답성은 좋다고 할 수 있습니다. 사용자가 이 지연을 인식하거나 불편을 느끼게 되면, 시스템의 응답성은 나쁘다고 할 수 있습니다. </para> </listitem> <listitem> <para> <?reentrant , thread-safe and asynchronous-safe functions?> 재진입(reentrant), 쓰레드-안전(thread-safe) 그리고 비동기-안전(asynchronous-safe) 함수 </para> <para> <?A reentrant function is one that behaves correctly if it is called simultaneously by several threads and then also executes simultaneously. Functions that access global state, such as memory or files, of course, need to be carefully designed in order to be reentrant. Two traditional approaches to solve these problems are caller-supplied states and thread-specific data. ?> 재진입 함수는 여러 쓰레드들이 이 함수를 동시에 호출하고 동시에 실행될 때 올바르게 동작하는 함수입니다. 메모리 혹은 파일 같은 전역 상태를 억세스하는 함수는 재진입이 가능하도록 조심스럽게 설계 되어야 합니다. 이 문제를 해결하는 두개의 전통적인 접근법은 호출자 쪽에서 상태를 제공하는 것과 쓰레드별(thread-specific) 데이터입니다. </para> <para> <?Thread-safety is the avoidance of data races, i.e., situations in which data is set to either correct or incorrect value depending upon the (unpredictable) order in which multiple threads access and modify the data. 
So a function is thread-safe when it still behaves semantically correct when called simultaneously by several threads (it is not required that the functions also execute simultaneously). The traditional approach to achieve thread-safety is to wrap a function body with an internal mutual exclusion lock (aka `mutex'). As you should recognize, reentrant is a stronger attribute than thread-safe, because it is harder to achieve and results especially in no run-time contention between threads. So, a reentrant function is always thread-safe, but not vice versa.?> 쓰레드-안전이라는 것은 여러 개의 쓰레드가 데이터를 읽고 쓸 때 (예측할 수 없는) 실행 순서에 따라 부정확한 값이 될 수도 있는, 데이터 경쟁(races)이 없다는 것입니다. 몇 개의 쓰레드가 동시에 어떤 함수를 호출하였을 때에도 의미상 올바르게 동작한다면, 이 함수는 쓰레드-안전입니다. (이 함수들이 꼭 동시에 실행되어야 하는 것은 아닙니다.) 쓰레드-안전을 달성하는 전통적인 접근법은 뮤텍스(mutex, mutual exclusion lock)로 함수 전체를 감싸는 것입니다. 짐작할 수 있듯이, 재진입은 쓰레드-안전보다 더 강한 속성입니다. 달성하기가 더 어려운 대신, 특히 쓰레드간의 런타임 경쟁(contention)이 전혀 생기지 않기 때문입니다. 따라서 재진입 함수는 항상 쓰레드-안전이지만, 그 역은 성립하지 않습니다. </para> <para> <?Additionally there is a related attribute for functions named asynchronous-safe, which comes into play in conjunction with signal handlers. This is very related to the problem of reentrant functions. An asynchronous-safe function is one that can be called safe and without side-effects from within a signal handler context. Usually very few functions are of this type, because an application is very restricted in what it can perform from within a signal handler (especially what system functions it is allowed to call). The reason mainly is, because only a few system functions are officially declared by POSIX as guaranteed to be asynchronous-safe. Asynchronous-safe functions usually have to be already reentrant.?> 추가적으로 시그널 핸들러와 결합되어 동작할 때 따라 오는 비동기-안전이라고 하는 함수와 관련된 속성이 있습니다. 이것은 재진입 함수의 문제와 밀접한 연관이 있습니다. 비동기-안전 함수는 시그널 핸들러 컨텍스트상에서 안전하고 부작용없이 호출될 수 있는 함수입니다. 어플리케이션은 시그널 핸들러에서 작업을 아주 제한적으로 (특히 사용할 수 있는 시스템 함수의 제한) 수행해야 하므로, 이런 형태의 함수는 그리 많지 않습니다. 주요 원인은 소수의 시스템 함수만이 비동기-안전을 보장한다고 POSIX에 의해 공식적으로 선언되어 있기 때문입니다. 비동기-안전 함수는 대개 이미 재진입 함수이어야 합니다.
</para> </listitem> </itemizedlist> </para> </section> <section> <title><?User-Space Threads?>사용자-공간 쓰레드</title> <para> <?User-space threads can be implemented in various way. The two traditional approaches are:?> 사용자-공간 쓰레드는 다양한 방법으로 구현할 수 있습니다. 전통적인 두가지 방법은 다음과 같습니다. </para> <orderedlist> <listitem> <para> <?Matrix-based explicit dispatching between small units of execution:?> 매트릭스 기반의 명시적인 작은 실행 유니트 간 처리(Matrix-based explicit dispatching between small units of execution): </para> <para> <?Here the global procedures of the application are split into small execution units (each is required to not run for more than a few milliseconds) and those units are implemented by separate functions. Then a global matrix is defined which describes the execution (and perhaps even dependency) order of these functions. The main server procedure then just dispatches between these units by calling one function after each other controlled by this matrix. The threads are created by more than one jump-trail through this matrix and by switching between these jump-trails controlled by corresponding occurred events.?> 어플리케이션의 전체 프로시져들을 작은 실행 유니트로 잘게 나누고 (몇 밀리초이상 실행되지 않도록) 이들 유니트를 분리된 함수로 구현합니다. 그 다음 이들 함수의 실행 (또한 의존 상태) 순서를 설명하는 전체 매트릭스를 정의합니다. 메인 서버 프로시져는 이 매트릭스에 의해서 제어되는 각기 다른 함수들에 따라 한 함수를 호출해서 이들 유니트들간의 처리(dispatch)를 수행합니다. 이 매트릭스를 통한 하나 이상의 jump-trail과 발생된 이벤트와 대응되어 제어되는 이들 jump-trails 간의 전환에 의해서 쓰레드가 생성됩니다. </para> <para> <?This approach gives the best possible performance, because one can fine-tune the threads of execution by adjusting the matrix, and the scheduling is done explicitly by the application itself. It is also very portable, because the matrix is just an ordinary data structure, and functions are a standard feature of ANSI C.?> 이 접근법은 매트릭스를 조절함으로써 실행 쓰레드를 최적화 할 수 있고, 어플리케이션 자체가 명시적으로 스케쥴링하기 때문에 높은 성능을 낼 수 있습니다. 또한 매트릭스는 일반적인 데이터 구조이고, 함수는 ANSI C의 표준 특징이기 때문에 이식성이 매우 높습니다. 
</para> <para> <?The disadvantage of this approach is that it is complicated to write large applications with this approach, because in those applications one quickly gets hundreds(!) of execution units and the control flow inside such an application is very hard to understand (because it is interrupted by function borders and one always has to remember the global dispatching matrix to follow it). Additionally, all threads operate on the same execution stack. Although this saves memory, it is often nasty, because one cannot switch between threads in the middle of a function. Thus the scheduling borders are the function borders.?> 이 접근법의 단점은 이 방법으로 큰 어플리케이션을 작성하면 실행 유니트가 쉽게 수백개를 넘어가기 때문에 큰 어플리케이션을 작성하는 것이 복잡해지고, (함수 단위로 작업이 전환되고 다음은 무엇인지 항상 전역 처리 매트릭스를 기억해야 하기 때문에) 어플리케이션 내부의 제어 흐름을 이해하기 매우 힘듭니다. 추가적으로, 모든 스레드는 똑같은 실행 스택상에서 동작해서 메모리를 절약하기는 하지만, 스케쥴링 단위가 함수 단위이기 때문에, 함수 중간에서 쓰레드간 전환이 불가능합니다. 이 때문에 난처한 경우가 있습니다. </para> </listitem> <listitem> <para> <?Context-based implicit scheduling between threads of execution:?> 컨텍스트 기반의 암시적인 실행 쓰레드간 스케쥴링(Context-based implicit scheduling between threads of execution): </para> <para> <?Here the idea is that one programs the application as with forked processes, i.e., one spawns a thread of execution and this runs from the begin to the end without an interrupted control flow. But the control flow can be still interrupted - even in the middle of a function. Actually in a preemptive way, similar to what the kernel does for the heavy-weight processes, i.e., every few milliseconds the user-space scheduler switches between the threads of execution. But the thread itself doesn't recognize this and usually (except for synchronization issues) doesn't have to care about this.?> 이 아이디어는 포크된 프로세스처럼 어플리케이션을 프로그램하는 것입니다. 즉, 실행 쓰레드를 스폰하면 제어 흐름의 가로채기 없이 처음부터 끝까지 실행되는 것입니다. 하지만 제어 흐름은 함수 중간에서도 가로채기 당할 수 있습니다. 실제로 선점형 방법으로, 커널이 무거운 프로세스에게 하는 것과 비슷하게, 매 몇 밀리초마다 사용자-공간 스케쥴러는 실행 쓰레드간 전환을 합니다. 
하지만 쓰레드 자체는 이것을 인식하지 않으며 (동기화 문제를 제외하고) 대부분 이것에 대해서 생각할 필요가 없습니다. </para> <para> <?The advantage of this approach is that it's very easy to program, because the control flow and context of a thread directly follows a procedure without forced interrupts through function borders. Additionally, the programming is very similar to a traditional and well understood fork(2) based approach.?> 이 접근법의 장점은 쓰레드의 제어 흐름과 컨텍스트는 함수 단위의 강제적인 가로채기 없이 명령을 순차적으로 실행하기 때문에, 프로그램하기 매우 쉽다는 것입니다. 추가적으로, 프로그래밍은 전통적이고 잘 이해되는 fork(2) 기반 접근법과 매우 유사합니다. </para> <para> <?The disadvantage is that although the general performance is increased, compared to using approaches based on heavy-weight processes, it is decreased compared to the matrix-approach above. Because the implicit preemptive scheduling does usually a lot more context switches (every user-space context switch costs some overhead even when it is a lot cheaper than a kernel-level context switch) than the explicit cooperative/non-preemptive scheduling. Finally, there is no really portable POSIX/ANSI-C based way to implement user-space preemptive threading. Either the platform already has threads, or one has to hope that some semi-portable package exists for it. And even those semi-portable packages usually have to deal with assembler code and other nasty internals and are not easy to port to forthcoming platforms.?> 단점은 비록 무거운 프로세스 기반의 접근법을 사용하는 것과 비교해서 일반적인 성능은 향상되지만, 매트릭스 접근법보다는 성능이 떨어진다는 것입니다. 이것은 암시적인 선점형 스케쥴링은 대개 명시적인 상호협력적인/비선점형 스케쥴링보다 더 많은 컨텍스트 전환을 하기 때문입니다. (사용자-공간 컨텍스트 전환은 커널 수준의 컨텍스트 전환보다 훨씬 저렴하기는 하지만, 그래도 약간의 오버헤드가 듭니다.) 마지막으로, 사용자-공간의 선점형 쓰레딩을 구현하기 위한 진정으로 이식성 있는 POSIX/ANSI-C 기반의 방법이 없다는 것입니다. 플랫폼이 이미 쓰레드를 지원하거나, 아니면 어느 정도 이식성 있는(semi-portable) 패키지가 존재하기를 바라는 수밖에 없습니다. 게다가 그런 패키지들조차 대개 어셈블러 코드와 복잡한 내부사항을 처리해야 하고 새로 나오는 플랫폼에 이식하기도 쉽지 않습니다. </para> <para> <?So, in short: the matrix-dispatching approach is portable and fast, but nasty to program.
The thread scheduling approach is easy to program, but suffers from synchronization and portability problems caused by its preemptive nature.?> 요약하면: 매트릭스-처리 접근법은 이식성이 있고 빠르지만, 프로그램하기는 어렵습니다. 쓰레드 스케쥴링 접근법은 프로그램하기 쉽지만, 선점형 특성 때문에 동기화와 이식성 문제로 골치가 아픕니다. </para> </listitem> </orderedlist> </section> <section> <title><?The Compromise of Pth?>Pth의 절충안</title> <para> <?But why not combine the good aspects of both approaches while avoiding their bad aspects? That's the goal of Pth. Pth implements easy-to-program threads of execution, but avoids the problems of preemptive scheduling by using non-preemptive scheduling instead.?> 그러면 나쁜 점은 피하면서 양쪽 접근법의 좋은 점을 결합하면 어떨까요? 이것이 바로 Pth의 목표입니다. Pth는 쓰레드 프로그래밍을 쉽게 해주면서, 비선점형 스케쥴링을 사용해서 선점형 스케쥴링의 문제를 피하게 해줍니다. </para> <para> <?This sounds like, and is, a useful approach. Nevertheless, one has to keep the implications of non-preemptive thread scheduling in mind when working with Pth. The following list summarizes a few essential points:?> 이것은 유용한 접근법이기는 하지만, Pth를 사용할 때에는 비선점형 쓰레드 스케쥴링이 가지는 의미(implications)를 염두에 두어야 합니다. 다음은 몇가지 필수적인 사항을 요약한 것입니다. </para> <itemizedlist> <listitem> <para> <?Pth provides maximum portability, but NOT the fanciest features.?> Pth는 최고의 이식성을 제공하지만 가장 화려한 기능을 제공하지는 않습니다. </para> <para> <?This is, because it uses a nifty and portable POSIX/ANSI-C approach for thread creation (and this way doesn't require any platform dependent assembler hacks) and schedules the threads in non-preemptive way (which doesn't require unportable facilities like SIGVTALRM). On the other hand, this way not all fancy threading features can be implemented. Nevertheless the available facilities are enough to provide a robust and full-featured threading system.?> 이것은 Pth가 쓰레드를 생성하는 데 멋지면서도 이식성 있는 POSIX/ANSI-C 접근법을 사용하고 (이 방법은 어떤 플랫폼에 종속된 어셈블러 hack이 필요없습니다.) 비선점형으로 쓰레드를 스케쥴링하기 때문입니다. (SIGVTALRM같은 이식성 없는 기능이 필요없습니다.) 하지만, 이 방법이 모든 화려한 쓰레딩 기능을 구현할 수 있는 것은 아닙니다. 그럼에도 불구하고, Pth에서 이용 가능한 기능들은 튼튼하고 완전한 기능의 쓰레딩 시스템을 제공하기에 충분합니다.
</para> </listitem> <listitem> <para> <?Pth increases the responsiveness and concurrency of an event-driven application, but NOT the concurrency of number-crunching applications.?> Pth는 이벤트로 구동되는 어플리케이션의 응답성과 병행성을 향상시키지만 대형 수치 연산(number-crunching) 어플리케이션의 병행성은 그렇지 않습니다. </para> <para> <?The reason is the non-preemptive scheduling. Number-crunching applications usually require preemptive scheduling to achieve concurrency because of their long CPU bursts. For them, non-preemptive scheduling (even together with explicit yielding) provides only the old concept of `coroutines'. On the other hand, event driven applications benefit greatly from non-preemptive scheduling. They have only short CPU bursts and lots of events to wait on, and this way run faster under non-preemptive scheduling because no unnecessary context switching occurs, as it is the case for preemptive scheduling. That's why Pth is mainly intended for server type applications, although there is no technical restriction.?> 그 이유는 비선점형 스케쥴링 때문입니다. 대형 수치 연산 어플리케이션은 대개 긴 CPU 점유시간(burst) 때문에 병행성을 확보하기 위해 선점형 스케쥴링을 필요로 합니다. 그런 경우, 비선점형 스케쥴링은 (명시적인 양보를 함께 쓰더라도) 오래된 'coroutines' 개념만을 지원합니다. 한편, 이벤트로 구동되는 어플리케이션은 비선점형 스케쥴링으로 큰 이득을 볼 수 있습니다. 이런 어플리케이션은 짧은 CPU 점유시간과 대기하는 많은 이벤트를 가지면서, 선점형 스케쥴링의 경우처럼 불필요한 컨텍스트 전환이 발생되지 않기 때문에 비선점형 스케쥴링하에서 더 빠르게 실행합니다. 이것이 Pth가 주로 서버 형태의 어플리케이션에 적합한 이유이지만, 기술상의 제약이 있지는 않습니다. </para> </listitem> <listitem> <para> <?Pth requires thread-safe functions, but NOT reentrant functions.?> Pth는 쓰레드-안전 함수를 요구하지만 재진입 함수를 요구하지는 않습니다. </para> <para> <?This nice fact exists again because of the nature of non-preemptive scheduling, where a function isn't interrupted and this way cannot be reentered before it returned. This is a great portability benefit, because thread-safety can be achieved more easily than reentrance possibility. Especially this means that under Pth more existing third-party libraries can be used without side-effects than it's the case for other threading systems.?> 이 훌륭한 사실은 비선점형 스케쥴링의 성질 때문이기도 합니다.
함수는 중간에서 가로채기 당하지 않으며 함수가 리턴하기 전에는 재진입 될 수 없습니다. 이것은 쓰레드-안전이 재진입 가능성보다 훨씬 달성하기 쉽기 때문에 이식성에 커다란 이점이 됩니다. 특히 이것은 Pth 하에서 더 많은 기존의 써드파티 라이브러리가 다른 쓰레딩 시스템의 경우보다 부작용없이 사용될 수 있다는 것을 뜻합니다. </para> </listitem> <listitem> <para> <?Pth doesn't require any kernel support, but can NOT benefit from multiprocessor machines.?> Pth는 커널의 지원을 요구하지 않지만 다중프로세서 기기의 이득을 취할 수는 없습니다. </para> <para> <?This means that Pth runs on almost all Unix kernels, because the kernel does not need to be aware of the Pth threads (because they are implemented entirely in user-space). On the other hand, it cannot benefit from the existence of multiprocessors, because for this, kernel support would be needed. In practice, this is no problem, because multiprocessor systems are rare, and portability is almost more important than highest concurrency.?> 이것은 Unix 커널이 Pth 쓰레드를 인식할 필요가 없기 때문에 (완전히 사용자-공간에서 구현되기 때문에) Pth는 대부분의 모든 Unix 커널상에서 동작한다는 것을 의미합니다. 한편, 다중프로세서의 이득을 취하지는 못하는데, 이를 위해서는 커널 지원이 필요하기 때문입니다. 실제로, 다중프로세서 시스템은 드물고, 아주 높은 병행성보다는 이식성이 대부분 더 중요하기 때문에, 이것은 문제가 아닙니다. (역자 주: 다중프로세서 기계에서 다른 하나 이상의 CPU가 도움이 될 것은 분명합니다. 예를 들면, 실질적인 하부의 I/O나 관리자용 접속, 다른 프로세스 등을 다른 CPU가 맡아서 처리할 수 있습니다.) </para> </listitem> </itemizedlist> </section> <section> <title><?The life cycle of a thread?>쓰레드의 생명 주기</title> <para> <?To understand the Pth Application Programming Interface (API), it helps to first understand the life cycle of a thread in the Pth threading system. It can be illustrated with the following directed graph:?> Pth API를 이해하기 위해서, Pth 쓰레딩 시스템에서 쓰레드의 생명 주기를 이해하는 것이 우선적으로 도움이 됩니다. 이것은 다음과 같은 방향 그래프(directed graph)로 표현할 수 있습니다. </para> <para><screen>
        NEW
         |
         V
 +---> READY ---+
 |       ^      |
 |       |      V
 |   WAITING <--+-- RUNNING
 |       :          |
 V       :          V
SUSPENDED          DEAD
</screen></para> <para> <?When a new thread is created, it is moved into the NEW queue of the scheduler. On the next dispatching for this thread, the scheduler picks it up from there and moves it to the READY queue. This is a queue containing all threads which want to perform a CPU burst.
There they are queued in priority order. On each dispatching step, the scheduler always removes the thread with the highest priority only. It then increases the priority of all remaining threads by 1, to prevent them from `starving'.?> 새로운 쓰레드가 생성되면 이 쓰레드는 스케쥴러의 NEW 큐로 옮겨집니다. 스케쥴러는 이 쓰레드를 다음번 처리 단계(dispatching)에서 NEW 큐에서 뽑아서 READY 큐로 옮깁니다. 이 큐는 CPU를 점유하기 원하는 모든 쓰레드를 담고 있습니다. 쓰레드는 우선순위 순으로 큐 됩니다. 각 처리 단계에서, 스케쥴러는 항상 가장 높은 우선순위의 쓰레드만을 제거합니다. 그리고나서 쓰레드간의 무한정 대기 상태(starving)를 방지하기 위해서 모든 남아있는 쓰레드의 우선순위를 1씩 높입니다. </para> <para> <?The thread which was removed from the READY queue is the new RUNNING thread (there is always just one RUNNING thread, of course). The RUNNING thread is assigned execution control. After this thread yields execution (either explicitly by yielding execution or implicitly by calling a function which would block) there are three possibilities: Either it has terminated, then it is moved to the DEAD queue, or it has events on which it wants to wait, then it is moved into the WAITING queue. Else it is assumed it wants to perform more CPU bursts and immediately enters the READY queue again.?> READY 큐에서 제거된 쓰레드는 새로운 RUNNING 쓰레드가 됩니다. (물론 항상 하나의 RUNNING 쓰레드만이 존재합니다.) RUNNING 쓰레드에게는 실행 제어권이 주어집니다. 이 쓰레드가 실행을 양보하면 (실행을 명시적으로 양보하거나 암묵적으로 블러킹될 수 있는 함수를 호출) 세가지 가능성이 있습니다. 쓰레드가 종료되면 DEAD 큐로 이동하고, 대기를 원하는 이벤트가 있으면 WAITING 큐로 이동하며, 아니면 CPU 를 점유하기 원한다고 가정해서 다시 READY 큐로 즉시 들어갑니다. </para> <para> <?Before the next thread is taken out of the READY queue, the WAITING queue is checked for pending events. If one or more events occurred, the threads that are waiting on them are immediately moved to the READY queue.?> 다음 쓰레드가 READY 큐에서 꺼내지기 전에, WAITING 큐를 대기중인 이벤트를 위해서 검사합니다. 하나 이상의 이벤트가 발생하면, 이벤트를 대기중인 쓰레드는 READY 큐로 즉시 이동됩니다. </para> <para> <?The purpose of the NEW queue has to do with the fact that in Pth a thread never directly switches to another thread. A thread always yields execution to the scheduler and the scheduler dispatches to the next thread. 
So a freshly spawned thread has to be kept somewhere until the scheduler gets a chance to pick it up for scheduling. That is what the NEW queue is for.?> NEW 큐의 목적은 Pth에서 쓰레드가 절대 다른 쓰레드로 직접 전환되지 않는다는 사실과 관계가 있습니다. 쓰레드는 항상 스케쥴러로 실행을 양보하고 스케쥴러가 다음 쓰레드를 처리하도록 합니다. 따라서 스케쥴러가 스케쥴링을 위해서 쓰레드를 뽑을 기회가 생길 때까지 금방 생성된 쓰레드는 어디엔가 유지되어야 합니다. 이것이 NEW 큐가 존재하는 이유입니다. </para> <para> <?The purpose of the DEAD queue is to support thread joining. When a thread is marked to be unjoinable, it is directly kicked out of the system after it terminated. But when it is joinable, it enters the DEAD queue. There it remains until another thread joins it.?> DEAD 큐의 목적은 쓰레드 병합(joining)을 지원하기 위해서입니다. 쓰레드가 병합 불필요(unjoinable)로 지정되었다면 쓰레드가 종료될 때 시스템 바깥으로 바로 퇴출됩니다. 하지만 쓰레드가 병합 필요(joinable)라면, 쓰레드는 DEAD 큐로 들어갑니다. 다른 쓰레드가 이 쓰레드와 병합할 때까지 이 큐에 남게 됩니다. </para> <para> <?Finally, there is a special separated queue named SUSPENDED, to where threads can be manually moved from the NEW, READY or WAITING queues by the application. The purpose of this special queue is to temporarily absorb suspended threads until they are again resumed by the application. Suspended threads do not cost scheduling or event handling resources, because they are temporarily completely out of the scheduler's scope. If a thread is resumed, it is moved back to the queue from where it originally came and this way again enters the scheduler's scope.?> 마지막으로, 어플리케이션이 NEW, READY, WAITING 큐에서 쓰레드를 수동으로 옮길 수 있는, SUSPENDED라고 불리는 특수하게 분리된 큐가 있습니다. 이 특수한 큐의 목적은 일시 중지된 쓰레드가 어플리케이션에 의해서 재개될 때까지 그 쓰레드들을 임시로 받아들이는 것입니다. 일시 중지된 쓰레드들은 임시로 스케쥴러의 영역에서 완전히 벗어나 있기 때문에 스케쥴링이나 이벤트 처리 자원을 소비하지 않습니다. 만약 쓰레드가 재개된다면, 원래 있던 큐로 다시 이동하고 다시 스케쥴러의 영역에 들어가게 됩니다. </para> </section> </section> <section> <title>APPLICATION PROGRAMMING INTERFACE (API)</title> <para> <?In the following the Pth Application Programming Interface (API) is discussed in detail. With the knowledge given above, it should now be easy to understand how to program threads with this API. 
In good Unix tradition, Pth functions use special return values (NULL in pointer context, FALSE in boolean context and -1 in integer context) to indicate an error condition and set (or pass through) the errno system variable to pass more details about the error to the caller.?> 다음은 Pth API의 자세한 설명입니다. 앞 절의 내용을 이해했다면, 이 API로 쓰레드를 프로그래밍하는 방법도 쉽게 이해할 수 있습니다. 훌륭한 Unix 전통에 따라, Pth 함수는 오류 상태를 표시하는 데 특수한 리턴 값을 사용하고 (포인터 컨텍스트는 NULL, 불린 컨텍스트는 FALSE, 정수 컨텍스트는 -1) errno 시스템 변수를 설정(또는 전달)해서 오류에 대한 자세한 정보를 호출자에게 알려줍니다. </para> <section> <title>Global Library Management</title> <para> <?The following functions act on the library as a whole. They are used to initialize and shutdown the scheduler and fetch information from it.?> 다음 함수들은 라이브러리 전체에 작용합니다. 스케쥴러를 초기화하고 종료하거나 정보를 얻는 데 사용합니다. </para> <itemizedlist> <listitem> <para> int pth_init(void); </para> <para> <?This initializes the Pth library. It has to be the first Pth API function call in an application, and is mandatory. It's usually done at the begin of the main() function of the application. This implicitly spawns the internal scheduler thread and transforms the single execution unit of the current process into a thread (the `main' thread). It returns TRUE on success and FALSE on error.?> Pth 라이브러리를 초기화합니다. 어플리케이션에서 처음으로 호출하는 Pth API 함수이며, 꼭 호출하여야 합니다. 대개 어플리케이션의 main() 함수의 처음에서 호출합니다. 이것은 암묵적으로 내부의 스케쥴러 쓰레드를 스폰하고 현재 프로세스의 단일 실행 단위를 쓰레드('main' 쓰레드)로 변환합니다. 성공하면 TRUE를, 실패하면 FALSE를 리턴합니다. </para> </listitem> <listitem> <para> int pth_kill(void); </para> <para> This kills the Pth library. It should be the last Pth API function call in an application, but is not really required. It's usually done at the end of the main function of the application. At least, it has to be called from within the main thread. It implicitly kills all threads and transforms back the calling thread into the single execution unit of the underlying process. 
The usual way to terminate a Pth application is either a simple `pth_exit(0);' in the main thread (which waits for all other threads to terminate, kills the threading system and then terminates the process) or a `pth_kill(); exit(0)' (which immediately kills the threading system and terminates the process). The pth_kill() returns immediately with a return code of FALSE if it is not called from within the main thread. Else it kills the threading system and returns TRUE. <?Pth 라이브러리를 종료합니다. 어플리케이션에서 마지막으로 호출하는 Pth API 함수이지만 꼭 호출할 필요는 없습니다. 대개 어플리케이션의 main() 함수의 끝에서 호출합니다. 하지만, 꼭 메인 쓰레드 안에서 호출해야 합니다. 암묵적으로 모든 쓰레드를 종료하고 호출한 쓰레드를 하부 프로세스의 단일 실행 단위로 되돌립니다. Pth 어플리케이션을 종료하는 일반적인 방법은 메인 쓰레드에서 단순히 `pth_exit(0);'를 호출하거나 (다른 모든 쓰레드가 종료하기를 기다린 후 쓰레딩 시스템을 종료하고 프로세스를 끝냅니다) `pth_kill(); exit(0)'를 호출하는 것입니다 (쓰레딩 시스템을 즉시 종료하고 프로세스를 끝냅니다). pth_kill()은 메인 쓰레드 안에서 호출되지 않았다면 FALSE 리턴 코드와 함께 즉시 돌아옵니다. 그렇지 않으면 쓰레딩 시스템을 종료하고 TRUE를 리턴합니다.?> </para> </listitem> <listitem> <para> long pth_ctrl(unsigned long query, ...); </para> <para> This is a generalized query/control function for the Pth library. The argument query is a bitmask formed out of one or more PTH_CTRL_XXXX queries. Currently the following queries are supported: </para> <itemizedlist> <listitem> <para> PTH_CTRL_GETTHREADS </para> <para> This returns the total number of threads currently in existence. This query actually is formed out of the combination of queries for threads in a particular state, i.e., the PTH_CTRL_GETTHREADS query is equal to the OR-combination of all the following specialized queries: </para> <para> PTH_CTRL_GETTHREADS_NEW for the number of threads in the new queue (threads created via pth_spawn(3) but still not scheduled once), PTH_CTRL_GETTHREADS_READY for the number of threads in the ready queue (threads that want to do CPU bursts), PTH_CTRL_GETTHREADS_RUNNING for the number of running threads (always just one thread!), PTH_CTRL_GETTHREADS_WAITING for the number of threads in the waiting queue (threads waiting for events), PTH_CTRL_GETTHREADS_SUSPENDED for the number of threads in the suspended queue (threads waiting to be resumed) and PTH_CTRL_GETTHREADS_DEAD for the number of threads in the dead queue (terminated threads waiting for a join). 
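</para> <para> The OR-combination above can be sketched with a small model. This is an illustrative Python sketch, not the Pth implementation, and the flag values are invented (the real constants are defined in pth.h): </para> <para><screen>
```python
# Illustrative model of PTH_CTRL_GETTHREADS as the OR-combination of
# per-state queries. Flag values are made up for this sketch; the real
# constants live in pth.h.
GETTHREADS_NEW       = 0x01
GETTHREADS_READY     = 0x02
GETTHREADS_RUNNING   = 0x04
GETTHREADS_WAITING   = 0x08
GETTHREADS_SUSPENDED = 0x10
GETTHREADS_DEAD      = 0x20
GETTHREADS = (GETTHREADS_NEW | GETTHREADS_READY | GETTHREADS_RUNNING |
              GETTHREADS_WAITING | GETTHREADS_SUSPENDED | GETTHREADS_DEAD)

ALL_FLAGS = [GETTHREADS_NEW, GETTHREADS_READY, GETTHREADS_RUNNING,
             GETTHREADS_WAITING, GETTHREADS_SUSPENDED, GETTHREADS_DEAD]

def ctrl_getthreads(query, counts):
    """Sum the thread counts of every state whose bit is set in query."""
    # (query // f) % 2 tests a single power-of-two bit.
    return sum(counts[f] for f in ALL_FLAGS if (query // f) % 2 == 1)
```
</screen></para> <para> A query mask with fewer bits set simply counts fewer states, e.g. only the READY and RUNNING threads.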
</para> </listitem> <listitem> <para> PTH_CTRL_GETAVLOAD </para> <para> This requires a second argument of type `float *' (pointer to a floating point variable). It stores a floating point value describing the exponentially averaged load of the scheduler in this variable. The load is a function of the number of threads in the ready queue of the scheduler's dispatching unit. So a load around 1.0 means there is only one ready thread (the standard situation when the application has no high load). A higher load value means there are more ready threads that want to do CPU bursts. The average load value is updated only once per second. The return value for this query is always 0. </para> </listitem> <listitem> <para> PTH_CTRL_GETPRIO </para> <para> This requires a second argument of type `pth_t' which identifies a thread. It returns the priority (ranging from PTH_PRIO_MIN to PTH_PRIO_MAX) of the given thread. </para> </listitem> <listitem> <para> PTH_CTRL_GETNAME </para> <para> This requires a second argument of type `pth_t' which identifies a thread. It returns the name of the given thread, i.e., the return value of pth_ctrl(3) should be cast to a `char *'. </para> </listitem> <listitem> <para> PTH_CTRL_DUMPSTATE </para> <para> This requires a second argument of type `FILE *' to which a summary of the internal Pth library state is written. The main information which is currently written out is the current state of the thread pool. </para> </listitem> </itemizedlist> <para> The function returns -1 on error. </para> </listitem> <listitem> <para> long pth_version(void); </para> <para> This function returns a hex-value `0xVRRTLL' which describes the current Pth library version. V is the version, RR the revisions, LL the level and T the type of the level (alphalevel=0, betalevel=1, patchlevel=2, etc). For instance Pth version 1.0b1 is encoded as 0x100101. The reason for this unusual mapping is that this way the version number is steadily increasing. 
The same value is also available at compile time as PTH_VERSION. </para> </listitem> </itemizedlist> </section> <section> <title>Thread Attribute Handling</title> <para> Attribute objects are used in Pth for two things: First, stand-alone/unbound attribute objects are used to store attributes for threads that are to be spawned. Second, bound attribute objects are used to modify attributes of already existing threads. The following attribute fields exist in attribute objects: </para> <itemizedlist> <listitem> <para> PTH_ATTR_PRIO (read-write) [int] </para> <para> Thread priority between PTH_PRIO_MIN and PTH_PRIO_MAX. The default is PTH_PRIO_STD. </para> </listitem> <listitem> <para> PTH_ATTR_NAME (read-write) [char *] </para> <para> Name of the thread (only up to 40 characters are stored), mainly for debugging purposes. </para> </listitem> <listitem> <para> PTH_ATTR_DISPATCHES (read-write) [int] </para> <para> In bound attribute objects, this field is incremented every time the context is switched to the associated thread. </para> </listitem> <listitem> <para> PTH_ATTR_JOINABLE (read-write) [int] </para> <para> The thread detachment type; TRUE indicates a joinable thread, FALSE indicates a detached thread. When a thread is detached, after termination it is immediately kicked out of the system instead of inserted into the dead queue. </para> </listitem> <listitem> <para> PTH_ATTR_CANCEL_STATE (read-write) [unsigned int] </para> <para> The thread cancellation state, i.e., a combination of PTH_CANCEL_ENABLE or PTH_CANCEL_DISABLE and PTH_CANCEL_DEFERRED or PTH_CANCEL_ASYNCHRONOUS. </para> </listitem> <listitem> <para> PTH_ATTR_STACK_SIZE (read-write) [unsigned int] </para> <para> The thread stack size in bytes. Use values lower than 64 KB with great care! </para> </listitem> <listitem> <para> PTH_ATTR_STACK_ADDR (read-write) [char *] </para> <para> A pointer to the lower address of a chunk of malloc(3)'ed memory for the stack. 
</para> </listitem> <listitem> <para> PTH_ATTR_TIME_SPAWN (read-only) [pth_time_t] </para> <para> The time when the thread was spawned. This can be queried only when the attribute object is bound to a thread. </para> </listitem> <listitem> <para> PTH_ATTR_TIME_LAST (read-only) [pth_time_t] </para> <para> The time when the thread was last dispatched. This can be queried only when the attribute object is bound to a thread. </para> </listitem> <listitem> <para> PTH_ATTR_TIME_RAN (read-only) [pth_time_t] </para> <para> The total time the thread was running. This can be queried only when the attribute object is bound to a thread. </para> </listitem> <listitem> <para> PTH_ATTR_START_FUNC (read-only) [void *(*)(void *)] </para> <para> The thread start function. This can be queried only when the attribute object is bound to a thread. </para> </listitem> <listitem> <para> PTH_ATTR_START_ARG (read-only) [void *] </para> <para> The thread start argument. This can be queried only when the attribute object is bound to a thread. </para> </listitem> <listitem> <para> PTH_ATTR_STATE (read-only) [pth_state_t] </para> <para> The scheduling state of the thread, i.e., either PTH_STATE_NEW, PTH_STATE_READY, PTH_STATE_WAITING, or PTH_STATE_DEAD. This can be queried only when the attribute object is bound to a thread. </para> </listitem> <listitem> <para> PTH_ATTR_EVENTS (read-only) [pth_event_t] </para> <para> The event ring the thread is waiting for. This can be queried only when the attribute object is bound to a thread. </para> </listitem> <listitem> <para> PTH_ATTR_BOUND (read-only) [int] </para> <para> Whether the attribute object is bound (TRUE) to a thread or not (FALSE). </para> </listitem> </itemizedlist> <para> The following API functions can be used to handle the attribute objects: </para> <itemizedlist> <listitem> <para> pth_attr_t pth_attr_of(pth_t tid); </para> <para> This returns a new attribute object bound to thread tid. 
Any queries on this object directly fetch attributes from tid. And attribute modifications directly change tid. Use such attribute objects to modify existing threads. </para> </listitem> <listitem> <para> pth_attr_t pth_attr_new(void); </para> <para> This returns a new unbound attribute object. An implicit pth_attr_init() is done on it. Any queries on this object just fetch stored attributes from it. And attribute modifications just change the stored attributes. Use such attribute objects to pre-configure attributes for threads that are to be spawned. </para> </listitem> <listitem> <para> int pth_attr_init(pth_attr_t attr); </para> <para> This initializes an attribute object attr to the default values: PTH_ATTR_PRIO := PTH_PRIO_STD, PTH_ATTR_NAME := `unknown', PTH_ATTR_DISPATCHES := 0, PTH_ATTR_JOINABLE := TRUE, PTH_ATTR_CANCEL_STATE := PTH_CANCEL_DEFAULT, PTH_ATTR_STACK_SIZE := 64*1024 and PTH_ATTR_STACK_ADDR := NULL. All other PTH_ATTR_* attributes are read-only attributes and don't receive default values in attr, because they exist only for bound attribute objects. </para> </listitem> <listitem> <para> int pth_attr_set(pth_attr_t attr, int field, ...); </para> <para> This sets the attribute field field in attr to a value specified as an additional argument on the variable argument list. The following attribute fields and argument pairs can be used: <screen> PTH_ATTR_PRIO int PTH_ATTR_NAME char * PTH_ATTR_DISPATCHES int PTH_ATTR_JOINABLE int PTH_ATTR_CANCEL_STATE unsigned int PTH_ATTR_STACK_SIZE unsigned int PTH_ATTR_STACK_ADDR char *</screen> </para> </listitem> <listitem> <para> int pth_attr_get(pth_attr_t attr, int field, ...); </para> <para> This retrieves the attribute field field in attr and stores its value in the variable specified through a pointer in an additional argument on the variable argument list. 
The following fields and argument pairs can be used: <screen> PTH_ATTR_PRIO int * PTH_ATTR_NAME char ** PTH_ATTR_DISPATCHES int * PTH_ATTR_JOINABLE int * PTH_ATTR_CANCEL_STATE unsigned int * PTH_ATTR_STACK_SIZE unsigned int * PTH_ATTR_STACK_ADDR char ** PTH_ATTR_TIME_SPAWN pth_time_t * PTH_ATTR_TIME_LAST pth_time_t * PTH_ATTR_TIME_RAN pth_time_t * PTH_ATTR_START_FUNC void *(**)(void *) PTH_ATTR_START_ARG void ** PTH_ATTR_STATE pth_state_t * PTH_ATTR_EVENTS pth_event_t * PTH_ATTR_BOUND int *</screen> </para> </listitem> <listitem> <para> int pth_attr_destroy(pth_attr_t attr); </para> <para> This destroys an attribute object attr. After this, attr is no longer a valid attribute object. </para> </listitem> </itemizedlist> </section> <section> <title>Thread Control</title> <para> The following functions control the threading itself and make up the main API of the Pth library. </para> <itemizedlist> <listitem> <para> pth_t pth_spawn(pth_attr_t attr, void *(*entry)(void *), void *arg); </para> <para> This spawns a new thread with the attributes given in attr (or PTH_ATTR_DEFAULT for default attributes - which means that thread priority, joinability and cancel state are inherited from the current thread) with the starting point at routine entry; the dispatch count is not inherited from the current thread if attr is not specified - rather, it is initialized to zero. This entry routine is called as `pth_exit(entry(arg))' inside the new thread unit, i.e., entry's return value is fed to an implicit pth_exit(3). So the thread can also exit by just returning. Nevertheless the thread can also exit explicitly at any time by calling pth_exit(3). But keep in mind that calling the POSIX function exit(3) still terminates the complete process and not just the current thread. </para> <para> There is no Pth-internal limit on the number of threads one can spawn, except the limit implied by the available virtual memory. Pth internally keeps track of threads in dynamic data structures. 
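</para> <para> The way entry's return value is fed to an implicit pth_exit(3) and later picked up by a join can be illustrated with a toy model. This is a hedged Python sketch with invented names; it is not the Pth implementation: </para> <para><screen>
```python
# Toy model: a joinable thread's exit value enters the DEAD queue and
# is handed over exactly once to the thread that joins it.
dead_queue = {}   # tid -> exit value (joinable threads only)

def spawn(tid, entry, arg):
    # Conceptually the new thread runs `pth_exit(entry(arg))`.
    dead_queue[tid] = entry(arg)

def join(tid):
    # A thread can be joined only once: it is removed from the system.
    return dead_queue.pop(tid)

spawn("worker", lambda n: n * 2, 21)
```
</screen></para> <para> In real Pth the dead-queue entry is created by the implicit pth_exit(3); the sketch only mirrors the value hand-over. </para> <para>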
The function returns NULL on error. </para> </listitem> <listitem> <para> int pth_once(pth_once_t *ctrlvar, void (*func)(void *), void *arg); </para> <para> This is a convenience function which uses a control variable of type pth_once_t to make sure a constructor function func is called only once as `func(arg)' in the system. In other words: Only the first call to pth_once(3) by any thread in the system succeeds. The variable referenced via ctrlvar should be declared as `pth_once_t variable-name = PTH_ONCE_INIT;' before calling this function. </para> </listitem> <listitem> <para> pth_t pth_self(void); </para> <para> This just returns the unique thread handle of the currently running thread. This handle itself has to be treated as an opaque entity by the application. It's usually used as an argument to other functions that require an argument of type pth_t. </para> </listitem> <listitem> <para> int pth_suspend(pth_t tid); </para> <para> This suspends a thread tid until it is manually resumed again via pth_resume(3). For this, the thread is moved to the SUSPENDED queue and this way is completely out of the scheduler's event handling and thread dispatching scope. Suspending the current thread is not allowed. The function returns TRUE on success and FALSE on errors. </para> </listitem> <listitem> <para> int pth_resume(pth_t tid); </para> <para> This function resumes a previously suspended thread tid, i.e., tid has to stay on the SUSPENDED queue. The thread is moved to the NEW, READY or WAITING queue (dependent on what its state was when the pth_suspend(3) call was made) and this way again enters the event handling and thread dispatching scope of the scheduler. The function returns TRUE on success and FALSE on errors. </para> </listitem> <listitem> <para> int pth_raise(pth_t tid, int sig); </para> <para> This function raises a signal for delivery to thread tid only. 
When one just raises a signal via raise(3) or kill(2), it is delivered to an arbitrary thread which does not have this signal blocked. With pth_raise(3) one can send a signal to a particular thread and it is guaranteed that only this thread gets the signal delivered. But keep in mind that nevertheless the signal's action is still configured process-wide. When sig is 0, plain thread checking is performed, i.e., `pth_raise(tid, 0)' returns TRUE when thread tid still exists in the PTH system but doesn't send any signal to it. </para> </listitem> <listitem> <para> int pth_yield(pth_t tid); </para> <para> This explicitly yields back the execution control to the scheduler thread. Usually the execution is implicitly transferred back to the scheduler when a thread waits for an event. But when a thread has to do larger CPU bursts, it can be reasonable to interrupt it explicitly by doing a few pth_yield(3) calls to give other threads a chance to execute, too. This obviously is the cooperating part of Pth. Of course, a thread does not have to yield execution. But when you want to program a server application with good response times, the threads should be cooperative, i.e., they should split their CPU bursts into smaller units with this call. </para> <para> Usually one specifies tid as NULL to indicate to the scheduler that it can freely decide which thread to dispatch next. But if one wants to indicate to the scheduler that a particular thread should be favored on the next dispatching step, one can specify this thread explicitly. This allows the usage of the old concept of coroutines where a thread/routine switches to a particular cooperating thread. If tid is not NULL and points to a new or ready thread, it is guaranteed that this thread receives execution control on the next dispatching step. If tid is in a different state (that is, not in PTH_STATE_NEW or PTH_STATE_READY) an error is reported. 
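</para> <para> The cooperative part of Pth described above can be modeled with Python generators, where each "thread" must yield explicitly or the others never run. This is an illustrative sketch, not the Pth scheduler: </para> <para><screen>
```python
# Toy cooperative round-robin: next(t) runs a "thread" until its next
# explicit yield, which plays the role of pth_yield(NULL).
def worker(name, steps, log):
    for _ in range(steps):
        log.append(name)
        yield   # hand control back to the scheduler

def run(threads):
    ready = list(threads)
    while ready:
        t = ready.pop(0)
        try:
            next(t)
            ready.append(t)   # still wants CPU: back to READY
        except StopIteration:
            pass              # terminated: drops out of the system

log = []
run([worker("a", 2, log), worker("b", 2, log)])
```
</screen></para> <para> Here the two workers interleave only because each one yields after every step; a worker that never yielded would monopolize the single RUNNING slot.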
</para> <para> The function usually returns TRUE for success and only FALSE (with errno set to EINVAL) if tid specified an invalid thread or a thread that is not in the new or ready state. </para> </listitem> <listitem> <para> int pth_nap(pth_time_t naptime); </para> <para> This function suspends the execution of the current thread until naptime has elapsed. naptime is of type pth_time_t and this way theoretically has a resolution of one microsecond. In practice you should neither rely on this nor on the thread being awakened exactly after naptime has elapsed. It is only guaranteed that the thread will sleep at least naptime. But because of the non-preemptive nature of Pth it can last longer (when another thread kept the CPU for a long time). Additionally the resolution is dependent on the implementation of timers by the operating system and these usually have only a resolution of 10 milliseconds or larger. But usually this isn't important for an application unless it tries to use this facility for real time tasks. </para> </listitem> <listitem> <para> int pth_wait(pth_event_t ev); </para> <para> This is the link between the scheduler and the event facility (see below for the various pth_event_xxx() functions). It's modeled like select(2), i.e., one gives this function one or more events (in the event ring specified by ev) on which the current thread wants to wait. The scheduler awakens the thread when one or more of them occurred or failed, after tagging them as such. The ev argument is a pointer to an event ring which isn't changed except for the tagging. pth_wait(3) returns the number of occurred or failed events and the application can use pth_event_status(3) to test which events occurred or failed. </para> </listitem> <listitem> <para> int pth_cancel(pth_t tid); </para> <para> This cancels a thread tid. How the cancellation is done depends on the cancellation state of tid, which the thread can configure itself. 
When its state is PTH_CANCEL_DISABLE, a cancellation request is just made pending. When it is PTH_CANCEL_ENABLE, it depends on the cancellation type what is performed. When it is PTH_CANCEL_DEFERRED, again the cancellation request is just made pending. But when it is PTH_CANCEL_ASYNCHRONOUS, the thread is immediately canceled before pth_cancel(3) returns. The effect of a thread cancellation is equal to implicitly forcing the thread to call `pth_exit(PTH_CANCELED)' at one of its cancellation points. In Pth, threads enter a cancellation point either explicitly via pth_cancel_point(3) or implicitly by waiting for an event. </para> </listitem> <listitem> <para> int pth_abort(pth_t tid); </para> <para> This is the cruel way to cancel a thread tid. When the thread is already dead and waiting to be joined, this just joins it (via `pth_join(tid, NULL)') and this way kicks it out of the system. Else it forces the thread to be not joinable and to allow asynchronous cancellation, and then cancels it via `pth_cancel(tid)'. </para> </listitem> <listitem> <para> int pth_join(pth_t tid, void **value); </para> <para> This joins the current thread with the thread specified via tid. It first suspends the current thread until the tid thread has terminated. Then it is awakened and stores the value of tid's pth_exit(3) call into *value (if value is not NULL) and returns to the caller. A thread can be joined only when it has the attribute PTH_ATTR_JOINABLE set to TRUE (the default). A thread can only be joined once, i.e., after the pth_join(3) call the thread tid is completely removed from the system. </para> </listitem> <listitem> <para> void pth_exit(void *value); </para> <para> This terminates the current thread. Whether it's immediately removed from the system or inserted into the dead queue of the scheduler depends on its join type which was specified at spawning time. If it has the attribute PTH_ATTR_JOINABLE set to FALSE, it's immediately removed and value is ignored. 
Else the thread is inserted into the dead queue and value remembered for a subsequent pth_join(3) call by another thread. </para> </listitem> </itemizedlist> </section> <section> <title>Utilities</title> <para> Utility functions. </para> <itemizedlist> <listitem> <para> int pth_fdmode(int fd, int mode); </para> <para> This switches the non-blocking mode flag on file descriptor fd. The argument mode can be PTH_FDMODE_BLOCK for switching fd into blocking I/O mode, PTH_FDMODE_NONBLOCK for switching fd into non-blocking I/O mode or PTH_FDMODE_POLL for just polling the current mode. The current mode is returned (either PTH_FDMODE_BLOCK or PTH_FDMODE_NONBLOCK) or PTH_FDMODE_ERROR on error. Keep in mind that since Pth 1.1 there is no longer a requirement to manually switch a file descriptor into non-blocking mode in order to use it. This is automatically done temporarily inside Pth. Instead, when you now switch a file descriptor explicitly into non-blocking mode, pth_read(3) or pth_write(3) will never block the current thread. </para> </listitem> <listitem> <para> pth_time_t pth_time(long sec, long usec); </para> <para> This is a constructor for a pth_time_t structure, which is a convenience function to avoid temporary structure values. It returns a pth_time_t structure which holds the absolute time value specified by sec and usec. </para> </listitem> <listitem> <para> pth_time_t pth_timeout(long sec, long usec); </para> <para> This is a constructor for a pth_time_t structure, which is a convenience function to avoid temporary structure values. It returns a pth_time_t structure which holds the absolute time value calculated by adding sec and usec to the current time. </para> </listitem> <listitem> <para> Sfdisc_t *pth_sfiodisc(void); </para> <para> This function is always available, but only reasonably usable when Pth was built with Sfio support (--with-sfio option) and PTH_EXT_SFIO is then defined by pth.h. 
It is useful for applications which want to use the comprehensive Sfio I/O library with the Pth threading library. Then this function can be used to get an Sfio discipline structure (Sfdisc_t) which can be pushed onto Sfio streams (Sfio_t) in order to let this stream use pth_read(3)/pth_write(3) instead of read(2)/write(2). The benefit is that this way I/O on the Sfio stream only blocks the current thread instead of the whole process. The application has to free(3) the Sfdisc_t structure when it is no longer needed. The Sfio package can be found at http://www.research.att.com/sw/tools/sfio/. </para> </listitem> </itemizedlist> </section> <section> <title>Cancellation Management</title> <para> Pth supports POSIX style thread cancellation via pth_cancel(3) and the following two related functions: </para> <itemizedlist> <listitem> <para> void pth_cancel_state(int newstate, int *oldstate); </para> <para> This manages the cancellation state of the current thread. When oldstate is not NULL, the function stores the old cancellation state under the variable pointed to by oldstate. When newstate is not 0, it sets the new cancellation state. oldstate is stored before newstate is set. A state is a combination of PTH_CANCEL_ENABLE or PTH_CANCEL_DISABLE and PTH_CANCEL_DEFERRED or PTH_CANCEL_ASYNCHRONOUS. PTH_CANCEL_ENABLE|PTH_CANCEL_DEFERRED (or PTH_CANCEL_DEFAULT) is the default state where cancellation is possible, but only at cancellation points. Use PTH_CANCEL_DISABLE to completely disable cancellation for a thread and PTH_CANCEL_ASYNCHRONOUS for allowing asynchronous cancellations, i.e., cancellations which can happen at any time. </para> </listitem> <listitem> <para> void pth_cancel_point(void); </para> <para> This explicitly enters a cancellation point. When the current cancellation state is PTH_CANCEL_DISABLE or no cancellation request is pending, this has no side-effect and returns immediately. Else it calls `pth_exit(PTH_CANCELED)'. 
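</para> <para> The interplay of a pending request and a cancellation point can be sketched as follows. This is an illustrative Python model with invented names, not the Pth internals: </para> <para><screen>
```python
# Toy model of deferred cancellation: cancel() only marks the request
# pending; it takes effect at the next cancellation point.
class Canceled(Exception):
    """Stands in for the implicit pth_exit(PTH_CANCELED)."""

class Thread:
    def __init__(self):
        self.cancel_enabled = True    # PTH_CANCEL_ENABLE
        self.cancel_pending = False

    def cancel(self):
        self.cancel_pending = True    # request is made pending

    def cancel_point(self):
        # No side-effect unless cancellation is enabled and pending.
        if self.cancel_enabled and self.cancel_pending:
            raise Canceled()

t = Thread()
t.cancel_point()   # nothing pending: returns immediately
t.cancel()         # now a request is pending
```
</screen></para> <para> The next call to `t.cancel_point()' would then terminate the toy thread, mirroring the implicit `pth_exit(PTH_CANCELED)'.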
</para> </listitem> </itemizedlist> </section> <section> <title>Event Handling</title> <para> Pth has a very flexible event facility which is linked into the scheduler through the pth_wait(3) function. The following functions provide the handling of event rings. </para> <itemizedlist> <listitem> <para> pth_event_t pth_event(unsigned long spec, ...); </para> <para> This creates a new event ring consisting of a single initial event. The type of the generated event is specified by spec. The following types are available: </para> <itemizedlist> <listitem> <para> PTH_EVENT_FD </para> <para> This is a file descriptor event. One or more of PTH_UNTIL_FD_READABLE, PTH_UNTIL_FD_WRITEABLE or PTH_UNTIL_FD_EXCEPTION have to be OR-ed into spec to specify on which state of the file descriptor you want to wait. The file descriptor itself has to be given as an additional argument. Example: `pth_event(PTH_EVENT_FD|PTH_UNTIL_FD_READABLE, fd)'. </para> </listitem> <listitem> <para> PTH_EVENT_SELECT </para> <para> This is a multiple file descriptor event modeled directly after the select(2) call (actually it is also used to implement pth_select(3) internally). It's a convenient way to wait for a large set of file descriptors at once and at each file descriptor for a different type of state. Additionally, as a nice side-effect, one receives the number of file descriptors which caused the event to occur (using BSD semantics, i.e., when a file descriptor occurs in two sets it is counted twice). The arguments correspond directly to the select(2) function arguments except that there is no timeout argument (because timeouts can already be handled via PTH_EVENT_TIME events). </para> <para> Example: `pth_event(PTH_EVENT_SELECT, &rc, nfd, rfds, wfds, efds)' where rc has to be of type `int *', nfd has to be of type `int' and rfds, wfds and efds have to be of type `fd_set *' (see select(2)). The number of occurred file descriptors is stored in rc. 
</para> </listitem> <listitem> <para> PTH_EVENT_SIGS </para> <para> This is a signal set event. The two additional arguments have to be a pointer to a signal set (type `sigset_t *') and a pointer to a signal number variable (type `int *'). This event waits until one of the signals in the signal set occurred. As a result the occurred signal number is stored in the second additional argument. Keep in mind that the Pth scheduler doesn't block signals automatically. So when you want to wait for a signal with this event you have to block it via sigprocmask(2) or it will be delivered without your notice. Example: `sigemptyset(&set); sigaddset(&set, SIGINT); pth_event(PTH_EVENT_SIGS, &set, &sig);'. </para> </listitem> <listitem> <para> PTH_EVENT_TIME </para> <para> This is a time point event. The additional argument has to be of type pth_time_t (usually on-the-fly generated via pth_time(3)). This event waits until the specified time point has elapsed. Keep in mind that the value is an absolute time point and not an offset. When you want to wait for a specified amount of time, you have to add the current time to the offset (usually on-the-fly achieved via pth_timeout(3)). Example: `pth_event(PTH_EVENT_TIME, pth_timeout(2,0))'. </para> </listitem> <listitem> <para> PTH_EVENT_MSG </para> <para> This is a message port event. The additional argument has to be of type pth_msgport_t. This event waits until one or more messages were received on the specified message port. Example: `pth_event(PTH_EVENT_MSG, mp)'. </para> </listitem> <listitem> <para> PTH_EVENT_TID </para> <para> This is a thread event. The additional argument has to be of type pth_t. One of PTH_UNTIL_TID_NEW, PTH_UNTIL_TID_READY, PTH_UNTIL_TID_WAITING or PTH_UNTIL_TID_DEAD has to be OR-ed into spec to specify on which state of the thread you want to wait. Example: `pth_event(PTH_EVENT_TID|PTH_UNTIL_TID_DEAD, tid)'. 
</para> </listitem> <listitem> <para> PTH_EVENT_FUNC </para> <para> This is a custom callback function event. Three additional arguments have to be given with the following types: `int (*)(void *)', `void *' and `pth_time_t'. The first is a function pointer to a check function and the second argument is a user-supplied context value which is passed to this function. The scheduler calls this function on a regular basis (on its own scheduler stack, so be very careful!) and the thread is kept sleeping while the function returns FALSE. Once it returns TRUE, the thread will be awakened. The check interval is defined by the third argument, i.e., the check function is not polled again until this amount of time has elapsed. Example: `pth_event(PTH_EVENT_FUNC, func, arg, pth_time(0,500000))'. </para> </listitem> </itemizedlist> </listitem> <listitem> <para> unsigned long pth_event_typeof(pth_event_t ev); </para> <para> This returns the type of event ev. It's a combination of the describing PTH_EVENT_XXXX and PTH_UNTIL_XXXX values. This is especially useful to know which arguments have to be supplied to the pth_event_extract(3) function. </para> </listitem> <listitem> <para> int pth_event_extract(pth_event_t ev, ...); </para> <para> When pth_event(3) is treated like sprintf(3), then this function is sscanf(3), i.e., it is the inverse operation of pth_event(3). This means that it can be used to extract the ingredients of an event. The ingredients are stored into variables which are given as pointers on the variable argument list. Which pointers have to be present depends on the event type and has to be determined by the caller beforehand via pth_event_typeof(3). </para> <para> To make it clear: when you constructed ev via `ev = pth_event(PTH_EVENT_FD, fd);' you have to extract it via `pth_event_extract(ev, &fd)', etc. For multiple arguments of an event the order of the pointer arguments is the same as for pth_event(3). 
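</para> <para> The event ring that these functions operate on can be pictured as a circular list. The following Python sketch is an invented toy model of ring concatenation (in the spirit of pth_event_concat(3)), not the real opaque pth_event_t structure: </para> <para><screen>
```python
# Toy event ring: each event points to the next one, and a fresh ring
# contains just the event itself.
class Event:
    def __init__(self, spec):
        self.spec = spec
        self.next = self

def concat(ev, other):
    # Splice ring `other` into ring `ev` and return ev.
    ev.next, other.next = other.next, ev.next
    return ev

def ring_specs(ev):
    # Walk once around the ring, collecting the event types.
    out, cur = [ev.spec], ev.next
    while cur is not ev:
        out.append(cur.spec)
        cur = cur.next
    return out

ring = concat(Event("FD"), concat(Event("TIME"), Event("MSG")))
```
</screen></para> <para> In the same picture, pth_event_isolate(3) takes a single event back out of the ring. </para> <para>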
But always keep in mind that you always have to supply pointers to variables and that these variables have to be of the same type as pth_event(3) required for the argument. </para> </listitem> <listitem> <para> pth_event_t pth_event_concat(pth_event_t ev, ...); </para> <para> This concatenates one or more additional event rings to the event ring ev and returns ev. The end of the argument list has to be marked with a NULL argument. Use this function to create real event rings out of the single-event rings created by pth_event(3). </para> </listitem> <listitem> <para> pth_event_t pth_event_isolate(pth_event_t ev); </para> <para> This isolates the event ev from possibly appended events in the event ring. When only one event exists in ev, this returns NULL. When remaining events exist, they form a new event ring which is returned. </para> </listitem> <listitem> <para> pth_event_t pth_event_walk(pth_event_t ev, int direction); </para> <para> This walks to the next (when direction is PTH_WALK_NEXT) or previous (when direction is PTH_WALK_PREV) event in the event ring ev and returns the newly reached event. Additionally PTH_UNTIL_OCCURRED can be OR-ed into direction to walk to the next/previous occurred event in the ring ev. </para> </listitem> <listitem> <para> pth_status_t pth_event_status(pth_event_t ev); </para> <para> This returns the status of event ev. This is a fast operation because only a tag on ev is checked which the scheduler either has or has not yet set. In other words: this doesn't check the event itself, it just checks the last knowledge of the scheduler. The possible returned status codes are: PTH_STATUS_PENDING (event is still pending), PTH_STATUS_OCCURRED (event successfully occurred), PTH_STATUS_FAILED (event failed). </para> </listitem> <listitem> <para> int pth_event_free(pth_event_t ev, int mode); </para> <para> This deallocates the event ev (when mode is PTH_FREE_THIS) or all events appended to the event ring under ev (when mode is PTH_FREE_ALL). 
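The ring behaviour of pth_event_concat(3), pth_event_isolate(3) and pth_event_walk(3) can be pictured with a minimal circular doubly-linked list. This is only an illustration of the ring semantics, not Pth's actual internal representation:

```c
#include <stddef.h>

/* Minimal circular doubly-linked list mimicking the *semantics* of
 * Pth event rings (illustrative sketch only). */
typedef struct ring_st { struct ring_st *prev, *next; int id; } ring_t;

/* single-element ring, as pth_event(3) creates */
void ring_init(ring_t *r, int id) { r->prev = r->next = r; r->id = id; }

/* splice ring b into ring a, as pth_event_concat(a, b, NULL) does */
void ring_concat(ring_t *a, ring_t *b)
{
    ring_t *an = a->next, *bp = b->prev;
    a->next = b;   b->prev = a;
    bp->next = an; an->prev = bp;
}

/* detach r from its ring, as pth_event_isolate(3): returns a remaining
 * element of the ring, or NULL when r was the only element */
ring_t *ring_isolate(ring_t *r)
{
    ring_t *rest = (r->next == r) ? NULL : r->next;
    r->prev->next = r->next;
    r->next->prev = r->prev;
    r->prev = r->next = r;
    return rest;
}

/* count elements by following next pointers, the direction
 * pth_event_walk(3) takes with PTH_WALK_NEXT */
int ring_count(ring_t *r)
{
    int n = 1;
    for (ring_t *p = r->next; p != r; p = p->next) n++;
    return n;
}
```

The same shape explains why pth_event_isolate(3) returns NULL for a single-event ring and why walking a ring never terminates by itself: the caller decides when to stop.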
</para> </listitem> </itemizedlist> </section> <section> <title>Key-Based Storage</title> <para> The following functions provide thread-local storage through unique keys similar to the POSIX Pthread API. Use this for thread-specific global data. </para> <itemizedlist> <listitem> <para> int pth_key_create(pth_key_t *key, void (*func)(void *)); </para> <para> This creates a new unique key and stores it in key. Additionally func can specify a destructor function which is called on the current thread's termination with the key. </para> </listitem> <listitem> <para> int pth_key_delete(pth_key_t key); </para> <para> This explicitly destroys the key key. </para> </listitem> <listitem> <para> int pth_key_setdata(pth_key_t key, const void *value); </para> <para> This stores value under key. </para> </listitem> <listitem> <para> void *pth_key_getdata(pth_key_t key); </para> <para> This retrieves the value stored under key. </para> </listitem> </itemizedlist> </section> <section> <title>Message Port Communication</title> <para> The following functions provide message ports which can be used for efficient and flexible inter-thread communication. </para> <itemizedlist> <listitem> <para> pth_msgport_t pth_msgport_create(const char *name); </para> <para> This returns a pointer to a new message port. If name is not NULL, the name can be used by other threads via pth_msgport_find(3) to find the message port in case they do not directly know the pointer to it. </para> </listitem> <listitem> <para> void pth_msgport_destroy(pth_msgport_t mp); </para> <para> This destroys a message port mp. Beforehand, all pending messages on it are replied to their origin message ports. </para> </listitem> <listitem> <para> pth_msgport_t pth_msgport_find(const char *name); </para> <para> This finds a message port in the system by name and returns the pointer to it. 
</para> </listitem> <listitem> <para> int pth_msgport_pending(pth_msgport_t mp); </para> <para> This returns the number of pending messages on message port mp. </para> </listitem> <listitem> <para> int pth_msgport_put(pth_msgport_t mp, pth_message_t *m); </para> <para> This puts (or sends) a message m to message port mp. </para> </listitem> <listitem> <para> pth_message_t *pth_msgport_get(pth_msgport_t mp); </para> <para> This gets (or receives) the top message from message port mp. Incoming messages are always kept in a queue, so there can be more pending messages, of course. </para> </listitem> <listitem> <para> int pth_msgport_reply(pth_message_t *m); </para> <para> This replies a message m, i.e., sends it back to the message port of the sender. </para> </listitem> </itemizedlist> </section> <section> <title>Thread Cleanups</title> <para> Per-thread cleanup functions. </para> <itemizedlist> <listitem> <para> int pth_cleanup_push(void (*handler)(void *), void *arg); </para> <para> This pushes the routine handler onto the stack of cleanup routines for the current thread. These routines are called in LIFO order when the thread terminates. </para> </listitem> <listitem> <para> int pth_cleanup_pop(int execute); </para> <para> This pops the top-most routine from the stack of cleanup routines for the current thread. When execute is TRUE the routine is additionally called. </para> </listitem> </itemizedlist> </section> <section> <title>Process Forking</title> <para> The following functions provide some special support for process forking situations inside the threading environment. </para> <itemizedlist> <listitem> <para> int pth_atfork_push(void (*prepare)(void *), void (*parent)(void *), void (*child)(void *), void *arg); </para> <para> This function declares forking handlers to be called before and after pth_fork(3), in the context of the thread that called pth_fork(3). The prepare handler is called before fork(2) processing commences. 
The parent handler is called after fork(2) processing completes in the parent process. The child handler is called after fork(2) processing completes in the child process. If no handling is desired at one or more of these three points, the corresponding handler can be given as NULL. Each handler is called with arg as the argument. </para> <para> The order of calls to pth_atfork_push(3) is significant. The parent and child handlers are called in the order in which they were established by calls to pth_atfork_push(3), i.e., FIFO. The prepare fork handlers are called in the opposite order, i.e., LIFO. </para> </listitem> <listitem> <para> int pth_atfork_pop(void); </para> <para> This removes the top-most handlers on the forking handler stack which were established with the last pth_atfork_push(3) call. It returns FALSE when no more handlers could be removed from the stack. </para> </listitem> <listitem> <para> pid_t pth_fork(void); </para> <para> This is a variant of fork(2) with the difference that only the current thread is forked into a separate process, i.e., in the parent process nothing changes while in the child process all threads are gone except for the scheduler and the calling thread. When you really want to duplicate all threads in the current process you should use fork(2) directly. But this is usually not reasonable. Additionally this function takes care of forking handlers as established by pth_atfork_push(3). </para> </listitem> </itemizedlist> </section> <section> <title>Synchronization</title> <para> The following functions provide synchronization support via mutual exclusion locks (mutex), read-write locks (rwlock), condition variables (cond) and barriers (barrier). Keep in mind that in a non-preemptive threading system like Pth this might sound unnecessary at first glance, because a thread isn't interrupted by the system. 
Actually when you have a critical code section which doesn't contain any pth_xxx() functions, you don't need any mutex to protect it, of course. </para> <para> But when your critical code section contains any pth_xxx() function the chance is high that these temporarily switch to the scheduler. And this way other threads can make progress and enter your critical code section, too. This is especially true for critical code sections which implicitly or explicitly use the event mechanism. </para> <itemizedlist> <listitem> <para> int pth_mutex_init(pth_mutex_t *mutex); </para> <para> This dynamically initializes a mutex variable of type `pth_mutex_t'. Alternatively one can also use static initialization via `pth_mutex_t mutex = PTH_MUTEX_INIT'. </para> </listitem> <listitem> <para> int pth_mutex_acquire(pth_mutex_t *mutex, int try, pth_event_t ev); </para> <para> This acquires a mutex mutex. If the mutex is already locked by another thread, the current thread's execution is suspended until the mutex is unlocked again or additionally the extra events in ev occurred (when ev is not NULL). Recursive locking is explicitly supported, i.e., a thread is allowed to acquire a mutex more than once before it is released. But it then also has to be released the same number of times until the mutex is again lockable by others. When try is TRUE this function never suspends execution. Instead it returns FALSE with errno set to EBUSY. </para> </listitem> <listitem> <para> int pth_mutex_release(pth_mutex_t *mutex); </para> <para> This decrements the recursive locking count on mutex and, when it reaches zero, releases the mutex mutex. </para> </listitem> <listitem> <para> int pth_rwlock_init(pth_rwlock_t *rwlock); </para> <para> This dynamically initializes a read-write lock variable of type `pth_rwlock_t'. Alternatively one can also use static initialization via `pth_rwlock_t rwlock = PTH_RWLOCK_INIT'. 
</para> </listitem> <listitem> <para> int pth_rwlock_acquire(pth_rwlock_t *rwlock, int op, int try, pth_event_t ev); </para> <para> This acquires a read-only (when op is PTH_RWLOCK_RD) or a read-write (when op is PTH_RWLOCK_RW) lock rwlock. When the lock is only locked by other threads in read-only mode, the lock succeeds. But when one thread holds a read-write lock, all locking attempts suspend the current thread until this lock is released again. Additionally events can be given in ev to let the locking time out, etc. When try is TRUE this function never suspends execution. Instead it returns FALSE with errno set to EBUSY. </para> </listitem> <listitem> <para> int pth_rwlock_release(pth_rwlock_t *rwlock); </para> <para> This releases a previously acquired (read-only or read-write) lock. </para> </listitem> <listitem> <para> int pth_cond_init(pth_cond_t *cond); </para> <para> This dynamically initializes a condition variable of type `pth_cond_t'. Alternatively one can also use static initialization via `pth_cond_t cond = PTH_COND_INIT'. </para> </listitem> <listitem> <para> int pth_cond_await(pth_cond_t *cond, pth_mutex_t *mutex, pth_event_t ev); </para> <para> This awaits a condition situation. The caller has to follow the semantics of the POSIX condition variables: mutex has to be acquired before this function is called. The execution of the current thread is then suspended either until the events in ev occurred (when ev is not NULL) or cond was notified by another thread via pth_cond_notify(3). While the thread is waiting, mutex is released. Before it returns, mutex is reacquired. </para> </listitem> <listitem> <para> int pth_cond_notify(pth_cond_t *cond, int broadcast); </para> <para> This notifies one or all threads which are waiting on cond. When broadcast is TRUE all threads are notified, else only a single (unspecified) one. 
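Since pth_cond_await(3) explicitly follows the semantics of POSIX condition variables, the canonical lock/re-check/wait/unlock pattern carries over one-to-one. The sketch below shows it with plain pthreads as an analogy (the comments note the corresponding Pth calls); it is not Pth code:

```c
#include <pthread.h>
#include <stddef.h>

/* Canonical condition-variable pattern: hold the mutex, re-check the
 * predicate in a loop, wait, unlock.  With Pth the calls map to
 * pth_mutex_acquire(3), pth_cond_await(3) and pth_cond_notify(3). */
static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
static int done = 0;

static void *producer(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);     /* cf. pth_mutex_acquire(&m, FALSE, NULL) */
    done = 1;                      /* make the condition true */
    pthread_cond_signal(&ready);   /* cf. pth_cond_notify(&c, FALSE) */
    pthread_mutex_unlock(&lock);   /* cf. pth_mutex_release(&m) */
    return NULL;
}

int wait_for_done(void)
{
    pthread_t t;
    pthread_create(&t, NULL, producer, NULL);
    pthread_mutex_lock(&lock);     /* mutex must be held before awaiting */
    while (!done)                  /* loop: the wakeup may be spurious */
        pthread_cond_wait(&ready, &lock); /* cf. pth_cond_await(&c, &m, NULL) */
    pthread_mutex_unlock(&lock);   /* wait() reacquired the mutex for us */
    pthread_join(t, NULL);
    return done;
}
```

Note that the wait call releases the mutex while sleeping and reacquires it before returning, exactly as the pth_cond_await(3) description above states.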
</para> </listitem> <listitem> <para> int pth_barrier_init(pth_barrier_t *barrier, int threshold); </para> <para> This dynamically initializes a barrier variable of type `pth_barrier_t'. Alternatively one can also use static initialization via `pth_barrier_t barrier = PTH_BARRIER_INIT(threshold)'. </para> </listitem> <listitem> <para> int pth_barrier_reach(pth_barrier_t *barrier); </para> <para> This function reaches a barrier barrier. If this is the last thread (as specified by threshold on init of barrier) all threads are awakened. Else the current thread is suspended until the last thread reaches the barrier and thereby awakens all threads. The function returns (besides FALSE on error) the value TRUE for any thread which reached the barrier as neither the first nor the last thread; PTH_BARRIER_HEADLIGHT for the thread which reached the barrier as the first thread and PTH_BARRIER_TAILLIGHT for the thread which reached the barrier as the last thread. </para> </listitem> </itemizedlist> </section> <section> <title>User-Space Context</title> <para> The following functions provide a stand-alone sub-API for user-space context switching. It is internally based on the same underlying machine context switching mechanism the threads in GNU Pth are based on. Hence you can use these functions to implement your own simple user-space threads. The pth_uctx_t context is somewhat modeled after POSIX ucontext(3). </para> <para> The time required to create (via pth_uctx_make(3)) a user-space context can range from just a few microseconds up to a more dramatic amount of time (depending on the machine context switching method which is available on the platform). On the other hand, the raw performance in switching the user-space contexts is always very good (nearly independent of the used machine context switching method). 
For instance, on an Intel Pentium-III CPU with 800 MHz running under FreeBSD 4 one usually achieves about 260,000 user-space context switches (via pth_uctx_switch(3)) per second. </para> <itemizedlist> <listitem> <para> int pth_uctx_create(pth_uctx_t *uctx); </para> <para> This function creates a user-space context and stores it into uctx. There is still no underlying user-space context configured. You still have to do this with pth_uctx_make(3) or pth_uctx_set(3). On success, this function returns TRUE, else FALSE. </para> </listitem> <listitem> <para> int pth_uctx_make(pth_uctx_t uctx, char *sk_addr, size_t sk_size, const sigset_t *sigmask, void (*start_func)(void *), void *start_arg, pth_uctx_t uctx_after); </para> <para> This function makes a new user-space context in uctx which will operate on the run-time stack sk_addr (which is of maximum size sk_size), with the signals in sigmask blocked (if sigmask is not NULL) and starting to execute with the call start_func(start_arg). If sk_addr is NULL, a stack is dynamically allocated. The stack size sk_size has to be at least 16384 (16KB). If the start function start_func returns and uctx_after is not NULL, an implicit user-space context switch to this context is performed. Else (if uctx_after is NULL) the process is terminated with exit(3). This function is somewhat modeled after POSIX makecontext(3). On success, this function returns TRUE, else FALSE. </para> </listitem> <listitem> <para> int pth_uctx_save(pth_uctx_t uctx); </para> <para> This function saves the current user-space context in uctx for later restoring by either pth_uctx_restore(3) or pth_uctx_switch(3). This function is somewhat modeled after POSIX getcontext(3). If uctx is NULL, FALSE is returned instead of TRUE. This is the only error possible. 
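Because pth_uctx_make(3), pth_uctx_save(3) and pth_uctx_switch(3) are modeled after POSIX makecontext(3), getcontext(3) and swapcontext(3), the intended usage pattern can be sketched with plain ucontext(3) calls. This is an analogy under that assumption, not Pth code:

```c
#include <ucontext.h>

/* Sketch of the pattern the pth_uctx_* functions are modeled after,
 * using plain POSIX ucontext(3). */
static ucontext_t main_ctx, work_ctx;
static char work_stack[64 * 1024];  /* Pth likewise requires >= 16384 bytes */
static int steps = 0;

static void worker(void)
{
    steps++;  /* runs on work_stack */
}   /* on return, control falls back to uc_link (cf. the uctx_after argument) */

int run_once(void)
{
    getcontext(&work_ctx);              /* cf. pth_uctx_save(3)    */
    work_ctx.uc_stack.ss_sp = work_stack;
    work_ctx.uc_stack.ss_size = sizeof(work_stack);
    work_ctx.uc_link = &main_ctx;       /* cf. uctx_after          */
    makecontext(&work_ctx, worker, 0);  /* cf. pth_uctx_make(3)    */
    swapcontext(&main_ctx, &work_ctx);  /* cf. pth_uctx_switch(3)  */
    return steps;                       /* worker has run exactly once */
}
```

The uc_link field plays the role of the uctx_after argument described above: when the start function returns, execution implicitly switches back to the linked context instead of terminating the process.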
</para> </listitem> <listitem> <para> int pth_uctx_restore(pth_uctx_t uctx); </para> <para> This function restores the current user-space context from uctx, which previously had to be set with either pth_uctx_make(3) or pth_uctx_save(3). This function is somewhat modeled after POSIX setcontext(3). If uctx is NULL or uctx contains no valid user-space context, FALSE is returned instead of TRUE. These are the only errors possible. </para> </listitem> <listitem> <para> int pth_uctx_switch(pth_uctx_t uctx_from, pth_uctx_t uctx_to); </para> <para> This function saves the current user-space context in uctx_from for later restoring by either pth_uctx_restore(3) or pth_uctx_switch(3) and restores the new user-space context from uctx_to, which previously had to be set with either pth_uctx_make(3) or pth_uctx_save(3). This function is somewhat modeled after POSIX swapcontext(3). If uctx_from or uctx_to are NULL or if uctx_to contains no valid user-space context, FALSE is returned instead of TRUE. These are the only errors possible. </para> </listitem> <listitem> <para> int pth_uctx_destroy(pth_uctx_t uctx); </para> <para> This function destroys the user-space context in uctx. The run-time stack associated with the user-space context is deallocated only if it was not given by the application (see sk_addr of pth_uctx_make(3)). If uctx is NULL, FALSE is returned instead of TRUE. This is the only error possible. </para> </listitem> </itemizedlist> </section> <section> <title>Generalized POSIX Replacement API</title> <para> The following functions are generalized replacement functions for the POSIX API, i.e., they are similar to the functions under `Standard POSIX Replacement API' but all have an additional event argument which can be used for timeouts, etc. </para> <itemizedlist> <listitem> <para> int pth_sigwait_ev(const sigset_t *set, int *sig, pth_event_t ev); </para> <para> This is equal to pth_sigwait(3) (see below), but has an additional event argument ev. 
When pth_sigwait(3) suspends the current thread's execution it usually only uses the signal event on set to awake. With this function any number of extra events can be used to awake the current thread (remember that ev actually is an event ring). </para> </listitem> <listitem> <para> int pth_connect_ev(int s, const struct sockaddr *addr, socklen_t addrlen, pth_event_t ev); </para> <para> This is equal to pth_connect(3) (see below), but has an additional event argument ev. When pth_connect(3) suspends the current thread's execution it usually only uses the I/O event on s to awake. With this function any number of extra events can be used to awake the current thread (remember that ev actually is an event ring). </para> </listitem> <listitem> <para> int pth_accept_ev(int s, struct sockaddr *addr, socklen_t *addrlen, pth_event_t ev); </para> <para> This is equal to pth_accept(3) (see below), but has an additional event argument ev. When pth_accept(3) suspends the current thread's execution it usually only uses the I/O event on s to awake. With this function any number of extra events can be used to awake the current thread (remember that ev actually is an event ring). </para> </listitem> <listitem> <para> int pth_select_ev(int nfd, fd_set *rfds, fd_set *wfds, fd_set *efds, struct timeval *timeout, pth_event_t ev); </para> <para> This is equal to pth_select(3) (see below), but has an additional event argument ev. When pth_select(3) suspends the current thread's execution it usually only uses the I/O event on rfds, wfds and efds to awake. With this function any number of extra events can be used to awake the current thread (remember that ev actually is an event ring). </para> </listitem> <listitem> <para> int pth_poll_ev(struct pollfd *fds, unsigned int nfd, int timeout, pth_event_t ev); </para> <para> This is equal to pth_poll(3) (see below), but has an additional event argument ev. 
When pth_poll(3) suspends the current thread's execution it usually only uses the I/O event on fds to awake. With this function any number of extra events can be used to awake the current thread (remember that ev actually is an event ring). </para> </listitem> <listitem> <para> ssize_t pth_read_ev(int fd, void *buf, size_t nbytes, pth_event_t ev); </para> <para> This is equal to pth_read(3) (see below), but has an additional event argument ev. When pth_read(3) suspends the current thread's execution it usually only uses the I/O event on fd to awake. With this function any number of extra events can be used to awake the current thread (remember that ev actually is an event ring). </para> </listitem> <listitem> <para> ssize_t pth_readv_ev(int fd, const struct iovec *iovec, int iovcnt, pth_event_t ev); </para> <para> This is equal to pth_readv(3) (see below), but has an additional event argument ev. When pth_readv(3) suspends the current thread's execution it usually only uses the I/O event on fd to awake. With this function any number of extra events can be used to awake the current thread (remember that ev actually is an event ring). </para> </listitem> <listitem> <para> ssize_t pth_write_ev(int fd, const void *buf, size_t nbytes, pth_event_t ev); </para> <para> This is equal to pth_write(3) (see below), but has an additional event argument ev. When pth_write(3) suspends the current thread's execution it usually only uses the I/O event on fd to awake. With this function any number of extra events can be used to awake the current thread (remember that ev actually is an event ring). </para> </listitem> <listitem> <para> ssize_t pth_writev_ev(int fd, const struct iovec *iovec, int iovcnt, pth_event_t ev); </para> <para> This is equal to pth_writev(3) (see below), but has an additional event argument ev. When pth_writev(3) suspends the current thread's execution it usually only uses the I/O event on fd to awake. 
With this function any number of extra events can be used to awake the current thread (remember that ev actually is an event ring). </para> </listitem> <listitem> <para> ssize_t pth_recv_ev(int fd, void *buf, size_t nbytes, int flags, pth_event_t ev); </para> <para> This is equal to pth_recv(3) (see below), but has an additional event argument ev. When pth_recv(3) suspends the current thread's execution it usually only uses the I/O event on fd to awake. With this function any number of extra events can be used to awake the current thread (remember that ev actually is an event ring). </para> </listitem> <listitem> <para> ssize_t pth_recvfrom_ev(int fd, void *buf, size_t nbytes, int flags, struct sockaddr *from, socklen_t *fromlen, pth_event_t ev); </para> <para> This is equal to pth_recvfrom(3) (see below), but has an additional event argument ev. When pth_recvfrom(3) suspends the current thread's execution it usually only uses the I/O event on fd to awake. With this function any number of extra events can be used to awake the current thread (remember that ev actually is an event ring). </para> </listitem> <listitem> <para> ssize_t pth_send_ev(int fd, const void *buf, size_t nbytes, int flags, pth_event_t ev); </para> <para> This is equal to pth_send(3) (see below), but has an additional event argument ev. When pth_send(3) suspends the current thread's execution it usually only uses the I/O event on fd to awake. With this function any number of extra events can be used to awake the current thread (remember that ev actually is an event ring). </para> </listitem> <listitem> <para> ssize_t pth_sendto_ev(int fd, const void *buf, size_t nbytes, int flags, const struct sockaddr *to, socklen_t tolen, pth_event_t ev); </para> <para> This is equal to pth_sendto(3) (see below), but has an additional event argument ev. When pth_sendto(3) suspends the current thread's execution it usually only uses the I/O event on fd to awake. 
With this function any number of extra events can be used to awake the current thread (remember that ev actually is an event ring). </para> </listitem> </itemizedlist> </section> <section> <title>Standard POSIX Replacement API</title> <para> The following functions are standard replacement functions for the POSIX API. The difference is mainly that they suspend only the current thread instead of the whole process in case the file descriptor would block. </para> <itemizedlist> <listitem> <para> int pth_nanosleep(const struct timespec *rqtp, struct timespec *rmtp); </para> <para> This is a variant of the POSIX nanosleep(3) function. It suspends the current thread's execution until the amount of time in rqtp has elapsed. The thread is guaranteed not to wake up before this time, but because of the non-preemptive scheduling nature of Pth, it can be awakened later, of course. If rmtp is not NULL, the timespec structure it references is updated to contain the unslept amount (the requested time minus the time actually slept). The difference between nanosleep(3) and pth_nanosleep(3) is that pth_nanosleep(3) suspends only the execution of the current thread and not the whole process. </para> </listitem> <listitem> <para> int pth_usleep(unsigned int usec); </para> <para> This is a variant of the 4.3BSD usleep(3) function. It suspends the current thread's execution until usec microseconds (= usec*1/1000000 sec) have elapsed. The thread is guaranteed not to wake up before this time, but because of the non-preemptive scheduling nature of Pth, it can be awakened later, of course. The difference between usleep(3) and pth_usleep(3) is that pth_usleep(3) suspends only the execution of the current thread and not the whole process. </para> </listitem> <listitem> <para> unsigned int pth_sleep(unsigned int sec); </para> <para> This is a variant of the POSIX sleep(3) function. It suspends the current thread's execution until sec seconds have elapsed. 
The thread is guaranteed not to wake up before this time, but because of the non-preemptive scheduling nature of Pth, it can be awakened later, of course. The difference between sleep(3) and pth_sleep(3) is that pth_sleep(3) suspends only the execution of the current thread and not the whole process. </para> </listitem> <listitem> <para> pid_t pth_waitpid(pid_t pid, int *status, int options); </para> <para> This is a variant of the POSIX waitpid(2) function. It suspends the current thread's execution until status information is available for a terminated child process pid. The difference between waitpid(2) and pth_waitpid(3) is that pth_waitpid(3) suspends only the execution of the current thread and not the whole process. For more details about the arguments and return code semantics see waitpid(2). </para> </listitem> <listitem> <para> int pth_system(const char *cmd); </para> <para> This is a variant of the POSIX system(3) function. It executes the shell command cmd with the Bourne Shell (sh) and suspends the current thread's execution until this command terminates. The difference between system(3) and pth_system(3) is that pth_system(3) suspends only the execution of the current thread and not the whole process. For more details about the arguments and return code semantics see system(3). </para> </listitem> <listitem> <para> int pth_sigmask(int how, const sigset_t *set, sigset_t *oset) </para> <para> This is the Pth thread-related equivalent of POSIX sigprocmask(2) and pthread_sigmask(3). The arguments how, set and oset directly relate to sigprocmask(2), because Pth internally just uses sigprocmask(2) here. So alternatively you can also directly call sigprocmask(2), but for consistency reasons you should use pth_sigmask(3). </para> </listitem> <listitem> <para> int pth_sigwait(const sigset_t *set, int *sig); </para> <para> This is a variant of the POSIX.1c sigwait(3) function. 
It suspends the current thread's execution until a signal in set occurs and stores the signal number in sig. The important point is that the signal is not delivered to a signal handler. Instead it's caught by the scheduler only in order to awake the pth_sigwait() call. The trick and noticeable point here is that this way you get an application that is aware of asynchronous events yet written completely synchronously. When you think about the problem of asynchronous safe functions you should recognize that this is a great benefit. </para> </listitem> <listitem> <para> int pth_connect(int s, const struct sockaddr *addr, socklen_t addrlen); </para> <para> This is a variant of the 4.2BSD connect(2) function. It establishes a connection on a socket s to the target specified in addr and addrlen. The difference between connect(2) and pth_connect(3) is that pth_connect(3) suspends only the execution of the current thread and not the whole process. For more details about the arguments and return code semantics see connect(2). </para> </listitem> <listitem> <para> int pth_accept(int s, struct sockaddr *addr, socklen_t *addrlen); </para> <para> This is a variant of the 4.2BSD accept(2) function. It accepts a connection on a socket by extracting the first connection request on the queue of pending connections, creating a new socket with the same properties as s, and allocating a new file descriptor for the socket (which is returned). The difference between accept(2) and pth_accept(3) is that pth_accept(3) suspends only the execution of the current thread and not the whole process. For more details about the arguments and return code semantics see accept(2). </para> </listitem> <listitem> <para> int pth_select(int nfd, fd_set *rfds, fd_set *wfds, fd_set *efds, struct timeval *timeout); </para> <para> This is a variant of the 4.2BSD select(2) function. 
It examines the I/O descriptor sets whose addresses are passed in rfds, wfds, and efds to see if some of their descriptors are ready for reading, are ready for writing, or have an exceptional condition pending, respectively. For more details about the arguments and return code semantics see select(2). </para> </listitem> <listitem> <para> int pth_pselect(int nfd, fd_set *rfds, fd_set *wfds, fd_set *efds, const struct timespec *timeout, const sigset_t *sigmask); </para> <para> This is a variant of the POSIX pselect(2) function, which in turn is a stronger variant of 4.2BSD select(2). The difference is that the higher-resolution struct timespec is passed instead of the lower-resolution struct timeval and that a signal mask is specified which is temporarily set while waiting for input. For more details about the arguments and return code semantics see pselect(2) and select(2). </para> </listitem> <listitem> <para> int pth_poll(struct pollfd *fds, unsigned int nfd, int timeout); </para> <para> This is a variant of the SysV poll(2) function. It examines the I/O descriptors which are passed in the array fds to see if some of them are ready for reading, are ready for writing, or have an exceptional condition pending, respectively. For more details about the arguments and return code semantics see poll(2). </para> </listitem> <listitem> <para> ssize_t pth_read(int fd, void *buf, size_t nbytes); </para> <para> This is a variant of the POSIX read(2) function. It reads up to nbytes bytes into buf from file descriptor fd. The difference between read(2) and pth_read(3) is that pth_read(3) suspends execution of the current thread until the file descriptor is ready for reading. For more details about the arguments and return code semantics see read(2). </para> </listitem> <listitem> <para> ssize_t pth_readv(int fd, const struct iovec *iovec, int iovcnt); </para> <para> This is a variant of the POSIX readv(2) function. 
It reads data from file descriptor fd into the first iovcnt rows of the iovec vector. The difference between readv(2) and pth_readv(3) is that pth_readv(3) suspends execution of the current thread until the file descriptor is ready for reading. For more details about the arguments and return code semantics see readv(2). </para> </listitem> <listitem> <para> ssize_t pth_write(int fd, const void *buf, size_t nbytes); </para> <para> This is a variant of the POSIX write(2) function. It writes nbytes bytes from buf to file descriptor fd. The difference between write(2) and pth_write(3) is that pth_write(3) suspends execution of the current thread until the file descriptor is ready for writing. For more details about the arguments and return code semantics see write(2). </para> </listitem> <listitem> <para> ssize_t pth_writev(int fd, const struct iovec *iovec, int iovcnt); </para> <para> This is a variant of the POSIX writev(2) function. It writes data to file descriptor fd from the first iovcnt rows of the iovec vector. The difference between writev(2) and pth_writev(3) is that pth_writev(3) suspends execution of the current thread until the file descriptor is ready for writing. For more details about the arguments and return code semantics see writev(2). </para> </listitem> <listitem> <para> ssize_t pth_pread(int fd, void *buf, size_t nbytes, off_t offset); </para> <para> This is a variant of the POSIX pread(3) function. It performs the same action as a regular read(2), except that it reads from a given position in the file without changing the file pointer. The first three arguments are the same as for pth_read(3) with the addition of a fourth argument offset for the desired position inside the file. </para> </listitem> <listitem> <para> ssize_t pth_pwrite(int fd, const void *buf, size_t nbytes, off_t offset); </para> <para> This is a variant of the POSIX pwrite(3) function. 
It performs the same action as a regular write(2), except that it writes to a given position in the file without changing the file pointer. The first three arguments are the same as for pth_write(3) with the addition of a fourth argument offset for the desired position inside the file. </para> </listitem> <listitem> <para> ssize_t pth_recv(int fd, void *buf, size_t nbytes, int flags); </para> <para> This is a variant of the SUSv2 recv(2) function and equal to ``pth_recvfrom(fd, buf, nbytes, flags, NULL, 0)''. </para> </listitem> <listitem> <para> ssize_t pth_recvfrom(int fd, void *buf, size_t nbytes, int flags, struct sockaddr *from, socklen_t *fromlen); </para> <para> This is a variant of the SUSv2 recvfrom(2) function. It reads up to nbytes bytes into buf from file descriptor fd while using flags and from/fromlen. The difference between recvfrom(2) and pth_recvfrom(2) is that pth_recvfrom(2) suspends execution of the current thread until the file descriptor is ready for reading. For more details about the arguments and return code semantics see recvfrom(2). </para> </listitem> <listitem> <para> ssize_t pth_send(int fd, const void *buf, size_t nbytes, int flags); </para> <para> This is a variant of the SUSv2 send(2) function and equal to ``pth_sendto(fd, buf, nbytes, flags, NULL, 0)''. </para> </listitem> <listitem> <para> ssize_t pth_sendto(int fd, const void *buf, size_t nbytes, int flags, const struct sockaddr *to, socklen_t tolen); </para> <para> This is a variant of the SUSv2 sendto(2) function. It writes nbytes bytes from buf to file descriptor fd while using flags and to/tolen. The difference between sendto(2) and pth_sendto(2) is that pth_sendto(2) suspends execution of the current thread until the file descriptor is ready for writing. For more details about the arguments and return code semantics see sendto(2). 
</para> </listitem> </itemizedlist> </section> </section> <section> <title>EXAMPLE</title> <para> The following example is a useless server which does nothing more than listening on TCP port 12345 and displaying the current time to the socket when a connection is established. For each incoming connection a thread is spawned. Additionally, to see more multithreading, a useless ticker thread runs simultaneously which outputs the current time to stderr every 5 seconds. The example contains no error checking and is only intended to show you the look and feel of Pth. </para> <para><screen>
/* the socket connection handler thread */
static void *handler(void *_arg)
{
    int fd = (int)_arg;
    time_t now;
    char *ct;

    now = time(NULL);
    ct = ctime(&now);
    pth_write(fd, ct, strlen(ct));
    close(fd);
    return NULL;
}

/* the stderr time ticker thread */
static void *ticker(void *_arg)
{
    time_t now;
    char *ct;
    float load;

    for (;;) {
        pth_sleep(5);
        now = time(NULL);
        ct = ctime(&now);
        ct[strlen(ct)-1] = '\0';
        pth_ctrl(PTH_CTRL_GETAVLOAD, &load);
        fprintf(stderr, "ticker: time: %s, average load: %.2f\n", ct, load);
    }
    /* NOTREACHED */
    return NULL;
}

/* the main thread/procedure */
int main(int argc, char *argv[])
{
    pth_attr_t attr;
    struct sockaddr_in sar;
    struct protoent *pe;
    struct sockaddr_in peer_addr;
    socklen_t peer_len;
    int sa, sw;
    int port = 12345;

    pth_init();
    signal(SIGPIPE, SIG_IGN);

    attr = pth_attr_new();
    pth_attr_set(attr, PTH_ATTR_NAME, "ticker");
    pth_attr_set(attr, PTH_ATTR_STACK_SIZE, 64*1024);
    pth_attr_set(attr, PTH_ATTR_JOINABLE, FALSE);
    pth_spawn(attr, ticker, NULL);

    pe = getprotobyname("tcp");
    sa = socket(AF_INET, SOCK_STREAM, pe->p_proto);
    sar.sin_family = AF_INET;
    sar.sin_addr.s_addr = INADDR_ANY;
    sar.sin_port = htons(port);
    bind(sa, (struct sockaddr *)&sar, sizeof(struct sockaddr_in));
    listen(sa, 10);

    pth_attr_set(attr, PTH_ATTR_NAME, "handler");
    for (;;) {
        peer_len = sizeof(peer_addr);
        sw = pth_accept(sa, (struct sockaddr *)&peer_addr, &peer_len);
        pth_spawn(attr, handler, (void *)sw);
    }
}
</screen></para> </section> <section> <title>BUILD ENVIRONMENTS</title> <para> In this section we will discuss the canonical ways to establish the build environment for a Pth based program. The possibilities supported by Pth range from very simple environments to rather complex ones. </para> <section> <title>Manual Build Environment (Novice)</title> <para> As a first example, assume we have the above test program staying in the source file foo.c. Then we can create a very simple build environment by just adding the following Makefile: </para> <para><screen>
$ vi Makefile
| CC      = cc
| CFLAGS  = `pth-config --cflags`
| LDFLAGS = `pth-config --ldflags`
| LIBS    = `pth-config --libs`
|
| all: foo
| foo: foo.o
|     $(CC) $(LDFLAGS) -o foo foo.o $(LIBS)
| foo.o: foo.c
|     $(CC) $(CFLAGS) -c foo.c
| clean:
|     rm -f foo foo.o
</screen></para> <para> This imports the necessary compiler and linker flags on-the-fly from the Pth installation via its pth-config program. This approach is straight-forward and works fine for small projects. </para> </section> <section> <title>Autoconf Build Environment (Advanced)</title> <para> The previous approach is simple but inflexible. First, to speed up building, it would be nice to not expand the compiler and linker flags every time the compiler is started. Second, it would also be useful to be able to build against an uninstalled Pth, that is, against a Pth source tree which was just configured and built, but not installed. 
Third, it would also be useful to allow checking of the Pth version to make sure it is at least a minimum required version. And finally, it would also be great to make sure Pth works correctly by first performing some sanity compile and run-time checks. All this can be done if we use GNU autoconf and the AC_CHECK_PTH macro provided by Pth. For this, we establish the following three files: </para> <para> First we again need the Makefile, but this time it contains autoconf placeholders and additional cleanup targets. And we create it under the name Makefile.in, because it is now an input file for autoconf: </para> <para><screen>
$ vi Makefile.in
| CC      = @CC@
| CFLAGS  = @CFLAGS@
| LDFLAGS = @LDFLAGS@
| LIBS    = @LIBS@
|
| all: foo
| foo: foo.o
|     $(CC) $(LDFLAGS) -o foo foo.o $(LIBS)
| foo.o: foo.c
|     $(CC) $(CFLAGS) -c foo.c
| clean:
|     rm -f foo foo.o
| distclean:
|     rm -f foo foo.o
|     rm -f config.log config.status config.cache
|     rm -f Makefile
</screen></para> <para> Because autoconf generates additional files, we added a canonical distclean target which cleans these up. Secondly, we wrote configure.ac, a (minimal) autoconf script specification: </para> <para><screen>
$ vi configure.ac
| AC_INIT(Makefile.in)
| AC_CHECK_PTH(1.3.0)
| AC_OUTPUT(Makefile)
</screen></para> <para> Then we let autoconf's aclocal program generate for us an aclocal.m4 file containing Pth's AC_CHECK_PTH macro. 
Then we generate the final configure script out of this aclocal.m4 file and the configure.ac file: </para> <para><screen>
$ aclocal --acdir=`pth-config --acdir`
$ autoconf
</screen></para> <para> After these steps, the working directory should look similar to this: </para> <para><screen>
$ ls -l
-rw-r--r-- 1 rse users   176 Nov  3 11:11 Makefile.in
-rw-r--r-- 1 rse users 15314 Nov  3 11:16 aclocal.m4
-rwxr-xr-x 1 rse users 52045 Nov  3 11:16 configure
-rw-r--r-- 1 rse users    63 Nov  3 11:11 configure.ac
-rw-r--r-- 1 rse users  4227 Nov  3 11:11 foo.c
</screen></para> <para> If we now run configure we get a correct Makefile which can immediately be used to build foo (assuming that Pth is already installed somewhere, so that pth-config is in $PATH): </para> <para><screen>
$ ./configure
creating cache ./config.cache
checking for gcc... gcc
checking whether the C compiler (gcc ) works... yes
checking whether the C compiler (gcc ) is a cross-compiler... no
checking whether we are using GNU C... yes
checking whether gcc accepts -g... yes
checking how to run the C preprocessor... gcc -E
checking for GNU Pth... version 1.3.0, installed under /usr/local
updating cache ./config.cache
creating ./config.status
creating Makefile
$ make
gcc -g -O2 -I/usr/local/include -c foo.c
gcc -L/usr/local/lib -o foo foo.o -lpth
</screen></para> <para> If Pth is installed in a non-standard location or pth-config is not in $PATH, one just has to give the configure script a hint about the location by running configure with the option --with-pth=dir (where dir is the argument which was used with the --prefix option when Pth was installed). </para> </section> <section> <title>Autoconf Build Environment with Local Copy of Pth (Expert)</title> <para> Finally let us assume the foo program stays under either a GPL or LGPL distribution license and we want to make it a stand-alone package for easier distribution and installation. 
That is, we don't want to oblige the end-user to install Pth just to allow our foo package to compile. For this, it is a convenient practice to include the required libraries (here Pth) into the source tree of the package (here foo). Pth ships with all necessary support to allow us to easily achieve this approach. Say we want Pth in a subdirectory named pth/ and this directory should be seamlessly integrated into the configuration and build process of foo. </para> <para> First we again start with the Makefile.in, but this time it is a more advanced version which supports subdirectory movement: </para> <para><screen>
$ vi Makefile.in
| CC      = @CC@
| CFLAGS  = @CFLAGS@
| LDFLAGS = @LDFLAGS@
| LIBS    = @LIBS@
|
| SUBDIRS = pth
|
| all: subdirs_all foo
|
| subdirs_all:
|     @$(MAKE) $(MFLAGS) subdirs TARGET=all
| subdirs_clean:
|     @$(MAKE) $(MFLAGS) subdirs TARGET=clean
| subdirs_distclean:
|     @$(MAKE) $(MFLAGS) subdirs TARGET=distclean
| subdirs:
|     @for subdir in $(SUBDIRS); do \
|         echo "===> $$subdir ($(TARGET))"; \
|         (cd $$subdir; $(MAKE) $(MFLAGS) $(TARGET) || exit 1) || exit 1; \
|         echo "<=== $$subdir"; \
|     done
|
| foo: foo.o
|     $(CC) $(LDFLAGS) -o foo foo.o $(LIBS)
| foo.o: foo.c
|     $(CC) $(CFLAGS) -c foo.c
|
| clean: subdirs_clean
|     rm -f foo foo.o
| distclean: subdirs_distclean
|     rm -f foo foo.o
|     rm -f config.log config.status config.cache
|     rm -f Makefile
</screen></para> <para> Then we create a slightly different autoconf script configure.ac: </para> <para><screen>
$ vi configure.ac
| AC_INIT(Makefile.in)
| AC_CONFIG_AUX_DIR(pth)
| AC_CHECK_PTH(1.3.0, subdir:pth --disable-tests)
| AC_CONFIG_SUBDIRS(pth)
| AC_OUTPUT(Makefile)
</screen></para> <para> Here we provided a default value for foo's --with-pth option as the second argument to AC_CHECK_PTH which indicates that Pth can be found in the subdirectory named pth/. 
Additionally we specified that the --disable-tests option of Pth should be passed to the pth/ subdirectory, because we only need to build the Pth library itself. And we added an AC_CONFIG_SUBDIRS call which indicates to autoconf that it should configure the pth/ subdirectory, too. The AC_CONFIG_AUX_DIR directive was added just to make autoconf happy, because it wants to find an install.sh or shtool script if AC_CONFIG_SUBDIRS is used. </para> <para> Now we let autoconf's aclocal program again generate for us an aclocal.m4 file with the contents of Pth's AC_CHECK_PTH macro. Finally we generate the configure script out of this aclocal.m4 file and the configure.ac file. </para> <para><screen>
$ aclocal --acdir=`pth-config --acdir`
$ autoconf
</screen></para> <para> Now we have to create the pth/ subdirectory itself. For this, we extract the Pth distribution to the foo source tree and just rename it to pth/: </para> <para><screen>
$ gunzip < pth-X.Y.Z.tar.gz | tar xvf -
$ mv pth-X.Y.Z pth
</screen></para> <para> Optionally, to reduce the size of the pth/ subdirectory, we can strip the Pth sources down to a minimum with the striptease feature: </para> <para><screen>
$ cd pth
$ ./configure
$ make striptease
$ cd ..
</screen></para> <para> After this the source tree of foo should look similar to this: </para> <para><screen>
$ ls -l
-rw-r--r-- 1 rse users   709 Nov  3 11:51 Makefile.in
-rw-r--r-- 1 rse users 16431 Nov  3 12:20 aclocal.m4
-rwxr-xr-x 1 rse users 57403 Nov  3 12:21 configure
-rw-r--r-- 1 rse users   129 Nov  3 12:21 configure.ac
-rw-r--r-- 1 rse users  4227 Nov  3 11:11 foo.c
drwxr-xr-x 2 rse users  3584 Nov  3 12:36 pth
$ ls -l pth/
-rw-rw-r-- 1 rse users  26344 Nov  1 20:12 COPYING
-rw-rw-r-- 1 rse users   2042 Nov  3 12:36 Makefile.in
-rw-rw-r-- 1 rse users   3967 Nov  1 19:48 README
-rw-rw-r-- 1 rse users    340 Nov  3 12:36 README.1st
-rw-rw-r-- 1 rse users  28719 Oct 31 17:06 config.guess
-rw-rw-r-- 1 rse users  24274 Aug 18 13:31 config.sub
-rwxrwxr-x 1 rse users 155141 Nov  3 12:36 configure
-rw-rw-r-- 1 rse users 162021 Nov  3 12:36 pth.c
-rw-rw-r-- 1 rse users  18687 Nov  2 15:19 pth.h.in
-rw-rw-r-- 1 rse users   5251 Oct 31 12:46 pth_acdef.h.in
-rw-rw-r-- 1 rse users   2120 Nov  1 11:27 pth_acmac.h.in
-rw-rw-r-- 1 rse users   2323 Nov  1 11:27 pth_p.h.in
-rw-rw-r-- 1 rse users    946 Nov  1 11:27 pth_vers.c
-rw-rw-r-- 1 rse users  26848 Nov  1 11:27 pthread.c
-rw-rw-r-- 1 rse users  18772 Nov  1 11:27 pthread.h.in
-rwxrwxr-x 1 rse users  26188 Nov  3 12:36 shtool
</screen></para> <para> Now when we configure and build the foo package it looks similar to this: </para> <para><screen>
$ ./configure
creating cache ./config.cache
checking for gcc... gcc
checking whether the C compiler (gcc ) works... yes
checking whether the C compiler (gcc ) is a cross-compiler... no
checking whether we are using GNU C... yes
checking whether gcc accepts -g... yes
checking how to run the C preprocessor... gcc -E
checking for GNU Pth... version 1.3.0, local under pth
updating cache ./config.cache
creating ./config.status
creating Makefile
configuring in pth
running /bin/sh ./configure --enable-subdir --enable-batch --disable-tests --cache-file=.././config.cache --srcdir=.
loading cache .././config.cache
checking for gcc... (cached) gcc
checking whether the C compiler (gcc ) works... yes
checking whether the C compiler (gcc ) is a cross-compiler... no
[...]
$ make
===> pth (all)
./shtool scpp -o pth_p.h -t pth_p.h.in -Dcpp -Cintern -M '==#==' pth.c pth_vers.c
gcc -c -I. -O2 -pipe pth.c
gcc -c -I. -O2 -pipe pth_vers.c
ar rc libpth.a pth.o pth_vers.o
ranlib libpth.a
<=== pth
gcc -g -O2 -Ipth -c foo.c
gcc -Lpth -o foo foo.o -lpth
</screen></para> <para> As you can see, autoconf now automatically configures the local (stripped down) copy of Pth in the subdirectory pth/ and the Makefile automatically builds the subdirectory, too. </para> </section> </section> <section> <title>SYSTEM CALL WRAPPER FACILITY</title> <para> Pth by default uses an explicit API, including for the system calls. For instance, you have to explicitly use pth_read(3) when you need a thread-aware read(3); you cannot expect that merely calling read(3) blocks only the current thread. Instead, with the standard read(3) call the whole process will be blocked. But for some applications (mainly those consisting of lots of third-party code) this can be inconvenient. Here it is required that a call to read(3) `magically' means pth_read(3). The problem is that Pth cannot provide such magic by default because it is not really portable. Nevertheless Pth provides a two-step approach to solve this problem: </para> <section> <title>Soft System Call Mapping</title> <para> This variant is available on all platforms and can always be enabled by building Pth with --enable-syscall-soft. This triggers some #define's in the pth.h header which map, for instance, read(3) to pth_read(3), etc. Currently the following functions are mapped: fork(2), nanosleep(3), usleep(3), sleep(3), sigwait(3), waitpid(2), system(3), select(2), poll(2), connect(2), accept(2), read(2), write(2), recv(2), send(2), recvfrom(2), sendto(2). 
</para> <para> The drawback of this approach is that really all source files of the application where these function calls occur have to include pth.h, of course. And this also means that existing libraries, including the vendor's stdio, usually will still block the whole process if one of their I/O functions blocks. </para> </section> <section> <title>Hard System Call Mapping</title> <para> This variant is available only on those platforms where the syscall(2) function exists, and there it can be enabled by building Pth with --enable-syscall-hard. This builds wrapper functions (for instance read(3)) into the Pth library which internally call the real Pth replacement functions (pth_read(3)). Currently the following functions are mapped: fork(2), nanosleep(3), usleep(3), sleep(3), waitpid(2), system(3), select(2), poll(2), connect(2), accept(2), read(2), write(2). </para> <para> The drawback of this approach is that it depends on the syscall(2) interface, and prototype conflicts can occur while building the wrapper functions due to different function signatures in the vendor C header files. But the advantage of this mapping variant is that the source files of the application where these function calls occur do not have to include pth.h, and that existing libraries, including the vendor's stdio, magically become thread-aware (and then block only the current thread). </para> </section> </section> <section> <title>IMPLEMENTATION NOTES</title> <para> Pth is very portable because it has only one part which perhaps has to be ported to new platforms (the machine context initialization). But it is written in a way which works on almost all Unix platforms which support makecontext(2) or at least sigstack(2) or sigaltstack(2) [see pth_mctx.c for details]. All other Pth code is based only on POSIX and ANSI C. </para> <para> The context switching is done via either SUSv2 makecontext(2) or POSIX [sig]setjmp(3) and [sig]longjmp(3). 
Here all CPU registers, the program counter and the stack pointer are switched. Additionally the Pth dispatcher also switches the global Unix errno variable [see pth_mctx.c for details] and the signal mask (either implicitly via sigsetjmp(3) or in an emulated way via explicit sigprocmask(2) calls). </para> <para> The Pth event manager is mainly select(2) and gettimeofday(2) based, i.e., the current time is fetched via gettimeofday(2) once per context switch for time calculations, and all I/O events are implemented via a single central select(2) call [see pth_sched.c for details]. </para> <para> The thread control block management is done via virtual priority queues without any additional data structure overhead. For this, the queue linkage attributes are part of the thread control blocks and the queues are actually implemented as rings with a selected element as the entry point [see pth_tcb.h and pth_pqueue.c for details]. </para> <para> Most time-critical code sections (especially the dispatcher and event manager) are sped up by inline functions (implemented as ANSI C pre-processor macros). Additionally, any debugging code is completely removed from the source when not built with -DPTH_DEBUG (see Autoconf --enable-debug option), i.e., not even stub functions remain [see pth_debug.c for details]. </para> </section> <section> <title>RESTRICTIONS</title> <para> Pth (intentionally) provides no replacements for non-thread-safe functions (like strtok(3), which uses a static internal buffer) or synchronous system functions (like gethostbyname(3), which does not provide an asynchronous mode where it does not block). When you want to use those functions in your server application together with threads, you have to link the application against special third-party libraries (or, for thread-safe/reentrant functions, possibly against an existing libc_r of the platform vendor). 
For an asynchronous DNS resolver library use the GNU adns package from Ian Jackson (see http://www.gnu.org/software/adns/adns.html). </para> </section> <section> <title>HISTORY</title> <para> The Pth library was designed and implemented between February and July 1999 by Ralf S. Engelschall after evaluating numerous (mostly preemptive) thread libraries and after intensive discussions with Peter Simons, Martin Kraemer, Lars Eilebrecht and Ralph Babel related to an experimental (matrix-based) non-preemptive C++ scheduler class written by Peter Simons. </para> <para> Pth was then implemented in order to combine the non-preemptive approach of multithreading (which provides better portability and performance) with an API similar to the popular one found in Pthread libraries (which provides easy programming). </para> <para> So the essential idea of the non-preemptive approach was taken over from Peter Simons' scheduler. The priority-based scheduling algorithm was suggested by Martin Kraemer. Some code inspiration also came from an experimental threading library (rsthreads) written by Robert S. Thau for an ancient internal test version of the Apache webserver. The concept and API of message ports was borrowed from AmigaOS' Exec subsystem. The concept and idea for the flexible event mechanism came from Paul Vixie's eventlib (which can be found as a part of BIND v8). </para> </section> <section> <title>BUG REPORTS AND SUPPORT</title> <para> If you think you have found a bug in Pth, you should send a report as complete as possible to bug-pth@gnu.org. If you can, please try to fix the problem and include a patch, made with 'diff -u3', in your report. Always, at least, include a reasonable amount of description in your report to allow the author to deterministically reproduce the bug. 
</para> <para> For further support you can additionally subscribe to the pth-users@gnu.org mailing list by sending an email to pth-users-request@gnu.org with `subscribe pth-users' (or `subscribe pth-users address' if you want to subscribe from a particular email address) in the body. Then you can discuss your issues with other Pth users by sending messages to pth-users@gnu.org. Currently (as of August 2000) you can reach about 110 Pth users on this mailing list. You can find old postings at http://www.mail-archive.com/pth-users@gnu.org/. </para> </section> <section> <title>SEE ALSO</title> <section> <title>Related Web Locations</title> <para> `comp.programming.threads Newsgroup Archive', http://www.deja.com/topics_if.xp?search=topic&group=comp.programming.threads </para> <para> `comp.programming.threads Frequently Asked Questions (F.A.Q.)', http://www.lambdacs.com/newsgroup/FAQ.html </para> <para> `Multithreading - Definitions and Guidelines', Numeric Quest Inc 1998; http://www.numeric-quest.com/lang/multi-frame.html </para> <para> `The Single UNIX Specification, Version 2 - Threads', The Open Group 1997; http://www.opengroup.org/onlinepubs/007908799/xsh/threads.html </para> <para> SMI Thread Resources, Sun Microsystems Inc; http://www.sun.com/workshop/threads/ </para> <para> Bibliography on threads and multithreading, Torsten Amundsen; http://liinwww.ira.uka.de/bibliography/Os/threads.html </para> </section> <section> <title>Related Books</title> <para> B. Nichols, D. Buttlar, J.P. Farrel: `Pthreads Programming - A POSIX Standard for Better Multiprocessing', O'Reilly 1996; ISBN 1-56592-115-1 </para> <para> B. Lewis, D. J. Berg: `Multithreaded Programming with Pthreads', Sun Microsystems Press, Prentice Hall 1998; ISBN 0-13-680729-1 </para> <para> B. Lewis, D. J. Berg: `Threads Primer - A Guide To Multithreaded Programming', Prentice Hall 1996; ISBN 0-13-443698-9 </para> <para> S. J. Norton, M. D. 
Dipasquale: `Thread Time - The Multithreaded Programming Guide', Prentice Hall 1997; ISBN 0-13-190067-6 </para> <para> D. R. Butenhof: `Programming with POSIX Threads', Addison Wesley 1997; ISBN 0-201-63392-2 </para> </section> <section> <title>Related Manpages</title> <para> pth-config(1), pthread(3). getcontext(2), setcontext(2), makecontext(2), swapcontext(2), sigstack(2), sigaltstack(2), sigaction(2), sigemptyset(2), sigaddset(2), sigprocmask(2), sigsuspend(2), sigsetjmp(3), siglongjmp(3), setjmp(3), longjmp(3), select(2), gettimeofday(2). </para> </section> </section> <section> <title>AUTHOR</title> <para> Ralf S. Engelschall </para> <para> rse(at)engelschall.com </para> <para> www.engelschall.com </para> <para> Verbatim copying and distribution of this entire article is permitted in any medium, provided this notice is preserved. </para> <para> Copyright © 1999-2002 Ralf S. Engelschall <rse(at)gnu.org> </para> <para> Last Modified: 2001-02-13 02:08:27 </para> </section> </article>