#include <unix.h>
Inheritance diagram for ost::UnixSession:
Public Member Functions
UnixSession (const char *pathname, int size=512, int pri=0, int stack=0)
    Create a Unix domain socket that will be connected to a local server and that will execute under its own thread.
UnixSession (UnixSocket &server, int size=512, int pri=0, int stack=0)
    Create a Unix domain socket from a bound Unix domain server by accepting a pending connection from that server and executing a thread for the accepted connection. A minimal accept loop is sketched below.
virtual ~UnixSession ()
    Virtual destructor.
Protected Member Functions
int waitConnection (timeout_t timeout=TIMEOUT_INF)
    Normally called by default during the thread's initial() method, this will wait for the socket connection to complete when connecting to a remote socket.
void initial (void)
    The initial method is used to establish a connection when delayed completion is used.
Detailed Description

The Unix domain session also supports a non-blocking connection scheme which prevents blocking during the constructor by moving the work of completing the connection into the thread that executes for the session.
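A minimal client-side sketch of this scheme follows. It assumes the header installs as <cc++/unix.h>, that ost::Thread provides start() and join(), that the session can be used as a std::iostream like the library's other stream classes, and that setCompletion(true) restores blocking I/O; the class name and socket path are hypothetical.

#include <cc++/unix.h>
#include <iostream>
#include <string>

using namespace ost;

// Hypothetical session that connects to a local Unix domain server and
// carries out the exchange in its own thread.
class EchoClientSession : public UnixSession
{
public:
    EchoClientSession(const char *path) : UnixSession(path) {}

protected:
    void run(void)
    {
        // By the time run() starts, initial() has completed the delayed
        // connection (see waitConnection() below).
        setCompletion(true);               // assumed: true restores blocking I/O
        *this << "hello" << std::endl;     // the session doubles as an iostream
        std::string reply;
        std::getline(*this, reply);
        std::cout << "server said: " << reply << std::endl;
    }
};

int main()
{
    EchoClientSession session("/tmp/echo.sock");   // hypothetical socket path
    session.start();                               // runs initial(), then run()
    session.join();                                // assumed Thread::join()
    return 0;
}

start() launches the session's own thread, which runs initial() to finish the delayed connection before entering run().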
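The second constructor, UnixSession(UnixSocket &server, ...), instead accepts a pending connection from a bound Unix domain server. A server-side accept loop might look like the following sketch; the UnixSocket(pathname) constructor, Thread::detach(), and the blocking, self-cleaning behaviour noted in the comments are assumptions drawn from the wider library API, not from this page.

#include <cc++/unix.h>
#include <iostream>

using namespace ost;

// Hypothetical per-connection worker driven by the accepting constructor.
class WorkerSession : public UnixSession
{
public:
    WorkerSession(UnixSocket &server) : UnixSession(server) {}

protected:
    void run(void)
    {
        *this << "welcome" << std::endl;   // handle one accepted connection
    }
};

int main()
{
    UnixSocket server("/tmp/demo.sock");   // assumed: binds and listens on the path
    for(;;)
    {
        // Assumed to block until a connection is pending, then accept it.
        WorkerSession *worker = new WorkerSession(server);
        worker->detach();                  // assumed: detached sessions clean up after themselves
    }
}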
Member Function Documentation

The initial method is used to establish a connection when delayed completion is used. This assures the constructor terminates without having to wait for a connection request to complete. Reimplemented from ost::Thread.
Normally called by default during the thread's initial() method, waitConnection() waits for the socket connection to complete when connecting to a remote socket. One might wish to use setCompletion() to change the socket back to blocking I/O calls after the connection completes. To implement the session one must create a derived class which implements run().
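As an illustration of the points above, a derived session might reimplement initial() to bound the wait with a finite timeout rather than the default TIMEOUT_INF, and have run() restore blocking I/O before using the stream. The zero-on-success return convention assumed for waitConnection(), the bool form of setCompletion(), and Thread::join() are assumptions; the class and path are hypothetical.

#include <cc++/unix.h>
#include <iostream>

using namespace ost;

// Hypothetical session that refuses to wait more than five seconds for the
// delayed connection to complete.
class TimedSession : public UnixSession
{
public:
    TimedSession(const char *path) : UnixSession(path), connected(false) {}

protected:
    // Replace the default initial(), which waits with TIMEOUT_INF.
    void initial(void)
    {
        connected = (waitConnection(5000) == 0);   // assumed: 0 means success
    }

    void run(void)
    {
        if (!connected)
            return;                                // give up quietly on timeout
        setCompletion(true);                       // assumed: true = blocking I/O
        *this << "STATUS" << std::endl;            // ordinary blocking stream I/O
    }

private:
    bool connected;
};

int main()
{
    TimedSession session("/tmp/status.sock");      // hypothetical socket path
    session.start();
    session.join();                                // assumed Thread::join()
    return 0;
}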