IEG 4180 Tutorial 5 Prepared by Ke Liu (Note: Certain content adapted from Zero’s tutorial notes)
Traditional Blocking I/O [Diagram: the application calls Recv(), the OS calls the lower-layer receive routine, waits for a packet, and moves the data to the user buffer, after which the application does some processing; the stages are labelled A to D and run strictly one after another] Time Required = A + B + C + D Max Rate = Packet Size / Time Required
Overlapped I/O • When using overlapped I/O, the I/O operation will run in the background • While the I/O operation runs in the background, the application can do some other processing • When the I/O operation completes, the application is notified • There are multiple mechanisms for notifying the application that an I/O operation has been completed: • Event Object Signaling • Alertable I/O
Overlapped I/O • Advantages • Non-blocking • Use application buffers to receive data directly • Allow posting multiple receive calls
Overlapped I/O: The Model (Use of Event Object) [Diagram: the application posts WSARecv() with an application buffer and processes previous data while the OS calls the lower-layer receive routine and waits for a packet; the application then waits for the completion event] Need to figure out which buffer is being filled (or returned)
Overlapped I/O: Create Overlapped Socket • Use WSASocket() instead of socket() • Use the normal bind(), accept(), connect() etc.
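A minimal sketch (not from the original slides) of creating an overlapped socket with WSASocket(); the UDP choice and the function name are illustrative assumptions, and WSAStartup() is assumed to have been called already:

#include <winsock2.h>

SOCKET CreateOverlappedUdpSocket()
{
    // WSA_FLAG_OVERLAPPED marks the socket for overlapped I/O.
    SOCKET s = WSASocket(AF_INET, SOCK_DGRAM, IPPROTO_UDP,
                         NULL,   // no explicit protocol info
                         0,      // no socket group
                         WSA_FLAG_OVERLAPPED);

    // bind(), connect(), accept() etc. are then used as with an ordinary socket.
    return s;
}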
Overlapped I/O: Send & Receive Data • For TCP, use • WSASend() • WSARecv() • For UDP, use • WSASendTo() • WSARecvFrom()
Overlapped I/O: Receive • Important parameters for WSARecv() and WSARecvFrom(): • Socket • Array of WSABUF structures • Number of elements in the WSABUF array • WSAOVERLAPPED structure • Pointer to an I/O completion routine (used for alertable I/O)
Overlapped I/O: Receive • The return value • Does not report the number of bytes received • Only tells you whether the call succeeded or failed • SOCKET_ERROR may be returned even when there was no error • Use WSAGetLastError() to check: if the error code is WSA_IO_PENDING, the operation is simply still in progress, which is not an error!!!
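A minimal sketch of posting an overlapped WSARecvFrom() and checking the return value as described above; the socket s is assumed to be an overlapped UDP socket, and the buffer size and variable names are illustrative:

char          appBuf[4096];
WSABUF        wsaBuf     = { sizeof(appBuf), appBuf };
WSAOVERLAPPED overlapped = { 0 };
DWORD         flags      = 0;
sockaddr_in   from;
int           fromLen    = sizeof(from);

// The event is used later for completion notification.
overlapped.hEvent = WSACreateEvent();

int ret = WSARecvFrom(s, &wsaBuf, 1,
                      NULL,          // byte count is retrieved on completion
                      &flags,
                      (sockaddr*)&from, &fromLen,
                      &overlapped,
                      NULL);         // no completion routine (event notification)
if (ret == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING)
{
    // A real error: WSA_IO_PENDING only means the operation is still in progress.
}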
Overlapped I/O: WSABUF • The buffer definition for overlapped I/O • len • The length of the buffer in bytes • Has to be filled in in advance • buf • The memory space that actually holds the data typedef struct _WSABUF{ u_long len; char FAR* buf; } WSABUF, *LPWSABUF;
Overlapped I/O: WSAOVERLAPPED Structure • A means for notification typedef struct _WSAOVERLAPPED{ DWORD Internal; DWORD InternalHigh; DWORD Offset; DWORD OffsetHigh; WSAEVENT hEvent; } WSAOVERLAPPED, *LPWSAOVERLAPPED; • hEvent • The function call returns immediately, so some mechanism is needed to determine the status and completion of the request • Used in event object notification
Overlapped I/O: Event Object Notification • Create an event object • Similar to Mutex and Semaphore, event objects also have a signaled or nonsignaled state • Pass this object to hEvent of the WSAOVERLAPPED structure • To know when the I/O operation completes • WSAWaitForMultipleEvents() • To retrieve the results of overlapped operations • WSAGetOverlappedResult() • To reset the event object to the nonsignaled state • WSAResetEvent() • To free resources occupied by the event object • WSACloseEvent() WSAEVENT WSACreateEvent(void);
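A minimal sketch of the event object notification flow, assuming an overlapped receive has already been posted on socket s with overlapped.hEvent set to an event from WSACreateEvent():

WSAEVENT events[1] = { overlapped.hEvent };

// Block until the posted overlapped operation completes.
DWORD index = WSAWaitForMultipleEvents(1, events, FALSE, WSA_INFINITE, FALSE);

if (index == WSA_WAIT_EVENT_0)
{
    DWORD bytesReceived = 0;
    DWORD flags         = 0;

    // Retrieve the result of the completed overlapped operation.
    WSAGetOverlappedResult(s, &overlapped, &bytesReceived, FALSE, &flags);

    // Reset the event to the nonsignaled state before reusing the structure.
    WSAResetEvent(overlapped.hEvent);

    // ... process bytesReceived bytes in the application buffer ...
}

// Free the event object's resources once it is no longer needed.
WSACloseEvent(overlapped.hEvent);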
Overlapped I/O: Alertable I/O Introduction • Instead of using event object notification, have the OS call one of your functions when I/O operations complete • Completion routines • Functions that will be called when I/O completes • Specified in the last parameter of WSASend() / WSASendTo() / WSARecv() / WSARecvFrom() int i = WSARecvFrom(..., lpOverlapped, lpCompletionRoutine);
Overlapped I/O: Alertable I/O [Diagram: the application posts WSARecv() with an application buffer, then sleeps and waits for any completion while the OS calls the lower-layer receive routine and waits for a packet; data processing is moved to the completion routine]
Overlapped I/O: Alertable I/O Completion Routines • cbTransferred • Number of bytes transferred • Equals zero when the connection is closed • lpOverlapped • hEvent can be freely used by your code, just like the LPVOID parameter of a thread procedure • You have to manage the buffer usage yourself! • For example, if you issued 10 WSARecv() calls with 10 buffers • The data will be filled into the buffers according to the calling order • Reissue WSARecv() on processed buffers void CALLBACK CompletionRoutine( IN DWORD dwError, /* the error code */ IN DWORD cbTransferred, /* in bytes */ IN LPWSAOVERLAPPED lpOverlapped, /* the structure of this I/O */ IN DWORD dwFlags );
Overlapped I/O: Alertable Wait State • The thread using alertable I/O has to enter an alertable wait state so that the completion routines can be called • To enter the alertable wait state, call SleepEx() • Just like the ordinary Sleep() • Returns on timeout or when a completion routine has run DWORD SleepEx( DWORD dwMilliseconds, BOOL bAlertable /* set to TRUE */ );
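A minimal sketch putting the alertable I/O pieces together; the routine name RecvCompleted, the buffer size, and the loop structure are illustrative assumptions:

void CALLBACK RecvCompleted(DWORD dwError, DWORD cbTransferred,
                            LPWSAOVERLAPPED lpOverlapped, DWORD dwFlags)
{
    if (dwError == 0 && cbTransferred > 0)
    {
        // Process cbTransferred bytes, then reissue WSARecv() on this buffer.
    }
    // cbTransferred == 0 normally means the connection was closed.
}

void ReceiveLoop(SOCKET s)
{
    char          buf[4096];
    WSABUF        wsaBuf     = { sizeof(buf), buf };
    WSAOVERLAPPED overlapped = { 0 };
    DWORD         flags      = 0;

    // Post the receive and pass the completion routine as the last parameter.
    WSARecv(s, &wsaBuf, 1, NULL, &flags, &overlapped, RecvCompleted);

    // Enter an alertable wait state so the completion routine can be called.
    while (SleepEx(INFINITE, TRUE) == WAIT_IO_COMPLETION)
    {
        // A completion routine has just run; keep waiting for further completions.
    }
}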
Why IOCP? (Situation) • High-concurrency servers normally run on hardware that supports multi-threading • Multi-threading is used to utilize the multiple cores
Why IOCP? (Problem 1) • What if a thread is created whenever a new client comes? • Suffers from thread creation and disposal overhead • Solution: Employ a thread pool to reduce the thread creation and disposal overhead (time) • Even so, there will most likely be significant context switching
Why IOCP? (Problem 2) • What should the size of the thread pool be? • Fewer than the # of processor cores • Resources are under-utilized; serves fewer clients than possible • More than the # of processor cores • Context switching, since # of active threads > # of processor cores
How does IOCP behave? • Requires a thread pool • Allows at most a predefined # of threads to be active • Avoids switching between threads frequently • Picks threads from the thread pool in LIFO order • Reduces context switches • Reduces cache misses
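A minimal sketch (not from the slides) of an IOCP worker-thread loop; it assumes the sockets have been associated with the completion port elsewhere and that each posted receive carries its own OVERLAPPED and buffer:

#include <winsock2.h>
#include <windows.h>

DWORD WINAPI WorkerThread(LPVOID param)
{
    HANDLE iocp = (HANDLE)param;

    for (;;)
    {
        DWORD        bytes      = 0;
        ULONG_PTR    key        = 0;     // per-socket context given at association time
        LPOVERLAPPED overlapped = NULL;

        // Blocks until some overlapped operation on an associated socket completes.
        if (!GetQueuedCompletionStatus(iocp, &bytes, &key, &overlapped, INFINITE))
            continue;   // failed or cancelled I/O; a real server would inspect it

        // ... process 'bytes' bytes for the connection identified by 'key',
        //     then post the next WSARecv() on that socket ...
    }
    return 0;
}

// Creating the port and associating a socket with it (illustrative):
//   HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
//   CreateIoCompletionPort((HANDLE)s, iocp, (ULONG_PTR)perSocketContext, 0);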
What if IOCP is not available? • Not every server runs on Windows • IOCP is mainly supported by the NT kernel • What about Linux?
Common Single-Thread Models • Single client • Blocking I/O • Most of the time is spent waiting for further action • Concurrent clients • Polling (with non-blocking I/O) • Expense of looping through many useless function calls • Select-based I/O (with blocking or non-blocking I/O) • Requires maintenance of the “fd_set”
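For reference, a minimal sketch of the select-based model mentioned above, showing the fd_set maintenance it requires; the clients container is an illustrative assumption:

#include <winsock2.h>
#include <vector>

void ServeWithSelect(const std::vector<SOCKET>& clients)
{
    for (;;)
    {
        // The fd_set has to be rebuilt before every select() call.
        fd_set readSet;
        FD_ZERO(&readSet);
        for (SOCKET s : clients)
            FD_SET(s, &readSet);

        // Blocks until at least one socket becomes readable.
        if (select(0, &readSet, NULL, NULL, NULL) > 0)
        {
            for (SOCKET s : clients)
                if (FD_ISSET(s, &readSet))
                {
                    // recv() will not block here because data is already waiting.
                }
        }
    }
}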
Common Multi-Threaded Models • Mixing with the single-thread I/O models • One thread per client (e.g. Blocking I/O) • A single thread for concurrent clients (e.g. Polling, Select-based I/O) • Threads for specific stages (Computation, I/O) • A single client may be served by multiple threads
OS-Specific I/O Models • Microsoft Windows • Message-Driven I/O • Alertable I/O • Linux • poll() • AIO • epoll • BSD • kqueue
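As a rough illustration of the Linux side of this list, a minimal epoll sketch; the listening-socket setup is assumed and the event-handling details are omitted:

#include <sys/epoll.h>
#include <unistd.h>

void ServeWithEpoll(int listenFd)
{
    int epfd = epoll_create1(0);

    epoll_event ev = {};
    ev.events  = EPOLLIN;
    ev.data.fd = listenFd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listenFd, &ev);   // watch the listening socket

    epoll_event events[64];
    for (;;)
    {
        // Blocks until one or more watched descriptors become ready.
        int n = epoll_wait(epfd, events, 64, -1);
        for (int i = 0; i < n; ++i)
        {
            // accept() new clients on listenFd, or recv() from ready clients,
            // adding each accepted socket to epfd with epoll_ctl() as above.
        }
    }
    close(epfd);
}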