
TDC561 Network Programming

TDC561 Network Programming. Week 7: Client/Server Design Alternatives Study Cases: TFTP. Camelia Zlatea, PhD Email: czlatea@cs.depaul.edu.


Presentation Transcript


  1. TDC561 Network Programming Week 7: Client/Server Design Alternatives Study Cases: TFTP Camelia Zlatea, PhD Email: czlatea@cs.depaul.edu

  2. References • Douglas Comer, David Stevens, Internetworking with TCP/IP, Volume III: Client-Server Programming and Applications (BSD Unix and ANSI C), 2nd edition, 1996 (ISBN 0-13-260969-X), Chap. 2, 8 • W. Richard Stevens, UNIX Network Programming, Volume 1: Networking APIs: Sockets and XTI, 2nd edition, 1998 (ISBN 0-13-490012-X), Chap. 7, 27

  3. Server Design Iterative Connectionless Iterative Connection-Oriented Concurrent Connectionless Concurrent Connection-Oriented

  4. Concurrent Server Design Alternatives • Single Process Concurrency • One child per client • Spawn one thread per client • Pre-forking multiple processes • Pre-threaded Server

  5. One thread per client • Similar to fork: call pthread_create instead. • Using threads makes it easier (less overhead) to have child threads share information. • Sharing information must be done carefully: • Mutual exclusion - pthread_mutex • Synchronization - pthread_cond

  6. Example: Concurrency using threads 1/3

/* Code fragment that uses pthread_create to implement concurrency */
void *process_client(void *arg)        /* ANSI C prototype (was K&R style) */
{
    int fd;                            /* client socket file descriptor */
    int nbytes;
    char buf[BUFSIZE];

    fd = (int)(intptr_t)arg;           /* fd passed by value in the arg pointer */
    printf("Thread %lu is serving client at %d\n",
           (unsigned long)pthread_self(), fd);
    for ( ; ; ) {
        nbytes = read(fd, buf, BUFSIZE);
        if ((nbytes == -1) && (errno != EINTR))
            break;
        if (!nbytes)
            break;
        process_command(buf, nbytes);
    }
    close(fd);
    return NULL;                       /* was "return();", which does not compile */
}

  7. Example: Concurrency using threads 2/3

int main(int argc, char *argv[])
{
    /* Variable declaration section */
    /* The calls socket(), bind(), and listen() */
    while (1) {                        /* infinite accept() loop */
        newfd = accept(sockfd, (struct sockaddr *)&theiraddr, &sinsize);
        if (newfd < 0) {               /* error in accept() */
            perror("accept");
            exit(-1);
        }

  8. Example: Concurrency using threads 3/3

        if ((error = pthread_create(&tid, NULL, process_client,
                                    (void *)(intptr_t)newfd)))  /* pass fd by value */
            fprintf(stderr, "Could not create thread: %s\n", strerror(error));
    }
}

  9. Multithreading Benefits • Provide concurrency • Introduce another level of scheduling • Fine control of multithreading with priority • Tolerate communication latency • Overlap I/O with CPU work • Provide parallelism • Allow either SPMD or MPMD styles • Be able to partition computation domains • Utilize multiple CPUs for speedup • Provide resource/data sharing • Dynamically allocated memory can be shared • Pointers are valid for all threads • Multiple threads can share resources • Share the same set of open files • Share the working directory • SPMD = Single Program Multiple Data • MPMD = Multiple Program Multiple Data

  10. Drawbacks of multithreading • Need coordination for data sharing • Mutual exclusion • Synchronization • Lack of protection between threads • A thread's stack is accessible to other threads • As are its local variables • Less robust against programming errors • Multithreaded programs are hard to debug

  11. pthread_detach int pthread_detach(pthread_t tid); • A detached thread is like a daemon process • When it terminates, all its resources are released • We cannot wait for a detached thread to terminate • A thread can detach itself

  12. About Condition Variables • A condition variable supports blocking until some event occurs, i.e., waiting for an unbounded/unpredictable duration • It has atomic operations for waiting and signaling: testing and blocking are indivisible • It is associated with some shared variables, which are protected by a mutex lock • Condition variable C: • Condition • shared variables S • predicate P of S • A mutex lock L • Operations • Mutex lock: pthread_mutex_lock • Mutex unlock: pthread_mutex_unlock • Condition variable wait: pthread_cond_wait • Condition variable signal: pthread_cond_signal or pthread_cond_broadcast

  13. About Condition Variables • Whenever a thread changes one of the shared variables in S, it signals on the condition variable C • This signal wakes up a waiting thread, which then checks whether P is now satisfied • The signal does not guarantee that P has become true • It just tells the blocked thread that one of the variables in S has changed • The signaled thread should retest P (the Boolean predicate) • When using condition variables there is always a Boolean predicate, an invariant, associated with each condition wait that must be true before the thread should proceed • The return from pthread_cond_wait() does not imply anything about the value of this predicate; the predicate should always be re-evaluated

  14. About Condition Variables #include <pthread.h> int pthread_cond_signal(pthread_cond_t *cond); /* • unblocks at least one of the threads that are blocked on the specified condition variable cond (if any threads are blocked on cond) • scheduling policy determines the order in which threads are unblocked • no effect if there are no threads currently blocked on cond. */ int pthread_cond_broadcast(pthread_cond_t *cond); /* • unblocks all threads currently blocked on the specified condition variable cond. • no effect if there are no threads currently blocked on cond */

  15. About Condition Variables Understanding the inside of pthread_cond_wait Upon the call of pthread_cond_wait • Release mutex lock L • Block the calling thread Upon wakeup by a signal on C • Re-acquire mutex lock L • Return • That is why we must pass the associated mutex L as the 2nd argument when calling pthread_cond_wait

  16. About Condition Variables • Event (a) and Event (b): Deadlock? Starvation? Ok?

pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

Thread1:
    pthread_cond_wait(&cond, &mtx);    /* (a) */
    do_work_B();

Thread2:
    do_work_A();
    pthread_cond_signal(&cond);        /* (b) */

  17. About Condition Variables • Event (a), Event (b), Event (c): Deadlock? Starvation? Ok?

pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

Thread1:
    while ( not P )                    /* (c) */
        pthread_cond_wait(&cond, &mtx);    /* (a) */
    do_work_B();

Thread2:
    do_work_A();
    P = true;
    pthread_cond_signal(&cond);        /* (b) */

  18. About Condition Variables • Event (a), Event (b), Event (c): Deadlock? Starvation? Ok?

pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

Thread1:
    pthread_mutex_lock(&mtx);
    while ( not P )                    /* (c) */
        pthread_cond_wait(&cond, &mtx);    /* (a) */
    do_work_B();
    pthread_mutex_unlock(&mtx);

Thread2:
    do_work_A();
    pthread_mutex_lock(&mtx);
    P = true;
    pthread_cond_signal(&cond);        /* (b) */
    pthread_mutex_unlock(&mtx);

  19. Producer Function

void *producer(void *arg1)
{
    int i;
    for (i = 1; i <= SUMSIZE; i++) {
        pthread_mutex_lock(&slot_lock);      /* acquire right to a slot */
        while (nslots <= 0)
            pthread_cond_wait(&slots, &slot_lock);
        nslots--;
        pthread_mutex_unlock(&slot_lock);
        put_item(i*i);
        pthread_mutex_lock(&item_lock);      /* release right to an item */
        nitems++;
        pthread_cond_signal(&items);
        pthread_mutex_unlock(&item_lock);
    }
    pthread_mutex_lock(&item_lock);          /* NOTIFIES GLOBAL TERMINATION */
    producer_done = 1;
    pthread_cond_broadcast(&items);
    pthread_mutex_unlock(&item_lock);
    return NULL;
}

  20. Consumer Function

void *consumer(void *arg2)
{
    int myitem;
    for ( ; ; ) {
        pthread_mutex_lock(&item_lock);      /* acquire right to an item */
        while ((nitems <= 0) && !producer_done)
            pthread_cond_wait(&items, &item_lock);
        if ((nitems <= 0) && producer_done) { /* DISTRIBUTED GLOBAL TERMINATION */
            pthread_mutex_unlock(&item_lock);
            break;
        }
        nitems--;
        pthread_mutex_unlock(&item_lock);
        get_item(&myitem);
        sum += myitem;
        pthread_mutex_lock(&slot_lock);      /* release right to a slot */
        nslots++;
        pthread_cond_signal(&slots);
        pthread_mutex_unlock(&slot_lock);
    }
    return NULL;
}

  21. Pre-forked Server • Creating a new process for each client is expensive. • We can create a set of processes, each of which can take care of a client. • Each child process is an iterative server.

  22. Pre-forked TCP Server • Initial process creates socket and binds to well known address. • Process now calls fork() a number of times. • All children call accept() • The next incoming connection will be handed to one child.

  23. Pre-forking • As the Stevens/Comer textbooks show, having too many pre-forked children is wasteful: idle children consume memory and process-table slots without improving throughput. • Using dynamic process allocation instead of a hard-coded number of children can avoid this. • The parent process then just manages the children; it doesn't deal with clients itself.

  24. Sockets library vs. system call • A pre-forked TCP server won't usually work the way we want if sockets are implemented as a user-level library rather than inside the kernel: • accept() is then a library function, not an atomic system call, so several children calling it concurrently can corrupt shared library state. • We can get around this by using a locking scheme to make sure only one child calls accept() at a time.

  25. Pre-threaded Server • Same benefits as pre-forking. • Can also have the main thread do all the calls to accept() and hand off each client to an existing thread.

  26. Pre-threaded Server • Multithreaded Clients • Hide communication latency • Establish concurrent connections • Multithreaded Servers • One thread per request (create-die) • Dispatcher/worker model, fixed # of threads (SPMD, MPMD) • [Diagram: one Dispatcher Thread feeding multiple Worker Threads]

  27. Choosing a Server Design Schema for an Application • Many factors: • Expected number of simultaneous clients • Transaction size (time to compute or look up the answer) • Variability in transaction size • Available system resources (perhaps what resources can be required in order to run the service) • Real-Time/Non-Real-Time Applications • Approach • It is important to understand the issues and options • Knowledge of queuing theory can be a big help • You might need to test a few alternatives to determine the best design

  28.

/* PSEUDO-CODE OUTLINE for pre-threaded server using SPMD */
/* global data and variables */
#define MAXCLIENTS 10
#define MAXWORKERS 10

pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond[MAXWORKERS];
pthread_cond_t idle;
int client_sd[MAXCLIENTS];        /* active socket array */
int worker_state[MAXWORKERS];     /* worker state array: 0 = idle, 1 = busy */
workerptr new_wk[MAXWORKERS];     /* reference to worker information slot:
                                     struct { int workernum; int sd; } */
pthread_t tid[MAXWORKERS];

  29.

void *handle_client(void *arg)    /* worker thread */
{
    workerptr me = (workerptr) arg;
    int workernum = me->workernum;
    int sd = me->sd;
    char buf[BUFSIZE];
    int n;

    /* By default a new thread is joinable; we don't really want this
       (unless we do something special we end up with the thread
       equivalent of zombies), so we explicitly detach the thread */
    pthread_detach(pthread_self());
    printf("Thread %lu started for client number %d (sd %d)\n",
           (unsigned long)pthread_self(), workernum, client_sd[workernum]);

  30.

    while (1) {                   /* worker thread */
        /* wait for work to do -- the mutex must be held around the wait */
        pthread_mutex_lock(&mtx);
        while (worker_state[workernum] == 0)
            pthread_cond_wait(&cond[workernum], &mtx);
        sd = me->sd;              /* get the updated socket fd */
        pthread_mutex_unlock(&mtx);

        /* do the work requested */
        while ((n = read(sd, buf, BUFSIZE)) > 0) {
            do_work();
        }

        /* work done (read returned EOF) - set itself idle */
        pthread_mutex_lock(&mtx);
        close(client_sd[workernum]);
        worker_state[workernum] = 0;
        printf("Worker %d has completed work\n", workernum);
        pthread_cond_signal(&idle);   /* notifies dispatcher */
        pthread_mutex_unlock(&mtx);
    } /* end while */

  31.

int main()
{                                 /* Dispatcher */
    int ld, sd;
    struct sockaddr_in skaddr;
    struct sockaddr_in from;
    socklen_t addrlen, length;
    int i, pport;
    pthread_t tid[MAXWORKERS];

    if ((ld = socket(AF_INET, SOCK_STREAM, 0)) < 0) {
        perror("Problem creating socket");
        exit(1);
    }
    skaddr.sin_family = AF_INET;
    skaddr.sin_addr.s_addr = htonl(INADDR_ANY);
    skaddr.sin_port = htons(0);
    if (bind(ld, (struct sockaddr *) &skaddr, sizeof(skaddr)) < 0) {
        perror("Problem binding");
        exit(1);
    }

  32.

    /* find out what port we were assigned and print it out */
    length = sizeof(skaddr);
    if (getsockname(ld, (struct sockaddr *) &skaddr, &length) < 0) {
        perror("Error getsockname");
        exit(1);
    }
    pport = ntohs(skaddr.sin_port);
    printf("%d\n", pport);

    /* put the socket into passive mode (waiting for connections) */
    if (listen(ld, 5) < 0) {
        perror("Error calling listen");
        exit(1);
    }

  33.

    /* do some initialization */
    for (i = 0; i < MAXWORKERS; i++) {
        worker_state[i] = 0;
        new_wk[i] = malloc(sizeof(workerstruct));
        new_wk[i]->workernum = i;
        new_wk[i]->sd = 0;
        pthread_cond_init(&cond[i], NULL);  /* PTHREAD_COND_INITIALIZER only
                                               works at declaration time */
        pthread_create(&tid[i], NULL, handle_client, (void *) new_wk[i]);
    }

  34.

    /* Dispatcher now processes incoming connections forever ... */
    while (1) {
        printf("Ready for a connection...\n");
        addrlen = sizeof(from);
        if ((sd = accept(ld, (struct sockaddr *) &from, &addrlen)) < 0) {
            perror("Problem with accept call");
            exit(1);
        }
        printf("Got a connection - processing...\n");

        pthread_mutex_lock(&mtx);
        for (;;) {
            for (i = 0; i < MAXWORKERS; i++)
                if (worker_state[i] == 0)      /* worker i is idle */
                    break;
            if (i < MAXWORKERS)                /* note: ==, not =, when testing */
                break;
            pthread_cond_wait(&idle, &mtx);    /* all workers busy: wait for one */
        }
        /* dispatch worker i */
        client_sd[i] = sd;
        worker_state[i] = 1;
        new_wk[i]->sd = sd;
        pthread_cond_signal(&cond[i]);         /* wake up worker */
        pthread_mutex_unlock(&mtx);
    }
}

  35. Pre-Threaded Server (For Assignment #3) • Safety Requirement • Absence of deadlock (Worker Threads and Dispatcher) • Progress Requirement • Absence of starvation (for either a Worker Thread or the Dispatcher), given there are incoming client requests • Fairness in handling incoming client requests (bound the waiting time for an external client, reduce the response time to an external client) • Guarantee distributed global termination for the server thread pool: • The Dispatcher can decide to terminate the thread pool (dispatcher_is_done = true) and then notifies all worker threads (using pthread_cond_broadcast) • When a worker thread becomes idle and dispatcher_is_done is set, it does not block anymore but returns, and so the worker thread terminates • Eventually, after a finite interval of time, all worker threads will terminate

  36. TFTP (Trivial File Transfer Protocol) • TFTP Specs • RFC 783 • RFC 1350/ STD33 http://www.rfc-editor.org/rfc/std/std33.txt • Transfer files between processes. • Minimal overhead (no security). • Designed for UDP, although could be used with many transport protocols. • Easy to implement • Small - possible to include in firmware • Used to bootstrap workstations and network devices

  37. Diskless Workstation Booting - step 1 Help! I don't know who I am! My Ethernet address is: 4C:23:17:77:A6:03 Diskless Workstation RARP

  38. Diskless Workstation Booting – step 2 RARP Server I know all! You are to be known as: 128.113.45.211 Diskless Workstation RARP REPLY

  39. Diskless Workstation Booting - step 3 I need the file named boot-128.113.45.211 Diskless Workstation TFTP Request (Broadcast)

  40. Diskless Workstation Booting - step 4 TFTP File Transfer [Diagram: the TFTP server sends the boot file to the diskless workstation in parts: "here is part 1" / "I got part 1" / "here is part 2" ...]

  41. TFTP Protocol 5 message types: • Read request • Write request • Data • ACK (acknowledgment) • Error

  42. Messages • Each is an independent UDP Datagram • Each has a 2 byte opcode (1st 2 bytes) • The rest depends on the opcode.

  43. Message Formats (every message starts with a 2-byte opcode; block# and error code are also 2 bytes)
  RRQ/WRQ: | OPCODE | FILENAME | 0 | MODE | 0 |
  DATA:    | OPCODE | BLOCK# | DATA |
  ACK:     | OPCODE | BLOCK# |
  ERROR:   | OPCODE | ERRCODE | ERROR MESSAGE | 0 |

  44. Read Request | 01 | filename | 0 | mode | 0 | • 2-byte opcode, network byte order • filename: null-terminated ASCII string containing name of file • mode: null-terminated ASCII string containing transfer mode • variable length fields!

  45. Write Request | 02 | filename | 0 | mode | 0 | • 2-byte opcode, network byte order • filename: null-terminated ASCII string containing name of file • mode: null-terminated ASCII string containing transfer mode • variable length fields!

  46. TFTP Data Packet | 03 | block # | data | • 2-byte opcode, network byte order • 2-byte block number, network byte order • 0 to 512 bytes of data • all data packets carry 512 bytes except the last one.

  47. TFTP Acknowledgment | 04 | block # | • 2-byte opcode, network byte order • 2-byte block number, network byte order

  48. TFTP Error Packet | 05 | errcode | errstring | 0 | • 2-byte opcode, network byte order • 2-byte error code, network byte order • null-terminated ASCII error string

  49. TFTP Error Codes 0 - not defined 1 - File not found 2 - Access violation 3 - Disk full 4 - Illegal TFTP operation 5 - Unknown port 6 - File already exists 7 - No such user

  50. TFTP transfer modes • “netascii” : for transferring text files. • all lines end with \r\n (CR,LF). • provides standard format for transferring text files. • both ends responsible for converting to/from netascii format. • “octet” : for transferring binary files. • no translation done.
