
From UML to Performance Models: High Level View


Presentation Transcript


  1. From UML to Performance Models: High Level View
  Dorina C. Petriu and Gordon Gu, Carleton University
  Outline: well-formed annotated UML models; introduction to LQN; high-level view of the transformation
  www.sce.carleton.ca/rads/puma/

  2. Well-formed annotated UML model
  • key use cases described by representative scenarios
     • frequently executed, with performance constraints
  • resources used by each scenario
     • resource types: active or passive, physical or logical, hardware or software
     • examples: processor, disk, process, software server, lock, buffer
  • quantitative resource demands must be given for each scenario step
     • how much, and how many times?
  • workload intensity for each scenario
     • open workload: arrival rate of requests for the scenario
     • closed workload: number of simultaneous users
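The same information can also be pictured as plain data. The sketch below only illustrates what a well-formed model must supply for one scenario; the class names, field names and numeric values (the population of 50, the per-step demands) are assumptions made for the example, not part of the SPT profile.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Step:
    name: str                    # scenario step
    host_demand_ms: float        # mean CPU demand: "how much"
    repetitions: int = 1         # "how many times"
    resource: str = "ServerCPU"  # resource used by the step

@dataclass
class Workload:
    kind: str                                    # "open" or "closed"
    arrival_rate_per_s: Optional[float] = None   # open: arrival rate of requests
    population: Optional[int] = None             # closed: number of simultaneous users

@dataclass
class Scenario:
    name: str
    workload: Workload
    steps: List[Step] = field(default_factory=list)

# A closed workload with 50 simultaneous users (the $Nusers variable of slide 4).
retrieve = Scenario(
    name="RetrieveDocument",
    workload=Workload(kind="closed", population=50),
    steps=[
        Step("accept request", host_demand_ms=0.5),
        Step("read from disk", host_demand_ms=2.0, repetitions=3, resource="Sdisk"),
    ],
)
print(retrieve.workload.kind, [s.name for s in retrieve.steps])
```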

  3. Software architecture and deployment
  [Deployment diagram: the Client (1..n) and Server (1..k) components in a client-server association, with deployed instances DEclient and DEserver and the server-side Retrieve and SDiskIO components (all <<PAresource>>); they are allocated via <<GRMdeploy>> onto the <<PAhost>> nodes ClientCPU and ServerCPU, which are connected by an Ethernet network, and the <<PAresource>> disk Sdisk is attached to the server node.]

  4. Scenario with performance annotations
  [Annotated scenario for the "retrieve document" use case, with lifelines for the Client, RetrieveT and SDiskIOT and timing marks wait_S and wait_D. The client's "request document" step carries a <<PAclosedLoad>> annotation {PApopulation = $Nusers} and a response time requirement PArespTime = (('req','mean',(1,'sec')), ('pred','mean',$RespT)). The steps accept request, read request, parse request, update logfile, write to logfile, get document, read from disk, send document, receive document and recycle thread are each stereotyped <<PAstep>> with a PAdemand tag giving their mean host demand, parameterized by variables such as $cpuS, $gcdC, $scdC and $cdS (for example (220/$cpuS,'ms'), (1.30 + 130/$cpuS,'ms'), (35/$cpuS,'ms')); some steps also carry PAextOp tags for external operations such as ('net1',1), ('net2',$DocS), ('readDisk',$DocS) and ('writeDisk',$RP).]

  5. Layered Queueing Network (LQN) model
  http://www.sce.carleton.ca/rads/lqn/lqn-documentation
  [Example LQN: the ClientT task (entry clientE) on the Client CPU, the DB task (entries DBRead, DBWrite) on the DB CPU, and the Disk task (entries DKRead, DKWrite) on the DB Disk, layered so that clients call the database, which in turn calls the disk.]
  Advantages of LQN modeling:
  • models software tasks (rectangles) and hardware devices (circles)
  • represents nested services (a server is also a client to other servers)
  • software components have entries corresponding to different services
  • arcs represent service requests (synchronous and asynchronous)
  • multi-servers are used to model components with internal concurrency
  What we can get from the LQN solver:
  • service time (mean, variance)
  • waiting time
  • probability of missing a deadline
  • throughput
  • utilization
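To make these concepts concrete, here is a minimal data-structure sketch of the Client/DB/Disk example from the figure. It only illustrates tasks, entries, calls and multi-servers; it is not the LQN solver's input format, and the demand and call values are invented for the example.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Entry:
    name: str
    demand_ms: float                                        # host demand of the entry
    calls: Dict[str, float] = field(default_factory=dict)   # target entry -> mean requests

@dataclass
class Task:
    name: str
    processor: str           # hardware device (circle) the task runs on
    multiplicity: int = 1    # a multi-server models internal concurrency
    entries: List[Entry] = field(default_factory=list)

lqn = [
    Task("ClientT", "ClientCPU", multiplicity=50,
         entries=[Entry("clientE", demand_ms=5.0, calls={"DBRead": 1.0})]),
    Task("DB", "DBCPU",
         entries=[Entry("DBRead", demand_ms=2.0, calls={"DKRead": 1.2}),
                  Entry("DBWrite", demand_ms=3.0, calls={"DKWrite": 1.0})]),
    Task("Disk", "DBDisk",
         entries=[Entry("DKRead", demand_ms=8.0),
                  Entry("DKWrite", demand_ms=9.0)]),
]
for task in lqn:
    print(task.name, "on", task.processor, "->", [e.name for e in task.entries])
```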

  6. UML -> LQN Transformations: Mapping the structure
  [Diagram of the structural mapping rules: <<PAresource>> components (Comp) and active objects (Thread) become LQN tasks (CompT, ThreadT); <<PAhost>> processor nodes (XCPU) and <<PAresource>> devices (Ydisk) become LQN processors and devices; the <<GRMdeploy>> allocation determines which processor each task runs on.]
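The structural rules can be read as a simple rewriting of the annotated model. The sketch below is a hedged illustration of those rules in Python; the element representation (kind, stereotypes, deployed_on) is an assumption made for the example, not the actual PUMA transformation code.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class UmlElement:
    name: str
    kind: str               # "component", "activeObject", "node" or "device"
    stereotypes: List[str]
    deployed_on: str = ""   # target of the <<GRMdeploy>> allocation, if any

def map_structure(elements: List[UmlElement]) -> Tuple[List[str], List[tuple]]:
    """Hardware (<<PAhost>> nodes, <<PAresource>> devices) -> LQN processors/devices;
    software (<<PAresource>> components, active objects) -> LQN tasks on their host."""
    processors, tasks = [], []
    for e in elements:
        if e.kind in ("node", "device"):
            processors.append(e.name)                    # XCPU, Ydisk
        elif "PAresource" in e.stereotypes:
            tasks.append((e.name + "T", e.deployed_on))  # Comp -> CompT, Thread -> ThreadT
    return processors, tasks

model = [
    UmlElement("Comp", "component", ["PAresource"], deployed_on="XCPU"),
    UmlElement("Thread", "activeObject", ["PAresource"], deployed_on="XCPU"),
    UmlElement("XCPU", "node", ["PAhost"]),
    UmlElement("Ydisk", "device", ["PAresource"]),
]
print(map_structure(model))
# (['XCPU', 'Ydisk'], [('CompT', 'XCPU'), ('ThreadT', 'XCPU')])
```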

  7. UML -> LQN Transformation: Mapping the Behavior
  [Sequence diagram with the User/Client and the WebServer/Server: the client sends a request and waits for the reply; the server serves the request and replies, then optionally continues its service on its own; the client continues its work after receiving the reply. In the LQN this maps to entry e1 of the Client task (phase 1) on the Client CPU requesting entry e2 of the Server task on the Server CPU: the work done before the reply becomes phase 1 of e2, and the optional autonomous continuation after the reply becomes phase 2.]
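The key behavioural idea is the split into phases: everything an entry does before sending the reply belongs to phase 1 (the caller is still blocked), and the optional continuation after the reply belongs to phase 2. A small hedged sketch, with invented step names and demands:

```python
from typing import List, Tuple

def split_into_phases(steps: List[Tuple[str, float]],
                      reply_step: str) -> Tuple[List, List]:
    """Return (phase1, phase2) step groups; reply_step closes phase 1."""
    idx = [name for name, _ in steps].index(reply_step)
    return steps[: idx + 1], steps[idx + 1:]

# Steps executed by the server entry e2 for one request (illustrative values).
server_steps = [
    ("serve request", 4.0),      # work done while the client waits
    ("send reply", 0.3),         # reply released here -> end of phase 1
    ("continue service", 2.5),   # optional phase-2 work (e.g. logging, cleanup)
]
ph1, ph2 = split_into_phases(server_steps, reply_step="send reply")
print("e2 phase 1:", ph1)
print("e2 phase 2:", ph2)
```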

  8. Mapping software architecture and physical devices to LQN
  a) Mapping software architecture to LQN tasks: the Client instances (1..n), the Retrieve component and the SDiskIO component (all <<PAresource>>) become the LQN tasks DEclientT, RetrieveT and SDiskIOT.
  b) Mapping physical resources (processors and I/O devices) to LQN devices: the <<PAhost>> nodes ClientCPU and ServerCPU and the <<PAresource>> disk Sdisk, connected by the Ethernet and linked to the software by <<GRMdeploy>>, become the LQN devices Client CPU, Server CPU and Sdisk.

  9. Effect of communication network
  [Diagram: the Ethernet connecting ClientCPU and ServerCPU appears in the LQN as pseudo-elements net1 and net2 hosted on a dummy CPU, inserted on the paths between DEclientT (on the Client CPU) and RetrieveT (on the Server CPU), so that messages crossing the network incur the network delay; SDiskIOT keeps its Sdisk device.]
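One way to read this is that each message crossing the Ethernet is routed through an extra network element so it pays a delay. The fragment below is a standalone, illustrative sketch; the 0.1 ms latencies, the dictionary layout and the forwarding targets are assumptions, not values from the model.

```python
# Pseudo-tasks standing in for the network, hosted on a dummy processor:
# net1E delays traffic towards RetrieveT, net2E delays the document sent back
# to the client.  Values are illustrative only.
network = {
    "dummyCPU": {  # stands in for the Ethernet between ClientCPU and ServerCPU
        "net1E": {"latency_ms": 0.1, "forwards_to": "retrieveE"},
        "net2E": {"latency_ms": 0.1, "forwards_to": "clientE"},
    }
}
for entry, props in network["dummyCPU"].items():
    print(entry, props)
```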

  10. Groups of scenario steps to LQN entries
  [Diagram showing how groups of scenario steps from slide 4 map to LQN entries and phases: the client steps (request document, receive document) form entry clientE of DEclientT (phases 1 and 2) on the Client CPU; the server steps (accept request, read request, parse request, get document, send document, update logfile, write to logfile, recycle thread) form entry retrieveE of RetrieveT (phases 1 and 2) on the Server CPU, with the steps up to sending the document in phase 1 and the remaining steps in phase 2; the disk accesses form the read and write entries (phase 1) of SDiskIOT on Sdisk; the network operations map to entries net1E and net2E on the dummy CPU and Ethernet.]
