
L11: Lower Power High Level Synthesis(2)

L11: Lower Power High Level Synthesis (2). August 1999. Prof. Jun-Dong Cho, Sungkyunkwan University. http://vada.skku.ac.kr





Presentation Transcript


  1. L11: Lower Power High Level Synthesis (2). August 1999. Prof. Jun-Dong Cho, Sungkyunkwan University. http://vada.skku.ac.kr

  2. Exploiting spatial locality for interconnect power reduction
  • A spatially local cluster: a group of algorithm operations that are tightly connected to each other in the flowgraph representation.
  • Two nodes are tightly connected on the flowgraph representation if the shortest distance between them, in terms of the number of edges traversed, is low.
  • A spatially local assignment is a mapping of the algorithm operations to specific hardware units such that no operations in different clusters share the same hardware.
  • Partitioning the algorithm into spatially local clusters ensures that the majority of the data transfers take place within clusters (over local buses) and relatively few occur between clusters (over global buses).
  • The partitioning information is passed to the architecture netlist and floorplanning tools.
  • Local: a given adder outputs data to its own inputs. Global: a given adder outputs data to another adder's inputs.
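The clustering step described above can be sketched as a greedy graph traversal. The function below is an illustrative reconstruction, not the tool's actual algorithm: it grows a cluster outward from a seed node by breadth-first search, keeping only nodes within a small shortest-path distance (the "tightly connected" criterion), so that operations in different clusters can later be bound to different hardware units.

```python
from collections import deque, defaultdict

def spatially_local_clusters(edges, max_dist=2):
    """Group flowgraph nodes into spatially local clusters.

    Nodes whose shortest-path distance (number of edges traversed)
    from the cluster seed is at most max_dist end up in one cluster.
    Greedy sketch: grow a cluster from each still-unvisited seed by BFS.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    visited, clusters = set(), []
    for seed in sorted(adj):
        if seed in visited:
            continue
        cluster, frontier = {seed}, deque([(seed, 0)])
        visited.add(seed)
        while frontier:
            node, dist = frontier.popleft()
            if dist == max_dist:        # stop expanding past the radius
                continue
            for nxt in adj[node]:
                if nxt not in visited:
                    visited.add(nxt)
                    cluster.add(nxt)
                    frontier.append((nxt, dist + 1))
        clusters.append(cluster)
    return clusters
```

On a flowgraph made of two triangles {0,1,2} and {3,4,5} joined by a single bridge edge, this yields two clusters, so most data transfers stay on local buses and only the bridge transfer needs a global bus.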

  3. Hardware Mapping
  • The last step in the synthesis process maps the allocated, assigned and scheduled flow graph (called the decorated flow graph) onto the available hardware blocks.
  • The result of this process is a structural description of the processor architecture (e.g., sdl input to the Lager IV silicon assembly environment).
  • The mapping process transforms the flow graph into three structural sub-graphs:
    - the data path structure graph
    - the controller state machine graph
    - the interface graph (between data path control inputs and the controller output signals)

  4. Spectral Partitioning in High-Level Synthesis
  • The eigenvector placement obtained forms an ordering in which nodes tightly connected to each other are placed close together.
  • The relative distances are a measure of the tightness of the connections.
  • Use the eigenvector ordering to generate several partitioning solutions.
  • The area estimates are based on distribution graphs.
  • A distribution graph displays the expected number of operations executed in each time slot.
  • Local bus power: the number of local data transfers times the area of the cluster.
  • Global bus power: the number of global data transfers times the total area.
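The eigenvector ordering referred to above is typically the Fiedler vector: the eigenvector of the graph Laplacian associated with the second-smallest eigenvalue. A minimal sketch (illustrative, not the HYPER-LP implementation): sorting nodes by their Fiedler-vector coordinate places tightly connected nodes next to each other, and cutting the sorted order at different points generates candidate partitions.

```python
import numpy as np

def spectral_ordering(n, edges):
    """Order flowgraph nodes by their Fiedler-vector coordinate.

    Builds the graph Laplacian L = D - A and returns node indices
    sorted by the eigenvector of the second-smallest eigenvalue.
    Tightly connected nodes receive nearby coordinates.
    """
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    L = np.diag(A.sum(axis=1)) - A
    # eigh returns eigenvalues in ascending order for symmetric matrices
    _, eigvecs = np.linalg.eigh(L)
    fiedler = eigvecs[:, 1]
    return list(np.argsort(fiedler))
```

For the two-triangle graph joined by one bridge edge, the ordering lists one triangle's nodes before the other's, so the single cut between them is the obvious spatially local partition.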

  5. Finding a good Partition

  6. Interconnection Estimation
  • For connections within a datapath (over-the-cell routing), routing between units increases the actual height of the datapath by approximately 20-30%, and most wire lengths are about 30-40% of the datapath height.
  • Average global bus length: the square root of the estimated chip area.
  • The three terms of the area estimate represent white space, the active area of the components, and the wiring area. The coefficients are derived statistically.
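The three-term area model and the bus-length rule above can be written down directly. The coefficient values below are placeholders, since the slide only says they are derived statistically; the function names are mine.

```python
import math

def estimated_chip_area(white_space, active_area, wiring_area,
                        a=1.0, b=1.0, c=1.0):
    """Three-term chip-area estimate: white space + active component
    area + wiring area.  The coefficients a, b, c are fitted
    statistically in the original work; unit values are placeholders."""
    return a * white_space + b * active_area + c * wiring_area

def avg_global_bus_length(chip_area):
    """Average global bus length ~ square root of the estimated chip area."""
    return math.sqrt(chip_area)
```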

  7. Incorporating into HYPER-LP

  8. Experiments

  9. Datapath Generation
  • Register file recognition and multiplexer reduction:
    - Individual registers are merged as much as possible into register files.
    - This reduces the number of bus multiplexers, the overall number of busses (since all registers in a file share the input and output busses), and the number of control signals (since a register file uses a local decoder).
  • The multiplexer and I/O bus costs are minimized simultaneously (clique partitioning is NP-complete, so simulated annealing is used).
  • Data path partitioning optimizes the processor floorplan.
  • The core idea is to grow pairs of isomorphic regions, as large as possible, from corresponding pairs of seed nodes.
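Since the slide only names simulated annealing as the heuristic for the NP-complete partitioning step, here is a minimal, hypothetical sketch of that idea applied to register-to-register-file binding: registers that are alive simultaneously (a "conflict") must not share a file, and each file used adds a bus/multiplexer penalty. The cost weights and cooling schedule are illustrative, not taken from the tool.

```python
import math
import random

def anneal_register_binding(n_regs, conflicts, n_files,
                            steps=4000, t0=2.0, seed=1):
    """Simulated-annealing sketch for binding registers to register files.

    conflicts: pairs of registers alive at the same time (must not share
    a file).  Cost = heavy penalty per conflict in the same file, plus
    one penalty per file actually used (bus/mux cost).
    """
    rng = random.Random(seed)
    assign = [rng.randrange(n_files) for _ in range(n_regs)]

    def cost(a):
        clashes = sum(a[u] == a[v] for u, v in conflicts)
        return 10 * clashes + len(set(a))

    cur = cost(assign)
    best, best_cost = list(assign), cur
    for step in range(steps):
        t = max(t0 * (1 - step / steps), 1e-3)   # linear cooling
        r = rng.randrange(n_regs)
        old = assign[r]
        assign[r] = rng.randrange(n_files)       # random re-binding move
        new = cost(assign)
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new
            if cur < best_cost:
                best, best_cost = list(assign), cur
        else:
            assign[r] = old                      # reject the move
    return best, best_cost
```

On a toy instance with four registers, conflicts (0,1) and (2,3), and two files, the optimum is a clash-free binding using both files (cost 2).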

  10. Hardware Mapper

  11. Hyper's Basic Architecture Model

  12. Hyper's Crossbar Network

  13. Refined Architecture Model

  14. Bus Merging

  15. Fanin Bus Merging

  16. Fanout Bus merging

  17. Global bus Merging

  18. Test Example

  19. Control Signal Assignment

  20. Factors of the coarse-grained model(obtained by switch level simulator)

  21. Low Power Scheduling and Binding (a) scheduling without low-power considerations (b) low-power-aware scheduling - Design Automation Laboratory

  22. The coarse-grained model provides a fast estimate of the power consumption when no information about the activity of the input data to the functional units is available.

  23. Fine-grained model • Used when information about the activity of the input data to the functional units is available.
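The difference between the two estimation models can be sketched with a standard switched-capacitance formulation (P proportional to C·Vdd²·f·activity); this is my illustrative reconstruction, and the use of Hamming distance as the activity measure is an assumption suggested by the multiplier experiment on the next slide.

```python
def coarse_power(cap_eff, vdd, f_clk, n_activations):
    """Coarse-grained model: a fixed effective capacitance per
    activation of the functional unit, independent of what the
    input data actually looks like."""
    return cap_eff * vdd ** 2 * f_clk * n_activations

def fine_power(cap_per_toggle, vdd, f_clk, hamming_distances):
    """Fine-grained model: energy scaled by the measured input
    activity, here the Hamming distance between successive
    operand pairs presented to the unit."""
    return cap_per_toggle * vdd ** 2 * f_clk * sum(hamming_distances)
```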

  24. Effect of the operand activity on the power consumption of an 8 x 8-bit Booth multiplier (plotted against the AHD of the input data).

  25. High-Level Power Estimation: PMUX and PFU

  26. Loop Interchange • If matrix A is laid out in memory in column-major form, execution order (a.2) implies more cache misses than execution order (b.2). Thus, the compiler chooses algorithm (b.1) to reduce the running time.
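The interchange above swaps which loop is innermost so that the inner loop walks sequentially through memory. A minimal sketch of the two traversal orders (the (a)/(b) labels mirror the slide; the actual algorithm on the slide is not reproduced here), using a column-major (Fortran-order) array so the layout matches the slide's assumption:

```python
import numpy as np

def sum_row_inner(A):
    """(a)-style order: rows outer, columns inner.  With a
    column-major layout the inner loop strides through memory,
    causing the extra cache misses described on the slide."""
    n, m = A.shape
    s = 0.0
    for i in range(n):
        for j in range(m):
            s += A[i, j]
    return s

def sum_col_inner(A):
    """(b)-style order after loop interchange: the inner loop walks
    down a column, matching the column-major layout, so memory
    accesses are sequential."""
    n, m = A.shape
    s = 0.0
    for j in range(m):
        for i in range(n):
            s += A[i, j]
    return s
```

Both orders compute the same result; only the memory-access pattern, and hence the cache behavior and running time, differs.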
