
Analytical Projects


Presentation Transcript


  1. Analytical Projects
     • SHA-3 contest - Your Round 3 Report
     • Analyzing the Influence of a Computer Platform on Ranking of the SHA-3 Candidates in Terms of Performance in Software
     • Homomorphic Encryption
     • Security of GSM and 3G/4G Telephony
     • Security of Metro/Subway Cards
     • Security of Voting Machines
     • Survey of Codebreaking Machines and Projects Based on FPGAs, GPUs, Cell Processors, etc.
     • Encryption Schemes for Copy Protection of Digital Media

  2. Cryptographic Standard Contests

  3. Cryptographic Standards Before 1997
     [Timeline, 1970-2010]
     • Secret-Key Block Ciphers (IBM & NSA): DES – Data Encryption Standard (1977-2005), Triple DES (1999)
     • Hash Functions (NSA): SHA (1993), SHA-1 – Secure Hash Algorithm (1995), SHA-2 (2003)

  4. Why a Contest for a Cryptographic Standard?
     • Avoid back-door theories
     • Speed up the acceptance of the standard
     • Stimulate non-classified research on methods of designing a specific cryptographic transformation
     • Focus the effort of a relatively small cryptographic community

  5. Cryptographic Standard Contests
     [Timeline, 1996-2013]
     • AES: IX.1997 – X.2000, 15 block ciphers, 1 winner
     • NESSIE: I.2000 – XII.2002
     • CRYPTREC: 2000 – 2003
     • eSTREAM: XI.2004 – V.2008, 34 stream ciphers, 4 HW winners + 4 SW winners
     • SHA-3: X.2007 – XII.2012, 51 hash functions, 1 winner

  6. Cryptographic Contests – Evaluation Criteria
     • Security
     • Software Efficiency: μProcessors, μControllers
     • Hardware Efficiency: FPGAs, ASICs
     • Flexibility
     • Simplicity
     • Licensing

  7. AES Contest 1997-2000

  8. Rules of the Contest
     Each team submits:
     • Detailed cipher specification
     • Justification of design decisions
     • Tentative results of cryptanalysis
     • Source code in C
     • Source code in Java
     • Test vectors

  9. AES: Candidate Algorithms (by country)
     • USA: Mars, RC6, Twofish, Safer+, HPC
     • Canada: CAST-256, Deal
     • Germany: Magenta
     • Korea: Crypton
     • Japan: E2
     • Belgium: Rijndael
     • France: DFC
     • Israel, UK, Norway: Serpent
     • Australia: LOKI97
     • Costa Rica: Frog

  10. AES Contest Timeline
     • June 1998 – Round 1: 15 candidates (CAST-256, Crypton, Deal, DFC, E2, Frog, HPC, LOKI97, Magenta, Mars, RC6, Rijndael, Safer+, Serpent, Twofish); criteria: security, software efficiency
     • August 1999 – Round 2: 5 final candidates: Mars, RC6, Twofish (USA); Rijndael, Serpent (Europe); criteria: security, software efficiency, hardware efficiency
     • October 2000 – 1 winner: Rijndael (Belgium)

  11. Security: Theoretical attacks better than exhaustive key search
     [Bar chart: # of rounds broken by the best known attack vs. total # of rounds]
     • Serpent: 9 of 32 rounds
     • Twofish: 6 of 16 rounds
     • Mars: 11 of 16 rounds (without the 16 mixing rounds)
     • Rijndael: 7 of 10 rounds
     • RC6: 15 of 20 rounds

  12. Security: Theoretical attacks better than exhaustive key search
     [Bar chart: # of rounds in the attack / total # of rounds × 100%]
     • Serpent: 28% attacked, 72% margin
     • Twofish: 38% attacked, 62% margin
     • Mars: 69% attacked, 31% margin
     • Rijndael: 70% attacked, 30% margin
     • RC6: 75% attacked, 25% margin
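Each percentage is just the corresponding round count from the previous slide divided by the total number of rounds; for Serpent, for example:

    \frac{9\ \text{rounds attacked}}{32\ \text{rounds total}} \times 100\% \approx 28\%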

  13. Security: Authors of attacks
     Team Twofish:
     • Kelsey, Kohno, Schneier – attacks on MARS and Serpent
     • Ferguson, Stay, Wagner, Whiting – attack on Rijndael
     Team Serpent:
     • Knudsen, Meier – attack on RC6
     Other groups:
     • Lucks, U. Mannheim – attack on Twofish
     • Gilbert, Minier, France Telecom – attack on Rijndael
     • Gilbert, Handschuh, Joux, Vaudenay, France Telecom – attack on RC6

  14. NIST Report: Security & Simplicity
     [Chart: security margin (adequate vs. high) plotted against simplicity (simple vs. complex)]
     • High security margin: MARS, Serpent, Twofish
     • Adequate security margin: Rijndael, RC6

  15. Efficiency in software: NIST-specified platform (200 MHz Pentium Pro, Borland C++)
     [Bar chart: throughput in Mbit/s for 128-, 192-, and 256-bit keys; candidates shown in the order Rijndael, Twofish, RC6, Mars, Serpent]

  16. AES Contest: Encryption time in clock cycles on various platforms
     [Table from the Twofish team (Bruce Schneier & Doug Whiting); lower values are better]

  17. NIST Report: Software Efficiency – Encryption and Decryption Speed
     • 32-bit processors: high – Rijndael, Twofish, RC6; medium – Mars; low – Serpent
     • 64-bit processors: high – Rijndael, Twofish; medium – Mars, RC6; low – Serpent
     • DSPs: high – Rijndael; medium – Twofish, Mars, RC6; low – Serpent

  18. NIST Report: Software Efficiency – Encryption and decryption speed in software on smart cards
     • 32-bit processors: high – Rijndael, RC6; medium – Mars, Twofish; low – Serpent
     • 8-bit processors: high – Rijndael; medium – RC6, Mars, Twofish; low – Serpent

  19. Efficiency in Software
     Strong dependence on:
     1. Instruction set architecture (e.g., variable rotations; see the C sketch below)
     2. Programming language (assembler, C, Java)
     3. Compiler
     4. Compiler options
     5. Programming style
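A minimal C sketch of the operation behind point 1: RC6 and MARS use data-dependent (variable) rotations, which a good x86 compiler turns into a single ROL instruction but which may cost several instructions on other architectures or with other compilers. The rotl32 helper below is illustrative and is not taken from any candidate's reference code.

    #include <stdint.h>

    /* Data-dependent 32-bit left rotation of the kind used by RC6 and MARS.
     * On x86 this typically compiles to one ROL instruction; on ISAs without
     * a variable-rotate instruction (or with a weaker compiler) it becomes a
     * shift/or sequence - one source of the platform dependence listed above. */
    static inline uint32_t rotl32(uint32_t x, uint32_t n)
    {
        n &= 31;                                 /* keep shift amounts well defined */
        return (x << n) | (x >> ((32 - n) & 31));
    }

In RC6, for example, the rotation amount itself is derived from the data being encrypted, so how cheaply the target platform rotates by a variable amount directly affects the measured speed.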

  20. Efficiency in FPGAs: Speed (Xilinx Virtex XCV-1000)
     [Bar chart: throughput in Mbit/s, with independent results from George Mason University, the University of Southern California, and Worcester Polytechnic Institute for RC6, Mars, Rijndael, Twofish, Serpent x1, and Serpent x8; Serpent x8 reached the highest throughput, up to about 444 Mbit/s, while Mars and RC6 were the slowest]

  21. Efficiency in ASICs: Speed (MOSIS 0.5 μm, NSA group)
     [Bar chart: throughput in Mbit/s for two key-scheduling variants: 128-bit only, and 3-in-1 (128, 192, 256 bit)]
     • Rijndael: 606 / 443 Mbit/s (128-bit / 3-in-1 key scheduling)
     • Serpent x1: 202 / 202 Mbit/s
     • Twofish: 105 / 105 Mbit/s
     • RC6: 103 / 104 Mbit/s
     • Mars: 57 / 57 Mbit/s

  22. Lessons Learned
     Results for ASICs matched the results for FPGAs very well, and both were very different from the software results.
     [Side-by-side charts: FPGA – GMU+USC, Xilinx Virtex XCV-1000 (Serpent x8 and x1); ASIC – NSA team, 0.5 μm MOSIS (Serpent x1)]
     Serpent: fastest in hardware, slowest in software.

  23. Lessons Learned
     Hardware results matter!
     [Charts from the final round of the AES contest, 2000: votes at the AES 3 conference compared with speed in FPGAs (GMU results)]

  24. Limitations of the AES Evaluation
     • Optimization for maximum throughput
     • Single high-speed architecture per candidate
     • No use of embedded resources of FPGAs (Block RAMs, dedicated multipliers)
     • Single FPGA family from a single vendor: Xilinx Virtex

  25. SHA-3 Contest 2007-2012

  26. NIST SHA-3 Contest – Timeline
     • Oct. 2008 – Round 1: 51 candidates
     • July 2009 – Round 2: 14 candidates
     • Dec. 2010 – Round 3: 5 candidates
     • Mid 2012 – 1 winner

  27. SHA-3 Contest – Recent and Future Milestones
     • 23 Aug 2010 – Second SHA-3 Candidate Conference, Santa Barbara, USA
     • 9 Dec 2010 – Announcement of the 5 algorithms qualified to Round 3
     • 31 Jan 2011 – Acceptance of final tweaks for Round 3 candidates
     • 16 Feb 2011 – Publication of the Round 2 report
     • 22 Mar 2012 – Third SHA-3 Candidate Conference, Washington D.C. or Gaithersburg, MD, USA
     • Summer 2012 – Announcement of the winner
     • Beginning of 2013 – Publication of the new FIPS standard

  28. eBACS: ECRYPT Benchmarking of Cryptographic Systems: http://bench.cr.yp.to/
     SUPERCOP – toolkit developed by D. Bernstein and T. Lange for measuring performance of cryptographic software
     • measurements on multiple machines (currently over 90)
     • each implementation is recompiled multiple times (currently over 1600 times) with various compiler options
     • time measured in clock cycles/byte for multiple input/output sizes (illustrated in the sketch below)
     • median, lower quartile (25th percentile), and upper quartile (75th percentile) reported
     • standardized function arguments (common API)
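The measurement idea can be illustrated with a short C sketch. This is not SUPERCOP's actual code: it only assumes the SUPERCOP-style API int crypto_hash(out, in, inlen), times the call with the x86 cycle counter, and reports quartiles in cycles per byte; the run count, message length, and the dummy placeholder hash are arbitrary choices for illustration.

    /* Illustrative only - times a SUPERCOP-style crypto_hash() call in clock
     * cycles per byte on x86 and reports the lower quartile, median, and
     * upper quartile over repeated runs, as eBACS does. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <stdint.h>
    #include <x86intrin.h>                      /* __rdtsc() */

    /* Placeholder so the sketch compiles on its own; in practice this symbol
     * comes from the candidate implementation being benchmarked. */
    static int crypto_hash(unsigned char *out, const unsigned char *in,
                           unsigned long long inlen)
    {
        unsigned char acc = 0;
        for (unsigned long long i = 0; i < inlen; i++) acc ^= in[i];
        memset(out, acc, 64);                   /* NOT a real hash function */
        return 0;
    }

    static int cmp_u64(const void *a, const void *b)
    {
        uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;
        return (x > y) - (x < y);
    }

    int main(void)
    {
        enum { RUNS = 101, INLEN = 1536 };      /* arbitrary example sizes */
        unsigned char in[INLEN] = {0}, out[64];
        uint64_t cycles[RUNS];

        for (int i = 0; i < RUNS; i++) {
            uint64_t start = __rdtsc();
            crypto_hash(out, in, INLEN);
            cycles[i] = __rdtsc() - start;
        }
        qsort(cycles, RUNS, sizeof cycles[0], cmp_u64);

        printf("cycles/byte: 25%% %.2f, median %.2f, 75%% %.2f\n",
               (double)cycles[RUNS / 4] / INLEN,
               (double)cycles[RUNS / 2] / INLEN,
               (double)cycles[3 * RUNS / 4] / INLEN);
        return 0;
    }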

  29. SUPERCOP Extension for Microcontrollers – XBX: 2009-present
     • Allows on-board timing measurements (see the sketch below)
     • Supports at least the following microcontrollers:
       – 8-bit: Atmel ATmega1284P (AVR)
       – 32-bit: TI AR7 (MIPS), Atmel AT91RM9200 (ARM 920T), Intel XScale IXP420 (ARM v5TE), Cortex-M3 (ARM)
     Developers:
     • Christian Wenzel-Benner, ITK Engineering AG, Germany
     • Jens Gräf, LiNetCo GmbH, Heiger, Germany
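A microcontroller has no rdtsc, so on-board timing of the kind XBX performs has to rely on a hardware cycle counter instead. The sketch below assumes an ARM Cortex-M3 and its DWT cycle counter; it is illustrative only and is not taken from XBX.

    #include <stdint.h>

    /* ARMv7-M (Cortex-M3) debug registers used for cycle counting. */
    #define DEMCR      (*(volatile uint32_t *)0xE000EDFCu)  /* Debug Exception & Monitor Control */
    #define DWT_CTRL   (*(volatile uint32_t *)0xE0001000u)
    #define DWT_CYCCNT (*(volatile uint32_t *)0xE0001004u)

    /* Call once at startup to enable the free-running cycle counter. */
    static void cyccnt_init(void)
    {
        DEMCR     |= 1u << 24;      /* TRCENA: enable the DWT/ITM units   */
        DWT_CYCCNT = 0;
        DWT_CTRL  |= 1u;            /* CYCCNTENA: start the cycle counter */
    }

    /* Time one SUPERCOP-style hash call (see the previous sketch) in cycles. */
    extern int crypto_hash(unsigned char *out, const unsigned char *in,
                           unsigned long long inlen);

    static uint32_t time_hash_cycles(unsigned char *out, const unsigned char *in,
                                     unsigned long long inlen)
    {
        uint32_t start = DWT_CYCCNT;
        crypto_hash(out, in, inlen);
        return DWT_CYCCNT - start;  /* unsigned wrap-around is handled */
    }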

  30. ATHENa – Automated Tool for Hardware EvaluatioN: http://cryptography.gmu.edu/athena
     • Open-source benchmarking environment, written in Perl, aimed at AUTOMATED generation of OPTIMIZED results for MULTIPLE hardware platforms
     The most recent version, 0.6.2, was released in June 2011. Full features in ATHENa 1.0, to be released in 2012.

  31. Basic Dataflow of ATHENa
     [Diagram: the designer supplies HDL together with interfaces and testbenches; the user downloads scripts and configuration files from the ATHENa server, runs FPGA synthesis and implementation using the HDL, scripts, configuration files, and FPGA tools, and obtains a result summary plus database entries; database queries against the collected entries return a ranking of designs]

  32. Hardware Projects
     1. Low Area Implementation of a Selected Lightweight Hash Function
     2. Use of Embedded FPGA Resources (BRAMs, DSP units, etc.) in Implementations of the 5 Round 3 SHA-3 Candidates
     3. Your ECE 545 project + extension discussed with the Instructor

  33. Software Projects
     1. Optimizing Best Available Software Implementations of the SHA-3 Candidates (using coding techniques, special instructions, assembly language, etc.)
     2. Comparing the sphlib 2.1 C (or Java) Implementations of Hash Functions with the Best C (or Java) Implementations Submitted to eBACS
     3. Porting Selected C Implementations of the SHA-3 Candidates to the TI MSP430 Microcontroller or Other Microcontroller Available to You
     4. Software Implementations of Selected Lightweight Hash Functions
