
Telecommunication Systems Design



  1. Telecommunication Systems Design October 10, 2007 Akiyuki Goto Faculty of Information Technology King Mongkut’s Institute of Technology North Bangkok

  2. Revision

  3. CONTENTS
  1. Preface
  2. Overview of Telecommunication Systems
  3. Network Planning (Master plan, QoS, GoS, CoS, Proposal, Quality, Network management)
  4. OSI
  5. Network Configuration (PSTN, ISDN, ATM, Packet switch, LAN switch, Internet, Ethernet, Wireless network)
  6. Network Analysis and Design
  7. Trouble Shooting
  8. Standardization
  9. GLOSSARY

  4. 1. Preface Telecommunications play an ever-increasing role in improving the quality of life of all peoples. For example, advanced communications systems and meteorology help to boost the efficiency and safety of our transportation systems. Telecommunication and postal services lessen the need to travel, reducing traffic congestion and thereby cutting the economic, social and environmental costs associated directly or indirectly with it. A reliable weather forecast and an effective natural-disaster warning system can prevent or mitigate damage and loss of life. These are a few examples of how telecommunications can better our lives. The 21st century will mark the entrance to what we call the "Information Age". As the world economy becomes increasingly competitive, the need for a modern and reliable telecommunications network is as compelling as ever. Access to high-quality communications services at competitive prices will bolster economic growth and boost efficiency and productivity in the business, government and social-services sectors. With the emergence of the "information highway", citizens across the world can exchange, access and learn from all sorts of information made available online. Moreover, they will also be able to exchange and communicate ideas, furthering what is becoming a borderless society. Thus, there need to be leaps in telecommunications development, be they in terms of quantity, quality, access, regulation or competition, as well as the privatization of state enterprises, in order for the industry to meet the surging demand for communications services in keeping with the expansion of the economy and population and rising income levels, as well as to prepare for the new era brought about by the "Information Revolution".

  5. 2. Overview of Telecommunication Systems
  1883 - Transformer invented.
  1891 - 60-cycle AC system introduced in the U.S.
  1897 - Electron discovered by J. J. Thomson.
  1903 - Electric vacuum cleaner. Electric washing machine.
  1906 - Lee de Forest invents the vacuum tube.
  1908 - A wireless message was sent long-distance for the first time, from the Eiffel Tower in Paris.
  1911 - Air conditioning.
  1912 - First Strowger exchange opened in the UK, at Epsom in Surrey.
  1915 - First automatic telephone exchange in Britain.
  1919 - Wireless telephone invented, enabling air pilots to talk in flight.
  1925 - Bell Telephone Laboratories founded. 1.5 million dial telephones in service out of 12 million phones in service.
  1930 - The BBC begins regular TV transmissions.
  1935 - First telephone call around the world. About 6700 telcos in operation.
  1936 - About 200 television sets are in use worldwide.
  1947 - The transistor is invented.
  1949 - AT&T introduces the famous black rotary Model 500 telephone.
  1954 - Bell Labs announces the first solar battery.
  1960 - AT&T installs the first electronic switching system in Morris, IL. There are now 3299 telephone companies.
  1965 - Texas Instruments develops transistor-transistor logic (TTL).
  1967 - Most TV broadcasts are in color.
  1969 - AT&T Bell Laboratories develop Unix.
  1970 - Bell Telephone Labs release design information to Western Electric for the production of modular telephone cords and jacks.
  1971 - Intel develops the first microprocessor, the 4004.
  1974 - Intel's improved microprocessor chip, the 8080, becomes a standard in the microcomputing industry.
  1978 - The Intel 8086 is introduced.
  [Photo: Alexander Graham Bell]

  6. 1981 - VHDL is proposed and begins development.
  1983 - The first hand-held mobile phone becomes commercially available.
  1985 - Bellcore introduces "IN" (Intelligent Network).
  1986 - The first fibre-optic cable across the English Channel begins service.
  1987 - Bellcore introduces the Asymmetric Digital Subscriber Line (ADSL) concept, which has the potential of multimedia transmission over the nation's copper loops.
  1989 - AT&T reports its first loss in 103 years: $1.67 billion in 1988.
  1991 - The computer Monkey virus is first discovered in Edmonton, Canada. The first GSM mobile phone network opens in Europe.
  1992 - The World Wide Web is born - the brainchild of CERN physicist Tim Berners-Lee.
  1994 - World's first satellite digital television service launched.
  1995 - Microsoft releases Windows 95; within four days the software sells more than 1 million copies. DVD, an optical disc storage media format, is announced.
  1998 - Microsoft Windows 98 is officially released on June 25, 1998.
  1999 - IEEE introduces 802.11b. Bluetooth announced.
  2001 - Microsoft Windows XP home and professional editions are released October 25, 2001.
  2005 - Microsoft Windows XP Professional x64 Edition is released on April 24, 2005.
  2007 - Microsoft releases Windows Vista and Office 2007 to the general public January 30, 2007.

  7. First-generation cellular standards: Advanced Mobile Phone System (AMPS) was first launched in the US. It is an analog system based on FDMA (Frequency Division Multiple Access) technology. Today, it is the most used analog system and the second largest worldwide. Nordic Mobile Telephone (NMT) was mainly developed in the Nordic countries (4.5 million users in 1998 in some 40 countries, including the Nordic countries, Asia, Russia, and other Eastern European countries). Total Access Communications System (TACS) was first used in the UK in 1985. It was based on the AMPS technology. 2G cellular: With second-generation networks, or 2G, wireless technology progressed from analog to digital. These networks are still the most prevalent standard in use today. There are three main 2G network standards: CDMA (Code Division Multiple Access), GSM (Global System for Mobile Communications), and TDMA (Time Division Multiple Access). Each type has its own characteristics and features. For instance, GSM networks are global, and the mobile devices connecting to them can be used in the United States and abroad. But 2G networks were primarily intended for digital voice services. Even under ideal circumstances, 2G networks are painfully slow at sending data, reaching 10 to 19 kilobits per second, less than half the speed of a traditional 56-kbps dial-up modem. And unless they have been especially optimized, most Web pages accessed from a 2G network inch across a handheld screen, which makes surfing the Web on a 2G wireless device about as efficient as running underwater. To date, network service providers have had a difficult time luring the masses onto the wireless Web. Other 2G-era systems include PDC and PHS, used mainly in Japan.

  8. 3G cellular: Mobile telephony allowed us to talk on the move. The internet turned raw data into helpful services that people found easy to use. Now, these two technologies are converging to create third-generation mobile services. In simple terms, third generation (3G) services combine high-speed mobile access with Internet Protocol (IP)-based services. The 3G technology primarily consists of two standards, WCDMA (Wideband CDMA) and CDMA2000. Third generation: It was in the mid-1980s that the concept for IMT-2000, "International Mobile Telecommunications", was born at the ITU as the third-generation system for mobile communications. After over ten years of hard work under the leadership of the ITU, a historic decision was taken in the year 2000: unanimous approval of the technical specifications for third-generation systems under the brand IMT-2000. The spectrum between 400 MHz and 3 GHz is technically suitable for the third generation. The entire telecommunication industry, including both industry and national and regional standards-setting bodies, made a concerted effort to avoid the fragmentation that had thus far characterized the mobile market. This approval meant that, for the first time, full interoperability and interworking of mobile systems could be achieved. IMT-2000 offers the capability of providing value-added services and applications on the basis of a single standard. The system envisages a platform for distributing converged fixed, mobile, voice, data, Internet and multimedia services. One of its key visions is to provide seamless global roaming, enabling users to move across borders while using the same number and handset. IMT-2000 also aims to provide seamless delivery of services over a number of media (satellite, fixed, etc.). It is expected that IMT-2000 will provide higher transmission rates: a minimum speed of 2 Mbit/s for stationary or walking users, and 384 kbit/s in a moving vehicle. Second-generation systems only provide speeds ranging from 9.6 kbit/s to 28.8 kbit/s. Third-generation standard: The basic 3G standards were developed largely by the private sector rather than formal standards organizations. However, the International Telecommunication Union has adopted International Mobile Telecommunications 2000 (IMT-2000) to formally standardize the already developed 3G wireless flavors, to let them offer a consistent set of services throughout the world, and to provide a roadmap for upgrades.

  9. 4G cellular:
  - WiMAX
  - WiBro
  - Smart antennas
  - Multiple-Input Multiple-Output (MIMO) systems
  - Space-time coding
  - Dynamic packet assignment
  - Wideband OFDM

  10. AMPS: Advanced Mobile Phone System
  EV-DO: Evolution-Data Optimized
  GPRS: General Packet Radio Service
  GSM: Global System for Mobile Communications
  HSDPA: High Speed Downlink Packet Access
  IMT: International Mobile Telecommunications
  NMT: Nordic Mobile Telephone
  OFDM: Orthogonal Frequency Division Multiplexing
  PDC: Personal Digital Cellular
  PHS: Personal Handy-phone System
  TACS: Total Access Communications System
  TDMA: Time Division Multiple Access
  UMB: Ultra Mobile Broadband
  W-CDMA: Wideband Code Division Multiple Access

  11. Early telecommunications included smoke signals and drums. Drums were used by natives in Africa, New Guinea and South America, and smoke signals in North America and China. Contrary to what one might think, these systems were often used to do more than merely announce the presence of a camp. In 1792, the French engineer Claude Chappe built the first visual telegraphy (or semaphore) system, between Lille and Paris. This was followed by a line from Strasbourg to Paris. In 1794, the Swedish engineer Abraham Edelcrantz built a quite different system, from Stockholm to Drottningholm. As opposed to Chappe's system, which involved pulleys rotating beams of wood, Edelcrantz's system relied only upon shutters and was therefore faster. However, semaphore as a communication system suffered from the need for skilled operators and expensive towers, often at intervals of only ten to thirty kilometres (six to nineteen miles). As a result, the last commercial line was abandoned in 1880.

  12. On September 11, 1940, George Stibitz was able to transmit problems using a teletype to his Complex Number Calculator in New York and receive the computed results back at Dartmouth College in New Hampshire. This configuration of a centralized computer or mainframe with remote dumb terminals remained popular throughout the 1950s. However, it was not until the 1960s that researchers started to investigate packet switching - a technology that would allow chunks of data to be sent to different computers without first passing through a centralized mainframe. A four-node network emerged on December 5, 1969 between the University of California, Los Angeles, the Stanford Research Institute, the University of Utah and the University of California, Santa Barbara. This network would become ARPANET, which by 1981 would consist of 213 nodes. In June 1973, the first non-US node, belonging to Norway's NORSAR project, was added to the network. This was shortly followed by a node in London. ARPANET's development centred around the Request for Comments process, and on April 7, 1969, RFC 1 was published. This process is important because ARPANET would eventually merge with other networks to form the Internet, and many of the protocols the Internet relies upon today were specified through this process. In September 1981, RFC 791 introduced the Internet Protocol v4 (IPv4) and RFC 793 introduced the Transmission Control Protocol (TCP) - thus creating the TCP/IP protocol suite that much of the Internet relies upon today. A more relaxed transport protocol that, unlike TCP, did not guarantee the orderly delivery of packets, called the User Datagram Protocol (UDP), was submitted on 28 August 1980 as RFC 768. An e-mail protocol, SMTP, was introduced in August 1982 by RFC 821, and HTTP/1.0, a protocol that would make the hyperlinked Internet possible, was introduced in May 1996 by RFC 1945. However, not all important developments were made through the Request for Comments process. Two popular link protocols for local area networks (LANs) also appeared in the 1970s. A patent for the Token Ring protocol was filed by Olof Soderblom on October 29, 1974, and a paper on the Ethernet protocol was published by Robert Metcalfe and David Boggs in July 1976.

  13. Telecommunication is the technique of transmitting a message from one point or place to another, with the typical additional attribute of being bi-directional. In practice it also recognizes that something may be lost in the process; hence the term 'telecommunication' covers all forms of distance communication, including radio, telegraphy, television, telephony, data communication and computer networking. The elements of a telecommunication system are a transmitter, a medium (line) and possibly a channel imposed upon the medium, and a receiver. The transmitter is a device that transforms or encodes the message into a physical phenomenon: the signal. The transmission medium, by its physical nature, is likely to modify or degrade the signal on its path from the transmitter to the receiver. The receiver has a decoding mechanism capable of recovering the message within certain limits of signal degradation. In some cases, the final "receiver" is the human eye and/or ear (or in some extreme cases other sense organs) and the recovery of the message is done by the brain. Telecommunication can be point-to-point, point-to-multipoint or broadcasting, which is a particular form of point-to-multipoint that goes only from the transmitter to the receivers. The art of the telecommunications engineer is to analyse the physical properties of the line or transmission medium, and the statistical properties of the message, in order to design the most effective encoding and decoding mechanisms. When systems are designed to communicate through human sense organs (mainly vision and hearing), physiological and psychological characteristics of human perception are taken into account. This has important economic implications, and engineers research what defects may be tolerated in the signal without affecting the viewing or hearing experience too badly.

  14. Bell Labs scientist Claude E. Shannon published A Mathematical Theory of Communication in 1948. This landmark publication set out the mathematical models used to describe communication systems, a field called information theory. Information theory enables us to evaluate the capacity of a communication channel according to its bandwidth and signal-to-noise ratio. At the time of publication, telecommunication systems were predominantly based on analog electronic circuit design. The introduction of mass-produced digital integrated circuits has enabled telecom engineers to take full advantage of information theory. From the demands of telecom circuitry, a whole specialist area of integrated circuit design has emerged, called digital signal processing. Shannon also built a clever electromechanical mouse, which was one of the earliest attempts to "teach" a machine to "learn" and one of the first experiments in artificial intelligence, and he designed and built chess-playing, maze-solving, juggling and mind-reading machines. These activities bear out Shannon's claim that he was more motivated by curiosity than usefulness. In his words: "I just wondered how things were put together."
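
The capacity evaluation mentioned above is the Shannon-Hartley theorem. As an illustrative worked example (the numbers are ours, not from the slides), a voice-grade telephone channel of bandwidth B = 3100 Hz with a signal-to-noise ratio of 30 dB (S/N = 1000) gives:

```latex
C = B \log_2\left(1 + \frac{S}{N}\right)
  = 3100 \times \log_2(1001)
  \approx 3100 \times 9.97
  \approx 30.9\ \text{kbit/s}
```

This is one way to see why analog dial-up modems stalled near 33.6 kbit/s: they were approaching the Shannon limit of the analog voice channel.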

  15. A telecommunication system consists of three basic elements:
  1. a transmitter that takes information and converts it to a signal;
  2. a transmission medium that carries the signal; and
  3. a receiver that receives the signal and converts it back into usable information.
  For example, in a radio broadcast, the broadcast tower is the transmitter, free space is the transmission medium and the radio is the receiver. Often telecommunication systems are two-way, and a single device acts as both a transmitter and receiver, or transceiver. For example, a mobile phone is a transceiver. Telecommunication over a phone line is called point-to-point communication because it is between one transmitter and one receiver. Telecommunication through radio broadcasts is called broadcast communication because it is between one powerful transmitter and numerous receivers.
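
As a minimal sketch of these three elements (our illustration, not from the slides), the Python below models the transmitter as a bit encoder, the transmission medium as a channel that randomly flips bits, and the receiver as a decoder that recovers the message within limits of degradation:

```python
import random

def transmitter(message: str) -> list[int]:
    """Transmitter: encode the message into a physical signal (here, bits, MSB first)."""
    return [(ch >> i) & 1 for ch in message.encode() for i in range(7, -1, -1)]

def medium(signal: list[int], error_rate: float = 0.001) -> list[int]:
    """Medium: may modify or degrade the signal on its path (random bit flips)."""
    return [b ^ 1 if random.random() < error_rate else b for b in signal]

def receiver(signal: list[int]) -> str:
    """Receiver: decode the signal back into a message, tolerating some degradation."""
    chars = bytes(
        sum(bit << (7 - i) for i, bit in enumerate(signal[k:k + 8]))
        for k in range(0, len(signal), 8)
    )
    return chars.decode(errors="replace")

print(receiver(medium(transmitter("hello, world"))))
```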

  16. During the next ten years, great changes are expected in the ways of doing business in the telecommunications market. The relative importance of traditional transmission and switching services will decrease. This is due both to the liberalisation of telecommunications and to developments in technology. We anticipate that in the future transmission and switching will be done using generic technology based on international standards. Several architectures in telecommunications currently need supporting services: 1) the Intelligent Network (IN), 2) the Telecommunications Management Network (TMN), 3) the Telecommunications Information Networking Architecture (TINA), and 4) the third-generation mobile systems (UMTS and FPLMTS/IMT-2000). TINA-C, a worldwide consortium developing the Telecommunication Information Networking Architecture (TINA), has the goal of defining and validating an open architecture for future telecommunications services. The architecture is based on distributed computing, object orientation, and other standards and recommendations in the telecommunications and distributed-processing fields, especially Open Distributed Processing (ODP), Intelligent Networks (IN), Telecommunications Management Networks (TMN), Asynchronous Transfer Mode (ATM), and the Common Object Request Broker Architecture (CORBA). TINA: The purpose of these principles is to ensure interoperability, portability and reusability of software components and independence from specific technologies, and to share the burden of creating and managing a complex system among different business stakeholders, such as consumers, service providers, and connectivity providers.

  17. Examples of telecommunications systems:
  Telegraphy
  Public Switched Telephone Network (PSTN)
  Radio
  Television
  Communications satellites
  Mobile phone
  Local Area Network (LAN), Ethernet: best-effort type
  Internet: best-effort type
  xDSL: ADSL, RADSL, SDSL, HDSL, VDSL

  18. 3. Network planning: The Master Plan provides a realistic and achievable image of the County, both present and future, through a framework of goals and policies. The goals provide general statements reflecting the desires of county residents regarding the use of land and lay the groundwork for zoning and the land-use decision-making process. The policies provide the County's positions relating to the identified goals and establish guidelines for direction or action. The physical development of the County has direct and indirect effects on property rights, natural resources and property values. This Master Plan seeks a balance that respects both, in an effort to maintain the county residents' quality of life. Therefore, it is the intent to allow development that is responsible and consistent with the goals and objectives set out in this plan. [Diagram: planning flow - country goals and demand feed a 5-10 year Master Plan (quality of life), which cascades into a country development plan, an execution plan, several projects and project plans, project selection, and vendors' proposals against a proposal specification, managed as a P/D/C/A (Plan-Do-Check-Act) cycle.]

  19. Example of a Master Plan: e-Japan: Summary (2001-2005) (January 22, 2001). Japan must take revolutionary yet realistic actions promptly in order to create a "knowledge-emergent society," where everyone can actively utilize IT and fully enjoy its benefits. We will strive to establish an environment where the private sector, based on market forces, can exert its full potential, and make Japan the world's most advanced IT nation within five years. 1. Philosophy: In order for Japan to continue its economic prosperity and raise the quality of life, it is vital to promptly establish a new national infrastructure, including legal frameworks and information infrastructures, suitable for a new society. The United States and European and Asian nations are aggressively developing their IT infrastructures as part of their national strategies, in order to secure world competitive leadership in the 21st century, in recognition of the importance of creating a "knowledge-emergent" environment. In order to implement the necessary institutional reforms and measures quickly and steadfastly, aiming at the world's most advanced IT environment, Japan must establish a national strategy and ensure a common and shared understanding of it among the nation's citizens. The government should promptly establish an infrastructure that functions according to market forces, so that the private sector can engage in various creative activities through free and fair competition.
  (1) Establishment of the ultra-high-speed network infrastructure and competition policies: Aim to provide high-speed constant-access networks to at least 30 million households and ultra-high-speed (100 Mbps) constant access to 10 million households. Promote the shift to Internet networks equipped with IPv6, over FTTH/CATV/xDSL/FWA.
  (2) Facilitation of electronic commerce
  (3) Realization of electronic government
  (4) Nurturing high-quality human resources

  20. Wireless Telecommunications Master Plan: A Wireless Telecommunications Master Plan must be fluid and capable of evolving to accommodate additional carriers and services (e.g., wireless Internet, a service that was not envisioned when the Telecommunications Act of 1996 was written and is the next major impending technology), as well as population growth and future local infrastructure development. The Master Plan is more than a set of prepackaged guidelines for wireless development; rather, it is a functional representation of the community's physical space and demonstrates existing and potential wireless facilities. With a Master Plan, the community will know how and where future telecommunications infrastructure deployments will occur, rather than reacting to demands from multiple service providers or tower owners. The Master Plan, as enabled by the Ordinance, lessens the burden on staff by streamlining the application process for those applicants who develop in accordance with the Ordinance and Master Plan, and by shifting the technical review from staff to a third party who is certified in those disciplines necessary to conduct and certify such reviews. The Master Plan combines land-use planning strategies used in public policy with industry-accepted radio-frequency engineering standards to create an illustrative planning tool that complements the Development Ordinance. The first step is to identify existing tower locations and their corresponding signal-coverage conditions. Second, compare this information to the locations of publicly owned land and existing public policy, followed by a series of evaluations founded on land-use principles and engineering practices. The plan offers strategies to reduce tower infrastructure by improving efforts to "merge" wireless deployments from various service providers, thereby minimizing tower proliferation by increasing shared sites.

  21. From a worldwide market viewpoint, the next several phases of wireless are inevitable. In Japan and Europe, 3G is already being deployed and used by the citizens of those countries. According to the wireless telephone manufacturer Nokia, the Nokia 3G network solution was available for operators in 2001 and 2002. The first locations to have 3G services were Japan in 2001 and Europe in 2002. The United States is starting to experience the first deployments of 3G; other parts of the world are being introduced to 4G. This proves to early skeptics that, while the deployment of wireless services in the United States has slowed down, 3G services will continue to evolve and be sold there and abroad. The article below explains the type of wireless services now being promoted in Asia, which will eventually be promoted in the United States.

  22. Network planning and design is an iterative process, encompassing topological design, network synthesis, and network realization, and is aimed at ensuring that a new network or service meets the needs of the subscriber and operator. The process can be tailored to each new network or service. It is an extremely important process which must be performed before the establishment of a new telecommunications network or service. A traditional network planning methodology involves four layers of planning, namely:
  1. business planning;
  2. long-term and medium-term network planning;
  3. short-term network planning;
  4. operations and maintenance.
  The network planning process begins with the acquisition of external information. This includes:
  1. forecasts of how the new network/service will operate;
  2. economic information concerning costs; and
  3. the technical details of the network's capabilities.
  Before the network planning process begins, choices must be made involving protocols and transmission technologies.

  23. Once the initial decisions have been made, the network planning process involves three main steps.
  Topological design: This stage involves determining where to place the components and how to connect them. The (topological) optimisation methods that can be used in this stage come from an area of mathematics called graph theory. These methods involve determining the costs of transmission and the cost of switching, and thereby determining the optimum connection matrix and location of switches and concentrators.
  Network synthesis: This stage involves determining the size of the components used, subject to performance criteria such as the Grade of Service (GoS). The method used is known as "nonlinear optimisation", and involves determining the topology, required GoS, cost of transmission, etc., and using this information to calculate a routing plan and the size of the components.
  Network realization: This stage involves determining how to meet capacity requirements and ensure reliability within the network. The method used is known as "multicommodity flow optimisation", and involves determining all information relating to demand, costs and reliability, and then using this information to calculate an actual physical circuit plan.
  These steps are interrelated and are therefore performed iteratively and in parallel with one another. The planning process is highly complex: at each iteration the analyst must extend the planning horizon and generate plans for each of the layers outlined above.
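
As a toy illustration of the graph-theory side of topological design (our sketch, not the course's method; the sites and link costs are invented), the following finds the cheapest set of links that connects every site - a minimum spanning tree, computed with Kruskal's algorithm:

```python
def minimum_spanning_tree(nodes, links):
    """Kruskal's algorithm: take the cheapest links that do not form cycles.
    links is a list of (cost, site_a, site_b) tuples."""
    parent = {n: n for n in nodes}

    def find(n):                      # union-find with path compression
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n

    tree = []
    for cost, a, b in sorted(links):
        ra, rb = find(a), find(b)
        if ra != rb:                  # joins two separate components: no cycle
            parent[ra] = rb
            tree.append((cost, a, b))
    return tree

# Hypothetical sites and per-link transmission costs
sites = ["A", "B", "C", "D"]
costs = [(4, "A", "B"), (1, "B", "C"), (3, "A", "C"), (2, "C", "D"), (5, "B", "D")]
print(minimum_spanning_tree(sites, costs))
# [(1, 'B', 'C'), (2, 'C', 'D'), (3, 'A', 'C')]
```

A real topological design would also weigh switching costs, traffic and reliability, which is why the synthesis and realization stages follow.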

  24. During the process of network planning and design, it is necessary to estimate the expected traffic intensity and thus the traffic load that the network must support. If a network of a similar nature already exists, it may be possible to take traffic measurements on that network and use the data to calculate the exact traffic load. However, as is more likely, if there are no similar networks to be found, the network planner must use telecommunications forecasting methods to estimate the expected traffic intensity. The forecasting process involves the following steps:
  1. definition of the problem;
  2. data acquisition;
  3. choice of forecasting method;
  4. analysis/forecasting;
  5. documentation and analysis of results.
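
Step 4 can be as simple as fitting a trend line to measured busy-hour traffic and extrapolating; a minimal sketch, assuming a linear growth model and invented measurements:

```python
import numpy as np

# Hypothetical busy-hour traffic (erlangs) measured over six months
months  = np.array([1, 2, 3, 4, 5, 6])
erlangs = np.array([102.0, 108.0, 113.0, 121.0, 126.0, 133.0])

slope, intercept = np.polyfit(months, erlangs, 1)  # least-squares straight line
forecast_month = 12
print(f"Forecast for month {forecast_month}: "
      f"{slope * forecast_month + intercept:.1f} erlangs")
```

Real forecasting methods range from such trend extrapolation to econometric and time-series models; the choice is step 3 of the process above.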

  25. In telecommunication, and in particular in teletraffic engineering, the quality of voice service is specified by two measures: the grade of service (GoS) and the quality of service (QoS). Grade of service is the probability of a call in a circuit group being blocked or delayed for more than a specified interval, expressed as a vulgar fraction or decimal fraction. This is always with reference to the busy hour, when the traffic intensity is greatest. Grade of service may be viewed independently from the perspective of incoming versus outgoing calls, and is not necessarily equal in each direction or between different source-destination pairs. Quality of service, on the other hand, is the quality which a single circuit is designed or conditioned to provide, e.g. voice grade or program grade. Criteria for different qualities of service may include equalization for amplitude over a specified band of frequencies or, in the case of digital data transported via analogue circuits, equalization for phase as well. Criteria for mobile quality of service in cellular telephone circuits include the probability of abnormal termination of the call.

  26. When a user attempts to make a telephone call, the routing equipment handling the call has to determine whether to accept the call, reroute the call to alternative equipment, or reject the call entirely. Rejected calls occur as a result of heavy traffic loads (congestion) on the system and can result in the call being either delayed or lost. If a call is delayed, the user simply has to wait for the traffic to decrease; if a call is lost, however, it is removed from the system. The Grade of Service is one aspect of the quality a customer can expect to experience when making a telephone call. In a loss system, the Grade of Service is described as the proportion of calls that are lost due to congestion in the busy hour, measured using Equation 1:
  GoS = (number of lost calls) / (number of offered calls)   (Equation 1)
  For a delayed-call system, the Grade of Service is measured using three separate terms:
  - The mean delay t_d: the average time a user spends waiting for a connection if their call is delayed.
  - The mean delay t_0: the average time a user spends waiting for a connection, whether or not their call is delayed.
  - The probability that a user may be delayed longer than time t while waiting for a connection. Time t is chosen by the telecommunications service provider so that they can measure whether their services conform to a set Grade of Service.
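
Equation 1 measures GoS after the fact. For dimensioning a circuit group in advance, a standard teletraffic tool (not shown on the slide) is the Erlang B formula, which predicts the blocking probability of a loss system from the offered traffic A in erlangs and the number of circuits N; a sketch using its usual numerically stable recurrence:

```python
def erlang_b(traffic_erlangs: float, circuits: int) -> float:
    """Blocking probability B(N, A) via the recurrence
    B(0) = 1;  B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, circuits + 1):
        b = traffic_erlangs * b / (n + traffic_erlangs * b)
    return b

# e.g. 10 erlangs of busy-hour traffic offered to 15 circuits
print(f"GoS = {erlang_b(10.0, 15):.4f}")   # about 0.036, i.e. 3.6% of calls lost
```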

  27. Class of Service: Different telecommunications applications require different qualities of service. For example, if a telecommunications service provider decides to offer different qualities of voice connection, then a premium voice connection will require a better connection quality than an ordinary voice connection. Thus different qualities of service are appropriate, depending on the intended use. To help telecommunications service providers market their different services, each service is placed into a specific class. Each Class of Service determines the level of service required. To identify the Class of Service for a specific service, the network's switches and routers examine the call based on several factors, which can include:
  1. the type of service and priority due to precedence;
  2. the identity of the initiating party;
  3. the identity of the recipient party.

  28. Quality of Service: In broadband networks, the Quality of Service is measured using two criteria. The first criterion is the probability of packet losses or delays in already accepted calls. The second criterion refers to the probability that a new incoming call will be rejected. To avoid the former, broadband networks limit the number of active calls so that packets from established calls will not be lost due to new calls arriving. As in circuit-switched networks, the Grade of Service can be calculated for individual switches or for the whole network.
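
A minimal sketch of the call-admission idea just described (limiting active calls so that packets of established calls are not lost; the capacity figure is invented):

```python
class AdmissionController:
    """Reject new calls once the link is full, protecting accepted ones."""
    def __init__(self, capacity: int):
        self.capacity = capacity      # max simultaneous calls the link supports
        self.active = 0

    def request_call(self) -> bool:
        if self.active < self.capacity:
            self.active += 1          # accept: established calls keep their QoS
            return True
        return False                  # reject: the second QoS criterion above

    def end_call(self) -> None:
        self.active = max(0, self.active - 1)

link = AdmissionController(capacity=30)
accepted = sum(link.request_call() for _ in range(35))
print(f"accepted {accepted} of 35 call attempts")  # accepted 30 of 35
```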

  29. Request for Proposal (RFP): [Diagram: the client sends an RFP to vendors (bidders), who return proposals.] An RFP typically covers:
  1. Current network
  2. Current system
  3. Cabling requirement
  4. Required technology
  5. Maintenance
  6. Training
  7. Equipment specification
  8. Equipment quantities
  9. Operating services
  10. Routing map
  11. Uninterruptible power system
  12. Guarantee
  13. System upgrade and change
  14. Pre-install cost and post-install cost

  30. Network quality: The question is how to define "network quality". Candidate criteria:
  1. Coverage
  2. Voice quality
  3. Mobility
  4. Functionality and services
  5. Speed
  6. Phone number portability (keeping the same phone number)
  7. Easy operation and maintenance
  8. Delay
  9. Noise
  10. Cost
  11. Emergency call
  12. Fault tolerance (non-stop services)
  13. Easy upgrade
  14. Standardization

  31. Network management: Network management refers to the maintenance and administration of computer networks and telecommunications networks at the top level. Network management is the execution of the set of functions required for controlling, planning, allocating, deploying, coordinating, and monitoring the resources of a network, including performing functions such as initial network planning, frequency allocation, predetermined traffic routing to support load balancing, cryptographic key distribution authorization, configuration management, fault management, security management, performance management, bandwidth management, and accounting management. A large number of protocols exist to support network and network-device management. Common protocols include SNMP, CMIP, WBEM, the Common Information Model, Transaction Language 1, Java Management Extensions (JMX), and NETCONF. Note: network management does not include user terminal equipment.

  32. Future network management systems: [Diagram: as networks become more complex, more diversified, and higher-performance, the network management system must become more intelligent and autonomous - self-restoration, self-control, self-judgment, self-planning.]

  33. [Diagram: network management cycle - the present network feeds data collection (traffic, GoS, QoS), forecasting, and the master plan; these drive new builds, upgrades and reconfiguration, together with operation & maintenance.]

  34. Common Management Information Protocol (CMIP): CMIP was designed in competition with SNMP, and has far more features than SNMP. For example, SNMP defines only "set" actions to alter the state of the managed device, while CMIP allows the definition of any type of action. CMIP was to be a key part of the Telecommunications Management Network vision, and was to enable cross-organizational as well as cross-vendor network management. On the Internet, however, most TCP/IP devices support SNMP and not CMIP, because of the complexity and resource requirements of CMIP agents and management systems. CMIP is supported mainly by telecommunication devices. CMOT is the Common Management Information Protocol (CMIP) over TCP/IP, as defined in RFC 1189 (a revised version of RFC 1095). It defines a network management architecture using the International Organization for Standardization's (ISO) Common Management Information Services/Common Management Information Protocol (CMIS/CMIP) over TCP/IP. This architecture provides a way by which control and monitoring information can be exchanged between a manager and a remote network entity. Web-Based Enterprise Management (WBEM): WBEM is a set of systems-management technologies developed to unify the management of distributed computing environments. To understand the WBEM architecture, consider the components which lie between the operator trying to manage a device (configure it, turn it off and on, collect alarms, etc.) and the actual hardware and software of the device. Java Management Extensions (JMX): JMX technology provides the tools for building distributed, Web-based, modular and dynamic solutions for managing and monitoring devices, applications, and service-driven networks. By design, this standard is suitable for adapting legacy systems, implementing new management and monitoring solutions, and plugging into those of the future.

  35. Transaction Language 1 (TL1): TL1 is a widely used management protocol in telecommunications. It is a cross-vendor, cross-technology man-machine language, and is widely used to manage optical (SONET) and broadband access infrastructure in North America. It is defined in GR-831 by Bellcore (now Telcordia Technologies). TL1 was developed by Bellcore in 1984 as a standard man-machine language to manage network elements for the Regional Bell Operating Companies (RBOCs). It is based on the Z.300-series man-machine language standards. TL1 was designed as a standard protocol readable by machines as well as humans, to replace the diverse ASCII-based protocols used by different Network Element (NE) vendors. It is extensible to incorporate vendor-specific commands. Common Information Model (CIM): CIM is an open standard that defines how managed elements in an IT environment are represented as a common set of objects and relationships between them. This is intended to allow consistent management of these managed elements, independent of their manufacturer or provider. Another frequently used way to describe CIM is to say that it allows multiple parties to exchange management information about these managed elements. However, this falls short of expressing that CIM not only represents these managed elements and the management information, but also provides the means to actively control and manage these elements. By using a common model of information, management software can be written once and work with many implementations of the common model, without complex and costly conversion operations or loss of information. The CIM standard is defined and published by the Distributed Management Task Force (DMTF). A related standard is Web-Based Enterprise Management (WBEM, also defined by the DMTF), which defines a particular implementation of CIM, including protocols for discovering and accessing such CIM implementations.

  36. NETCONF: NETCONF is a network management protocol developed in the IETF by the Netconf working group and published as RFC 4741. The NETCONF protocol provides mechanisms to install, manipulate, and delete the configuration of network devices. It can also perform some monitoring functions. It uses an Extensible Markup Language (XML) based data encoding for the configuration data as well as for the protocol messages. Wireshark (Ethereal): In computing, Wireshark (formerly known as Ethereal) is a free-software protocol analyzer, or "packet sniffer" application, used for network troubleshooting, analysis, software and protocol development, and education. It has all of the standard features of a protocol analyzer. In June 2006 the project was renamed from Ethereal due to trademark issues. The functionality Wireshark provides is very similar to tcpdump, but it has a GUI front-end and many more information-sorting and filtering options. It allows the user to see all traffic being passed over the network (usually an Ethernet network, though support is being added for others) by putting the network card into promiscuous mode.
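
To make the XML encoding concrete, here is a hedged sketch that merely composes a NETCONF <get-config> request in the RFC 4741 base namespace; it performs no session setup or transport (a real deployment would carry this over SSH):

```python
import xml.etree.ElementTree as ET

# NETCONF base namespace from RFC 4741
NS = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_get_config(message_id: str) -> bytes:
    """Compose a <get-config> RPC asking for the running configuration."""
    rpc = ET.Element(f"{{{NS}}}rpc", {"message-id": message_id})
    get_config = ET.SubElement(rpc, f"{{{NS}}}get-config")
    source = ET.SubElement(get_config, f"{{{NS}}}source")
    ET.SubElement(source, f"{{{NS}}}running")
    return ET.tostring(rpc)

print(build_get_config("101").decode())
```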

  37. Network Management System (NMS): A Network Management System (NMS) is a combination of hardware and software used to monitor and administer a network. Individual network elements (NEs) in a network are managed by an element management system. An element management system (EMS) manages one or more of a specific type of network elements (NEs). An EMS allows the user to manage all the features of each NE individually, but not the communication between NEs - this is done by the network management system (NMS). NEs expose one or more management interfaces that the EMS uses to communicate with and to manage them. These management interfaces use a variety of protocols including SNMP, TL1, CLI, XML, and CORBA.

  38. General relationship of a Telecommunications Management Network (TMN) to telecommunication networks, services, and other TMNs: A TMN provides management functions for telecommunication networks and services, and offers communications between itself and the telecommunication networks, services and other TMNs. In this context a telecommunication network is assumed to consist of both digital and analogue telecommunications equipment and associated support equipment. A telecommunication service in this context consists of a range of capabilities provided to customers. The basic concept behind a TMN is to provide an organized architecture to achieve the interconnection between various types of Operations Systems (OSs) and/or telecommunications equipment for the exchange of management information, using an agreed architecture with standardized interfaces, including protocols and messages.

  39. TMN physical architecture of ITU-T M.3010 (legend):
  DCN: Data Communication Network
  NE: Network Element
  OS: Operations System
  QA: Q-Adapter
  QMD: Q-Mediation Device
  WS: Workstation
  X/F/Q: X/F/Q interfaces, such as Q.811, Q.812, X.290

  40. The Intelligent Network, typically referred to by its acronym IN, is a network architecture intended for both fixed and mobile telecom networks. It allows operators to differentiate themselves by providing value-added services in addition to the standard telecom services such as PSTN, ISDN and GSM services on mobile phones. In IN, the intelligence is provided by network nodes owned by telecom operators, as opposed to solutions based on intelligence in the telephone equipment or in Internet servers provided by any party. IN is based on the Signaling System #7 (SS7) protocol between telephone network switching centers and other network nodes owned by network operators. The upcoming IP Multimedia Subsystem (IMS) standards can be seen as a hybrid of intelligent-network services and Internet services for cellular multimedia communication. [Diagram: IN layering - service management above the Service Control Point (SCP), linked by the No. 7 signaling system to the Service Switching Point (SSP), which sits on the switching and transmission layers.]

  41. The main functional entities of the IN architecture, all tied to the SS7 architecture, are the following.
  Service Switching Function (SSF) or Service Switching Point (SSP): This is co-located with the telephone exchange itself, and acts as the trigger point for further services to be invoked during a call. The SSP implements the Basic Call State Machine (BCSM), a finite state machine that represents an abstract view of a call from beginning to end (off-hook, dialling, answer, no answer, busy, hang-up, etc.). As each state is traversed, the exchange encounters Detection Points (DPs) at which the SSP may send a query to the SCP and wait for further instructions on how to proceed. This query is usually called a trigger. Trigger criteria are defined by the operator and might include the subscriber's calling number or the dialled number. The SSF is responsible for handling calls requiring value-added services.
  Service Control Function (SCF) or Service Control Point (SCP): This is a separate set of platforms that receive queries from the SSP. The SCP contains the service logic which implements the behaviour desired by the operator, i.e., the services. During service-logic processing, additional data required to process the call may be obtained from the SDF. The logic on the SCP is created using the SCE.
  Service Data Function (SDF) or Service Data Point (SDP): This is a database that contains additional subscriber data, or other data required to process a call. For example, the subscriber's remaining prepaid credit may be an item stored in the SDF and queried in real time during the call. The SDF may be a separate platform, or is sometimes co-located with the SCP.
  Service Creation Environment (SCE): This is the development environment used to create the services present on the SCP. Although the standards permit any type of environment, it is fairly rare to see low-level languages like C used. Instead, proprietary graphical languages have been used to enable telecom engineers to create services directly. These are usually fourth-generation languages, in which the user works through a graphical interface to combine different functions into a service.
  Specialized Resource Function (SRF) or Intelligent Peripheral (IP): This is a node which can connect to both the SSP and the SCP and delivers additional special resources into the call, mostly related to voice data, for example playing voice announcements or collecting DTMF (dual-tone multi-frequency) tones from the user.

  42. INAP: Intelligent Network Application Protocol. The Service Switching Point (SSP) consists of the hardware switch in combination with the basic call-control software and the added functionality for the support of IN. Signal Transfer Point (STP): In a switching network that contains a separate signalling network based on SS7, the transactions between the SSP and SCP are carried via the STP. The Service Control Point (SCP) is a real-time database that stores customer records. When accessed by an enquiry from the SSP, the SCP executes service logic that has been customised for a particular application. An Adjunct Processor (AP) is a local SCP. It is tightly coupled to, and co-located with, a single switch. It can use a proprietary protocol for communication with the switch, or it can use the CS-1 standard, the Intelligent Network Application Protocol (INAP). The Service Management System (SMS) operates off-line from the voice-call network and enables an operator to create, update, and validate such items as number-translation and call-charge tables, and to download these, together with service-logic code, into the SCP and AP.

  43. The Service Creation Environment (SCE) is a high-level interface to the IN that allows TOs to interactively develop, debug, and provision new services using software-engineering tools whose output is compatible with the IN systems. An Intelligent Peripheral (IP) is a stand-alone processor that is tightly coupled to a switch to provide additional functionality to the SSP in the switch. Such additional functionality could include:
  - Access to signalling networks
  - Recorded announcements
  - Interactive Voice Response (IVR)
  - Dual-Tone Multi-Frequency (DTMF) translation
  - Speech recognition
  - FAX management

  44. [Diagram: an SNMP manager monitors the network via SNMP agents embedded in devices such as a LAN switch; each agent maintains a MIB (Management Information Base), and RMON (Remote Network Monitoring) extends SNMP to remote traffic monitoring.]

  45. [Diagram: management information sequence between SNMP manager and SNMP agent - the manager reads information from the agent's MIB (Management Information Base), controls information (add, change, delete), and the agent reports information when an event is generated.]
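
The sequence in this diagram can be made concrete with a toy sketch (ours; not a real SNMP implementation, and the object names are invented): the manager reads and writes entries in the agent's MIB, and the agent pushes event reports back:

```python
class Agent:
    """Toy agent: the MIB is modelled as a dict of object-id -> value."""
    def __init__(self, mib: dict, notify):
        self.mib = mib
        self.notify = notify          # manager's callback for event reports

    def get(self, oid):               # manager reads information
        return self.mib.get(oid)

    def set(self, oid, value):        # manager controls information (add, change)
        self.mib[oid] = value

    def delete(self, oid):            # manager controls information (delete)
        self.mib.pop(oid, None)

    def raise_event(self, event):     # agent reports a generated event
        self.notify(event)

agent = Agent({"sysName": "switch-1", "ifInOctets": 0},
              notify=lambda e: print("EVENT:", e))
print(agent.get("sysName"))             # read
agent.set("ifAdminStatus", "up")        # control
agent.raise_event("link up on port 3")  # event report
```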

  46. Traffic measurement by MAC address: a table with one row per MAC address (plus an "others" row), recording sending packets, sending bytes, receiving packets, and receiving bytes.

  47. Traffic measurement by IP address: the same four counters - sending packets, sending bytes, receiving packets, and receiving bytes - kept per IP address.
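
A minimal sketch of the bookkeeping behind slides 46 and 47 (the trace records are invented; the same code serves MAC or IP addresses):

```python
from collections import defaultdict

def tally(packets):
    """Per-address counters: sent/received packets and bytes."""
    stats = defaultdict(lambda: {"tx_pkts": 0, "tx_bytes": 0,
                                 "rx_pkts": 0, "rx_bytes": 0})
    for src, dst, length in packets:
        stats[src]["tx_pkts"] += 1
        stats[src]["tx_bytes"] += length
        stats[dst]["rx_pkts"] += 1
        stats[dst]["rx_bytes"] += length
    return stats

# Hypothetical (source, destination, byte-length) records
trace = [("10.0.0.1", "10.0.0.2", 1500), ("10.0.0.2", "10.0.0.1", 64),
         ("10.0.0.1", "10.0.0.3", 512)]
for addr, counters in tally(trace).items():
    print(addr, counters)
```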

  48. Mean Time Between Failures (MTBF): MTBF is the mean (average) time between failures of a system, and is often associated with the "useful life" of the device, i.e. excluding 'infant mortality' and 'end of life'. Calculations of MTBF assume that a system is "renewed", i.e. fixed, after each failure and then returned to service immediately. The average time between failing and being returned to service is termed the mean down time (MDT) or mean time to repair (MTTR).

  49. Mean time to recovery (MTTR): MTTR is the average time that a device will take to recover from a non-terminal failure. Examples of such devices range from self-resetting fuses (where the MTTR would be very short, probably seconds) up to whole systems which have to be replaced. The MTTR would usually be part of a maintenance contract, where the user would pay more for a system whose MTTR was 24 hours than for one of, say, 7 days. This does not mean the supplier is guaranteeing to have the system up and running again within 24 hours (or 7 days) of being notified of the failure; it means the average repair time will tend towards 24 hours (or 7 days). A more useful maintenance-contract measure is the maximum time to recovery, which can be easily measured and for which the supplier can be held accountable.
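
MTBF and MTTR together determine the steady-state availability of a repairable system, a standard relation not stated on the slides; with illustrative numbers (one failure a year, one-day repairs):

```latex
\text{Availability} = \frac{\text{MTBF}}{\text{MTBF} + \text{MTTR}}
= \frac{8760\ \text{h}}{8760\ \text{h} + 24\ \text{h}} \approx 99.73\%
```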

  50. 4. OSI: Open Systems Interconnection (OSI). The term "OSI" came into use on 12 October 1979.
