
Unifying Communications with Advanced Telephony Computing Architecture

Explore Bicom Systems' innovative telephony computing architecture that maximizes redundancy and ensures seamless failover operation for unified communications. Discover how the system handles various scenarios and safeguards against network failures.


Presentation Transcript


  1. Welcome to Bicom Systems: we unify communications. WWW.BICOMSYSTEMS.COM

  2. Advanced Telephony Computing Architecture, 1st Year
  MIRRORING: Los Angeles (Controller) is mirrored to New York (Mirror). Hardware at each site: 4 nodes x 2 vSWITCH = 8 nodes.
  Node roles (Los Angeles / New York):
  Node 1: Primary Controller / Secondary Controller
  Node 2: Live Host 500cc / Live Host 500cc
  Node 3: Live Host 500cc / Hot Spare
  Node 4: Hot Spare / Hot Spare
  Node 5: Hot Spare / Hot Spare
  Node 6: Cold Spare / Cold Spare
  Node 7: Storage Cluster / Storage Cluster
  Node 8: Storage Cluster / Storage Cluster
  Los Angeles: Normal Operation Capacity circa 1,000 concurrent calls and 10,000 extensions; Failover Operation Capacity 15,000 extensions.
  New York: Normal Operation Capacity circa 500 concurrent calls and 5,000 extensions; Failover Operation Capacity 15,000 extensions.
  LEGEND
  Primary Controller: monitors all nodes and ensures that services are working.
  Secondary Controller: monitors the Primary Controller and mirrors it to itself. Should the Primary Controller fail, or should the central office become unavailable, it assumes the Primary Controller role.
  Live Host: runs the working services.
  Hot Spare: assumes the services of an unavailable or failed Live Host.
  Cold Spare: switched off; available as extra capacity or to become a new Hot Spare.
  Storage Cluster: network-redundant storage from which all services run.
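The capacity figures follow directly from the node roles: each Live Host carries 500 concurrent calls and roughly 5,000 extensions, and a surviving site's failover capacity is the two sites' extension counts combined. A minimal sketch of that arithmetic, assuming only the per-host figures stated on the slide (the names are illustrative, not Bicom Systems' code):

```python
# Per-Live-Host figures taken from the slide; everything else is illustrative.
CALLS_PER_LIVE_HOST = 500
EXTENSIONS_PER_LIVE_HOST = 5000

los_angeles = ["Primary Controller", "Live Host", "Live Host",
               "Hot Spare", "Hot Spare", "Cold Spare",
               "Storage Cluster", "Storage Cluster"]
new_york = ["Secondary Controller", "Live Host", "Hot Spare",
            "Hot Spare", "Hot Spare", "Cold Spare",
            "Storage Cluster", "Storage Cluster"]

def normal_capacity(site):
    live_hosts = site.count("Live Host")
    return (live_hosts * CALLS_PER_LIVE_HOST,
            live_hosts * EXTENSIONS_PER_LIVE_HOST)

print(normal_capacity(los_angeles))  # (1000, 10000), as on the slide
print(normal_capacity(new_york))     # (500, 5000), as on the slide

# In failover, one site must carry both sites' extensions:
print(10000 + 5000)                  # 15000, the failover capacity
```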

  3. Scenario 1: Primary Controller role

  4. Scenario 1: The Primary Controller monitors all live nodes: Live Hosts, Hot Spares, and Storage Nodes. The Primary Controller also ensures that data is duplicated from the Live Hosts to the Storage Clusters.
  [Diagram: the 1st Year node layout, with vSWITCH 1-2 in Los Angeles and vSWITCH 3-4 in New York.]
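A hedged sketch of this role, not Bicom Systems' actual code: the Primary Controller polls every live node and, for Live Hosts, verifies that their data has been duplicated to the Storage Cluster. The node names, poll interval, and both helper functions are assumptions for illustration:

```python
import random
import time

NODES = {
    "la-node-2": "Live Host", "la-node-3": "Live Host",
    "la-node-4": "Hot Spare", "la-node-5": "Hot Spare",
    "la-node-7": "Storage Cluster", "la-node-8": "Storage Cluster",
}

def check_alive(node):
    # Stand-in for a real health probe (ping, service heartbeat, ...).
    return random.random() > 0.05

def ensure_replicated(live_host):
    # Stand-in for checking the host's data reached the Storage Cluster.
    print(f"replication of {live_host} to Storage Cluster verified")

def monitor_once():
    failed = []
    for node, role in NODES.items():
        if not check_alive(node):
            failed.append(node)
        elif role == "Live Host":
            ensure_replicated(node)
    return failed

for _ in range(3):                 # a real controller would loop forever
    for node in monitor_once():
        print(f"ALERT: {node} unavailable, failover required")
    time.sleep(5)                  # poll interval is an assumption
```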

  5. Scenario 2: Secondary Controller role

  6. Scenario 2: The Secondary Controller monitors only the Primary Controller for availability and mirrors it to itself.
  [Diagram: the 1st Year node layout; the Secondary Controller in New York watches the Primary Controller in Los Angeles.]
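A minimal sketch of this behaviour (and of the promotion it leads to in Scenario 6): the Secondary Controller watches only the Primary, mirrors its state, and assumes the Primary role after repeated missed heartbeats. The three-miss threshold and all helper names are assumptions:

```python
def mirror_primary_state():
    print("mirrored Primary Controller state to Secondary")

def promote_to_primary():
    print("Primary unreachable: Secondary assumes Primary Controller role")
    return "primary"

def run_secondary(heartbeats, threshold=3):
    """heartbeats: iterable of booleans, True = the Primary answered."""
    missed = 0
    for beat in heartbeats:
        if beat:
            missed = 0
            mirror_primary_state()
        else:
            missed += 1
            if missed >= threshold:
                return promote_to_primary()
    return "secondary"

# Two good heartbeats, then the Primary goes silent (Scenario 6):
print(run_secondary([True, True, False, False, False]))  # -> primary
```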

  7. Scenario 3: A Los Angeles Live Host becomes unavailable

  8. Scenario 3: A Live Host in Los Angeles encounters a physical failure and becomes unavailable.
  [Diagram: the 1st Year node layout with one Los Angeles Live Host marked as failed.]

  9. Scenario 3: The Primary Controller instructs the first available Los Angeles Hot Spare to assume the service.
  [Diagram: Los Angeles Node 4 changes from Hot Spare to Live Host 500cc, taking over the failed host's service.]
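A sketch of this failover step under stated assumptions: the controller picks the first available Hot Spare in the failed host's own location and hands it the failed Live Host's services, with the data restored from the Storage Cluster rather than from the dead node. The `Node` model and `fail_over()` helper are illustrative, not the product's API:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    site: str      # "LA" or "NY"
    role: str      # "Live Host", "Hot Spare", ...
    online: bool = True

def fail_over(failed, nodes):
    failed.online = False
    # First available Hot Spare in the same site, as the slide describes.
    spare = next((n for n in nodes if n.site == failed.site
                  and n.role == "Hot Spare" and n.online), None)
    if spare is None:
        return None                  # would escalate to a Cold Spare
    print(f"restoring {failed.name} data from Storage Cluster to {spare.name}")
    spare.role = "Live Host"         # the spare assumes the service
    return spare

nodes = [Node("la-node-2", "LA", "Live Host"),
         Node("la-node-4", "LA", "Hot Spare"),
         Node("la-node-5", "LA", "Hot Spare")]
fail_over(nodes[0], nodes)           # la-node-4 assumes the service
```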

  10. Scenario 4: A New York Live Host becomes unavailable

  11. Scenario 4: A Live Host in New York encounters a physical failure and becomes unavailable.
  [Diagram: the 1st Year node layout with the New York Live Host marked as failed.]

  12. Scenario 4: The Primary Controller instructs the first available New York Hot Spare to assume the service.
  [Diagram: New York Node 3 changes from Hot Spare to Live Host 500cc, taking over the failed host's service.]

  13. Scenario 5: New York becomes totally unavailable

  14. Scenario 5: New York becomes totally unavailable, due to a network failure, an act of terror, a natural disaster, or another cause of total loss of the location.
  [Diagram: the 1st Year node layout with the entire New York site marked as unavailable.]

  15. Scenario 5: The Primary Controller instructs the first available Hot Spare in Los Angeles to assume the services which were running in New York.
  [Diagram: only the Los Angeles site remains; Node 4 changes from Hot Spare to Live Host 500cc, carrying the New York service.]
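A sketch of this site-level case (Scenarios 5 and 7), under the same assumptions as the Scenario 3 sketch above: on total loss of a location, the surviving controller moves each lost Live Host's services onto a Hot Spare at the surviving site. All names and the data model are illustrative:

```python
nodes = [
    {"name": "la-2", "site": "LA", "role": "Live Host"},
    {"name": "la-3", "site": "LA", "role": "Live Host"},
    {"name": "la-4", "site": "LA", "role": "Hot Spare"},
    {"name": "la-5", "site": "LA", "role": "Hot Spare"},
    {"name": "ny-2", "site": "NY", "role": "Live Host"},
]

def fail_over_site(lost_site):
    lost = [n for n in nodes
            if n["site"] == lost_site and n["role"] == "Live Host"]
    spares = iter(n for n in nodes
                  if n["site"] != lost_site and n["role"] == "Hot Spare")
    for service in lost:
        spare = next(spares, None)
        if spare is None:
            print("no Hot Spares left: power on a Cold Spare for capacity")
            break
        print(f"{spare['name']} assumes the services of {service['name']}")
        spare["role"] = "Live Host"

fail_over_site("NY")   # Scenario 5: la-4 takes over ny-2's services
```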

  16. Scenario 6: Primary Controller failure

  17. Scenario 6: The Primary Controller encounters a physical failure and becomes unavailable.
  [Diagram: the 1st Year node layout with the Los Angeles Primary Controller marked as failed.]

  18. Scenario 6: The Secondary Controller assumes the Primary Controller role. All other Los Angeles nodes continue uninterrupted.
  [Diagram: the New York Secondary Controller now acts as the Primary Controller; the remaining nodes are unchanged.]

  19. Scenario 7: Los Angeles becomes totally unavailable

  20. Scenario 7: Los Angeles becomes totally unavailable, due to a network failure, an act of terror, a natural disaster, or another cause of total loss of the location.
  [Diagram: the 1st Year node layout with the entire Los Angeles site marked as unavailable.]

  21. Scenario 7: The Secondary Controller will instruct the available Hot Spares in New York to assume the services which were running in Los Angeles.
  [Diagram: only the New York site remains; Nodes 3 and 4 change from Hot Spare to Live Host 500cc, carrying the Los Angeles services.]

  22. Advanced Telephony Computing Architecture, 2nd Year
  MIRRORING: Los Angeles (Controller) is mirrored to New York (Mirror). Hardware at each site: 4 nodes x 4 vSWITCH = 16 nodes.
  Node roles (Los Angeles / New York):
  Node 1: Primary Controller / Secondary Controller
  Nodes 2-3: Live Host 500cc / Live Host 500cc
  Nodes 4-5: Live Host 500cc / Hot Spare
  Nodes 6-9: Hot Spare / Hot Spare
  Nodes 10-12: Cold Spare / Cold Spare
  Nodes 13-16: Storage Cluster / Storage Cluster
  Los Angeles: Normal Operation Capacity circa 2,000 concurrent calls and 20,000 extensions; Failover Operation Capacity 30,000 extensions.
  New York: Normal Operation Capacity circa 1,000 concurrent calls and 10,000 extensions; Failover Operation Capacity 30,000 extensions.
  Each site adds two 24-port InfiniBand SAN switches; Switch 2 is a backup for Switch 1 in case of failure.

  23. Advanced Telephony Computing Architecture, 2nd Year: LEGEND (identical to the 1st Year legend on slide 2).

  24. Advanced Telephony Computing Architecture, 3rd Year
  MIRRORING: Los Angeles (Controller) is mirrored to New York (Mirror). Hardware at each site: 4 nodes x 6 vSWITCH = 24 nodes.
  Node roles (Los Angeles / New York):
  Node 1: Primary Controller / Secondary Controller
  Nodes 2-5: Live Host 500cc / Live Host 500cc
  Nodes 6-7: Live Host 500cc / Hot Spare
  Nodes 8-13: Hot Spare / Hot Spare
  Nodes 14-16: Cold Spare / Cold Spare
  Nodes 17-24: Storage Cluster / Storage Cluster
  Los Angeles: Normal Operation Capacity circa 3,000 concurrent calls and 30,000 extensions; Failover Operation Capacity 50,000 extensions.
  New York: Normal Operation Capacity circa 2,000 concurrent calls and 20,000 extensions; Failover Operation Capacity 50,000 extensions.
  Each site adds two 24-port InfiniBand SAN switches; Switch 2 is a backup for Switch 1 in case of failure.

  25. Advanced Telephony Computing Architecture, 3rd Year: LEGEND (identical to the 1st Year legend on slide 2).

  26. Failover Mechanism
  Primary Controller node failure: If the Primary Controller node alone, or the complete vSWITCH containing the Controller node, goes down, tasks such as monitoring, replication, and the failover mechanism are taken over and executed instantly by the Secondary Controller node, which is basically a live backup of the main Controller node.
  Live Host failure: If a Live Host node goes down or is unavailable on the network, all data of that Live Host is copied from the Storage Cluster node to an available Hot Spare node, and services continue to operate on that node.

  27. Hardware Specification
  Computer Node 1: PRIMARY CONTROLLER & SECONDARY CONTROLLER
  Interconnect: Dual Gigabit Ethernet (Intel 82576 Dual-Port)
  CPU: 2 x Intel Xeon E5504 Quad-Core 2.00GHz, 4MB Cache
  RAM: 6GB (6 x 1GB) Kingston DDR3-1066MHz ECC REG (KVR1066D3S8R7S/1G)
  Management: Integrated IPMI with KVM over LAN
  LP PCIe x16 2.0: no item selected
  Hot-Swap Drive 1: 60GB Solid State Disk / WD5000AAKS SATA II 7200RPM 3.5" HDD
  Extra Nodes (Live Hosts, Hot Spares, Cold Spares): same configuration as the Controller nodes.
  Extra Nodes (Storage Cluster): same base configuration, but with Hot-Swap Drives 1-3 as RAID 5 3TB storage.

  28. SIP Proxy: Registration
  SIP client registration for all users (Residential, Business, Hosted PBXware, and Wholesale) happens through the SIP Proxy, which authenticates the user by username, password, or IP address against the Client Database in order to determine where the user belongs, then forwards the SIP registration to the appropriate VPS. The exception is the Wholesale type of user, which does not register to a VPS but only to the Client Database.
  [Diagram: registration requests from Residential, Business, Hosted PBXware, and Wholesale SIP clients pass through the SIP Proxy, which checks each client against the Client Database and forwards the registrations to VPS 1-2 (Residential), VPS 3-4 (Business), and VPS 5 (Hosted PBXware).]
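A hedged sketch of this registration dispatch: the SIP Proxy authenticates each client against the Client Database and forwards the registration to that client's VPS, while Wholesale clients are recorded in the Client Database only. The table contents and the VPS mapping are invented for illustration:

```python
CLIENT_DATABASE = {
    # username: (password, user type, home VPS)
    "alice":    ("secret1", "Residential",    "VPS 1"),
    "acme":     ("secret2", "Business",       "VPS 3"),
    "pbx7":     ("secret3", "Hosted PBXware", "VPS 5"),
    "carrier9": ("secret4", "Wholesale",      None),
}

def register(username, password):
    record = CLIENT_DATABASE.get(username)
    if record is None or record[0] != password:
        return "403 Forbidden"
    _, user_type, home_vps = record
    if user_type == "Wholesale":
        return "200 OK (registered in Client Database only)"
    return f"200 OK (registration forwarded to {home_vps})"

print(register("alice", "secret1"))      # forwarded to VPS 1
print(register("carrier9", "secret4"))   # Client Database only
```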

  29. SIP Proxy: Outgoing/Incoming Calls for Residential & Business Users
  For outgoing and incoming calls from Residential and Business users, the SIP Proxy first sends those users to their appropriate VPS in order to check their Enhanced Services permissions. The diagram shows the example for the Residential type of user.
  Outgoing call: 1. The SIP client places an outgoing call. 2. The SIP Proxy sends the call to the appropriate VPS to acquire the specific SIP client data. 3. The VPS sends the call back to the SIP Proxy with the SIP client data. 4. The SIP Proxy selects the appropriate VoIP/PSTN trunk for the outgoing call.
  Incoming call: 1. The incoming call first comes to the SIP Proxy. 2. The SIP Proxy checks the incoming DID and sends the call to the VPS where the DID-related user is located. 3. The VPS sends the call back to the SIP Proxy. 4. The SIP Proxy sends the incoming call to the SIP client.
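The four numbered steps of each flow, as a runnable sketch. The VPS round-trip that returns the client data and Enhanced Services permissions is stubbed out; every name here is an assumption, not the actual SIP Proxy/PBXware API:

```python
def vps_client_data(client):
    # Steps 2-3 (outgoing): the client's VPS returns its data to the proxy.
    return {"client": client, "enhanced_services_ok": True}

def outgoing_call(client, dialled):
    data = vps_client_data(client)                      # steps 2 and 3
    if not data["enhanced_services_ok"]:
        return "403 Forbidden"
    return f"{client} -> {dialled} via VoIP/PSTN trunk" # step 4

DID_OWNERS = {"+15559876543": ("VPS 2", "alice")}       # invented example data

def incoming_call(did):
    vps, client = DID_OWNERS[did]                       # step 2: DID lookup
    # Step 3: the VPS hands the call back; step 4: the proxy rings the client.
    return f"call for {did} routed via {vps} to {client}"

print(outgoing_call("alice", "+15551234567"))
print(incoming_call("+15559876543"))
```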

  30. SIP Proxy: Outgoing/Incoming Calls for Hosted PBXware Users
  For outgoing and incoming calls from Hosted PBXware users, the SIP Proxy first sends those users to their appropriate VPS (VPS 5, Hosted PBXware) in order to check their Enhanced Services permissions. The call flow matches slide 29: the outgoing call is sent to VPS 5 for the SIP client data and then out through the appropriate VoIP/PSTN trunk; the incoming call is matched on its DID, sent to VPS 5, returned to the SIP Proxy, and delivered to the SIP client.

  31. SIP Proxy: Outgoing/Incoming Calls for Wholesale Users
  For Wholesale users, the SIP Proxy sends the call straight through the appropriate trunk according to the client data, which involves the settings in the LCR, Routing, and Rating Engine.
  Outgoing call: 1. The SIP client places an outgoing call. 2-3. The SIP Proxy uses the LCR, Routing, and Rating Engine to determine which trunk should be used. 4. The SIP Proxy selects the appropriate VoIP/PSTN trunk for the outgoing call.
  Incoming call: 1. The incoming call first comes to the SIP Proxy. 2. The SIP Proxy checks the incoming DID and sends the incoming call directly to the SIP client's IP address.
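A sketch of the Wholesale routing decision: a least-cost engine picks a trunk by longest-prefix match on the dialled number, then by the cheapest rate among equal prefixes. The rate table is invented for illustration; real LCR engines also weigh time-of-day, quality, and capacity rules:

```python
RATES = {
    # (trunk, destination prefix): price per minute (invented figures)
    ("trunk-A", "1"):  0.010,
    ("trunk-B", "1"):  0.012,
    ("trunk-A", "44"): 0.015,
    ("trunk-B", "44"): 0.011,
}

def least_cost_trunk(dialled):
    number = dialled.lstrip("+")
    candidates = [(-len(prefix), price, trunk)
                  for (trunk, prefix), price in RATES.items()
                  if number.startswith(prefix)]
    if not candidates:
        raise ValueError(f"no route for {dialled}")
    candidates.sort()            # longest prefix first, then lowest price
    return candidates[0][2]

print(least_cost_trunk("+14165550100"))   # trunk-A: prefix 1 at 0.010
print(least_cost_trunk("+442071234567"))  # trunk-B: prefix 44 at 0.011
```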
