Author Archives: derekpsneed
What is DSLAM?
- 30 Dec, 2015
- Posted by derekpsneed
- 10 Comment(s)
DSLAM Definition
DSLAM Definition: A DSLAM, or Digital Subscriber Line Access Multiplexer, is a network distribution device that aggregates individual subscriber lines onto a high-capacity uplink. These high-capacity uplinks, either ATM or Gigabit Ethernet, connect subscribers to their Internet service providers (ISPs). DSLAM units are typically located in telephone exchanges or distribution points. They enable high-speed DSL transmission over legacy copper lines. Using advanced multiplexing techniques, these units salvage the utility of the millions of copper lines originally deployed for telephone service in the 1950s. DSLAMs also come with many advanced traffic management features to separate and prioritize voice, video, and data traffic.
How does a DSLAM connect to customer premises equipment?
DSLAMs are the intermediary units that link end-user equipment to ISP servers located in a central office (CO). ISPs provide end users with customer premises equipment (CPE) such as routers or modems. These units forward a user's digital data from a computer or client device to a local cabinet located near the customer's premises. From there, the data can travel through a switch, a router, and finally a DSLAM unit.
The DSLAM aggregates individual subscriber lines and transfers data from all of its subscribers onto a high-capacity uplink that connects to a carrier's central office over fiber or twisted-pair cabling. DSLAMs allow Internet service providers to build hybrid networks such as fiber-to-the-curb (FTTC) networks. By using fiber for backhaul traffic and twisted copper for the last mile of a deployment, ISPs can build cost-effective networks that offer high-speed transmission rates.
Once data arrives at the carrier's central office, it is routed to a broadband remote access server (B-RAS). These units are responsible for authenticating subscriber credentials, validating user access policies, and routing data to its destination.
DSLAM Classifications
DSLAMs can be classified by the type of xDSL technology they support, by form factor, by architecture, and by deployment location.
By xDSL Type: Single-Service DSLAMs vs. Multiservice DSLAMs
DSLAMs can be classified as either single-service or multiservice units. Single-service DSLAMs support only one xDSL technology. Most single-service DSLAM units are backwards compatible with previous versions of the xDSL type they support. An ADSL2+ DSLAM, for example, will be backwards compatible with ADSL2 and ADSL, the two previous generations of ADSL2+.
Multiservice DSLAMs can support several xDSL technologies, which allows ISPs or carriers to address the different broadband needs of their customers. For example, a DSLAM chassis that supports both VDSL and ADSL line cards gives service providers the advantage of delivering high-speed broadband to customers at short distances (using VDSL) and at long distances (using ADSL). To learn more about the difference between VDSL and ADSL, click here. Multiservice DSLAMs also allow ISPs to address scalability, port density, and redundant architecture requirements for large-scale deployments.
By Deployment Type
DSLAMs can also be classified by deployment location. A DSLAM designed for outside plant (OSP) deployment, such as the VX-M208S, has a smaller subscriber capacity and a smaller form factor than a DSLAM designed for central office (CO) deployment. OSP DSLAMs are commonly deployed in multi-dwelling units such as apartment complexes or university campuses. These units reside closer to a subscriber's location and terminate subscriber local loops to achieve faster data transmission rates. "Hardened" OSP DSLAMs provide protection against the elements.
CO DSLAMs are located in distribution points and can support 10,000 subscriber lines or more. CO DSLAMs typically reside in distributed shelf architectures; these shelf units can host a number of DSLAMs from different vendors and Internet service providers. CO DSLAMs need to fulfill stringent standards due to the large number of subscribers they support. Many CO DSLAMs feature a chassis-type form factor with hot-swappable line cards, which allows ISPs to customize them into multiservice units.
By Form Factor
DSLAMs range in size and interface options. Single-service DSLAMs, typically deployed in OSP environments, offer a smaller footprint than CO DSLAMs; these standalone OSP units are sometimes referred to as pizza boxes. CO DSLAMs are typically chassis DSLAMs with swappable line cards and uplink modules. Service providers can oftentimes customize these larger DSLAMs with line cards to support multiple xDSL services, allowing them to fulfill different bandwidth demands and serve subscribers located at varying distances.
By Architecture
DSLAMs can also be classified by architecture. Centralized models reserve a single central uplink card to perform complex traffic processing; line cards in centralized models simply hand off traffic to the uplink card and therefore perform a more basic function than line cards in distributed models. Centralized architectures are designed to support a high number of subscribers.
DSLAMs with distributed architectures assign complex traffic processing to smart line cards built around programmable network processors such as line-card traffic processors (LTPs). The uplink card can be an Ethernet switch, when the unit is used with Ethernet backhaul, or a full-featured network processor.
What is DSL?
DSL provides subscribers with high-speed internet access over the same legacy copper lines originally deployed for telephone service in the 1950s. DSL relies on a DSLAM's multiplexing capabilities to carry the traffic of many subscriber lines over a single uplink. Multiservice DSLAMs can support several DSL technologies, but there are currently no DSLAMs that support all xDSL types. DSL can allocate bandwidth evenly (symmetrically) or unevenly (asymmetrically) between downstream and upstream traffic. One of the major downsides of DSL is that speeds drop as signals attenuate the farther a subscriber is from a telephone exchange or distribution point. But DSL continues to be a popular deployment option due to its low deployment cost and the option to pair it with faster cabling such as fiber.
ADSL - Asymmetric Digital Subscriber Line
ADSL prioritizes downstream traffic and allocates only a small portion of bandwidth to upstream traffic. The original ADSL standard could achieve downstream rates of 8 Mbps and upstream rates of 1 Mbps. ADSL2 and ADSL2+ are improved versions of ADSL. ADSL2 can achieve downstream rates of 12 Mbps and upstream rates of approximately 1.3 Mbps. ADSL2+ can achieve even faster downstream rates of around 24 Mbps with upstream rates comparable to ADSL2. The ADSL standard is normally used for distances of up to 18,000 ft.
G.lite
ADSL deployments originally required professional installers to fit splitters, or microfilters, to separate the DSL data signal from POTS (plain old telephone service). G.lite is an ADSL substandard that uses different modulation profiles and does not explicitly require professionally installed splitters; with G.lite, splitters are installed locally inside a subscriber's home. G.lite can achieve 1.5 Mbps downstream and 512 Kbps upstream rates (at 10,000 ft). The asymmetric standard can reach distances of up to 18,000 ft.
VDSL - Very High Bit-Rate Digital Subscriber Line
VDSL is optimal for shorter distances; signals quickly attenuate beyond 6,600 ft. VDSL can achieve downstream rates of 55 Mbps and upstream rates of 1.5-2.3 Mbps. VDSL2 can achieve downstream and upstream rates of up to 200 Mbps within the first 1,000 ft. Although VDSL is commonly deployed asymmetrically, the standard supports both symmetric and asymmetric configurations.
SDSL - Symmetric Digital Subscriber Line
Unlike ADSL, which unevenly or asymmetrically allocates bandwidth between downstream and upstream traffic, SDSL allocates bandwidth evenly, or symmetrically. With a reach of up to 9,800 ft, SDSL typically yields around 1.5 Mbps, depending on the distance to the customer's equipment. SDSL is ideal for small businesses with more intensive bandwidth use and offers an 'always on' connection.
ISDN - Integrated Services Digital Network
ISDN was the first protocol to integrate data and voice over copper cables and was traditionally used to carry voice for landline communication. The standard supports data transfer rates of 64 Kbps per channel. B-ISDN is an uncommon version of ISDN that utilizes broadband transmission and can achieve rates of 1.5 Mbps over fiber optic cables. Additional ISDN substandards include basic rate interface (BRI), primary rate interface (PRI), and narrowband ISDN (N-ISDN).
IDSL - ISDN Digital Subscriber Line
IDSL is an all-digital transmission technology derived from ISDN. The xDSL standard can achieve 128 Kbps over twisted-pair copper. Even though IDSL is a derivative of ISDN, it allows for always-on connections and offers a more cost-effective option that eliminates setup delays and per-minute fees. Transmission occurs over the provider's data network rather than the PSTN (public switched telephone network).
HDSL - High-Bit-Rate Digital Subscriber Line
Even though the HDSL standard was first introduced in 1994, HDSL is still widely used by telephone companies and carriers. Performance is comparable to a T1 line, though HDSL is more cost-effective. HDSL can travel up to 12,000 ft and deliver symmetric rates of up to 784 Kbps per copper pair.
Other High-Speed Alternatives To Connect to The Internet
Besides DSL, high-speed broadband can be delivered over coaxial cable, fiber, or wireless connections. The following sections review the benefits and drawbacks of each connectivity method.
Coaxial Cable
Cable originally emerged as a means to deliver television programming to mountainous and remote areas. Subscription-based programming did not flourish until the deregulation of the industry in 1984, which spurred carriers to invest "more than $15 billion on the wiring of America," according to CalCable. With the widespread adoption of the internet, audiences began to consume content online using popular streaming sites such as Hulu and Netflix.
Carriers were nonetheless able to salvage their coaxial lines using DOCSIS (Data Over Cable Service Interface Specification) standards. DOCSIS enables carriers to transmit high-bandwidth data over the existing coaxial wiring deployed for cable television. DOCSIS standards have evolved significantly and now offer data speeds that are oftentimes faster than DSL.
DOCSIS 3.0 can achieve downstream speeds of up to 152 Mbps and upstream rates of up to 108 Mbps. The newest iteration, DOCSIS 3.1, promises to deliver downstream rates of up to 10 Gbps and upstream rates of up to 1 Gbps in laboratory environments. Real-world rates tend to fluctuate dramatically, but improvements like these will continue to help carriers provide faster services for their customers.
A cable modem termination system (CMTS) in a coaxial network essentially performs the same function as a DSLAM unit in a DSL network. In the same way that a DSLAM feeds subscriber lines to the Internet service provider (ISP), a cable modem termination system (CMTS) feeds the data of hundreds of cable modems and connects users to their ISPs.
Cable relies on a shared line architecture and user speeds can drastically decrease during peak usage. However, cable will typically deliver faster rates than DSL. DSL speeds attenuate the farther away a customer is from a distribution point. With coaxial cable connections, however, the distance from a distribution point does not influence speed.
Many buildings already have coaxial cabling in place, and like DSL, coax is relatively inexpensive to connect.
Fiber
Fiber connections offer longer distances and faster transmission speeds than coaxial cable, wireless, or DSL. Fiber transmits data as pulses of light at speeds of 1 Gbps and beyond, allowing it to achieve higher frequencies and data capacities. In comparison to copper-based cabling such as DSL and coaxial lines, fiber operates in a near noise-free networking environment with very little interference or energy loss.
Fiber is also more costly to deploy than DSL or coaxial cabling. Newly constructed buildings typically include twisted-pair copper in their infrastructure, making it simple for ISPs to provide connectivity using DSL. Fiber, by contrast, is oftentimes deployed after a building's construction and represents an additional investment. Fiber is also an intrusive medium to deploy, in extreme cases damaging a subscriber's property. High deployment costs push carriers to deploy fiber only in areas with high subscriber density, such as metropolitan areas. To alleviate the high cost of fiber, carriers will oftentimes build hybrid deployments using fiber and twisted-pair copper to create FTTC (fiber-to-the-curb) networks.
WISP (Wireless Internet Service Providers)
Wireless Internet is delivered by radio towers that transmit data in the 900 MHz, 2.4 GHz, 4.9, 5.2, 5.4, 5.7, and 5.8 GHz bands. Wireless Internet service providers (WISPs) are carriers responsible for providing Internet connectivity to client devices such as cell phones and wireless hotspots over these radio links.
Wireless Internet services are the least common type of deployment. Unfortunately, wireless coverage can be spotty and unreliable. Frequent travelers, for example, may notice that performance varies by location during a train commute. Several factors can influence the performance of a wireless connection, including altitude and the physical barriers of a building.
Why DSL Is Still In Use
Twisted copper pair is a legacy cabling medium that deteriorates with time and can become a liability without proper maintenance. Verizon has been accused of allowing its DSL copper lines to deteriorate so as to pressure residents into adopting fiber. But broadband providers will continue to rely on DSL technology due to its low start-up costs.
Twisted copper pairs can also be combined with fiber to build FTTC (fiber-to-the-curb) deployments using DSLAMs. The most expensive portion of a fiber deployment occurs in the local subscriber loop, where customer premises are located. To avoid some of the high deployment costs of fiber, carriers will oftentimes build hybrid deployments using copper in the local subscriber loop and fiber in the remaining portion of the network. Running fiber all the way to the customer, by contrast, is known as FTTP (fiber to the premises) or FTTH (fiber to the home).
Constant improvements in DSL equipment and DSLAM chipsets allow service providers to take advantage of the millions of copper telephone lines that have already been deployed. Newer standards such as G.fast can achieve rates approaching 1 Gbps over very short loops. Improvements such as these will continue to prolong the lifespan of copper pairs.
DSLAM Use Cases
Higher-capacity central office (CO) DSLAMs are used in distribution points to continue forwarding packets to their destination. Smaller single-card DSLAMs are also used at customer premises in multi-dwelling units (MDUs) such as campuses, hotels, businesses, and enterprise network environments.
DSLAM Deployment Locations:
- libraries
- campuses
- schools
- apartments
- hotels
- rural areas
DSLAMs optimize high-speed transmission by terminating local subscriber loops and transferring traffic into a high capacity uplink. In other words, connecting a series of modems to a DSLAM allows a higher-quality link such as fiber to take over to connect customers to the Internet.
DSLAMs in Rural Areas
Broadband carriers find rural and remote areas unappealing due to low subscriber density. Areas with low subscriber density offer lower returns on investment than metropolitan areas that boast higher subscriber density per square mile. The Connect America Fund incentivizes broadband service providers to bring high-speed connectivity to rural areas. According to the Federal Communications Commission's (FCC) Connect America Fund (CAF), "approximately 19 million Americans still lack access" to high-speed broadband. DSL is often the preferred method in these sparsely populated areas due to its low startup costs.
Internet service providers (ISPs) are able to provide high-speed broadband to a low volume of subscribers using single card DSLAMs such as the VX-M208S or the VX-M2024S. These units are ideal for smaller scale deployments.
ATM DSLAMs and IP DSLAMs
DSLAMs rely on ATM or IP packet-switching technology to transport data. The following sections explain how each method transports information.
Cell Relay
ATM DSLAMs use the ATM protocol to relay data over permanent virtual circuits (PVCs). These PVCs must be configured in advance to establish a permanent point-to-point path to a destination through a virtual circuit.
The ATM protocol splits data into fixed 53-byte cells. These cells carry very little routing information because PVC connections are point to point. ATM networks can transport cells at rates of up to 155 Mbps and 622 Mbps.
The ATM protocol establishes a virtual circuit connection from a subscriber to a DSLAM, and then to a B-RAS. The B-RAS then terminates the PPP session and routes traffic to the core network.
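To make the cell format concrete, here is a minimal sketch (assuming AAL5 encapsulation, the adaptation layer commonly used for DSL data traffic) of how many 53-byte cells a single packet occupies and how much overhead that adds:

```python
def atm_cells_for_packet(payload_bytes: int) -> tuple[int, float]:
    """Estimate ATM cell count and overhead for one packet (AAL5 assumption).

    AAL5 appends an 8-byte trailer and pads the result up to a multiple of
    48 bytes; each 48-byte chunk rides in a 53-byte cell (5-byte header).
    """
    AAL5_TRAILER = 8
    CELL_PAYLOAD = 48
    CELL_SIZE = 53

    padded = payload_bytes + AAL5_TRAILER
    cells = -(-padded // CELL_PAYLOAD)        # ceiling division
    wire_bytes = cells * CELL_SIZE
    overhead = 1 - payload_bytes / wire_bytes
    return cells, overhead

# Example: a 1,500-byte packet needs 32 cells, roughly 11.6% overhead on the wire.
print(atm_cells_for_packet(1500))
```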
As broadband began to carry more complex data traffic, ATM DSLAMs began to incorporate rudimentary ATM switching fabrics, switched virtual circuits (SVCs), and a variety of other traffic management features.
Frame Relay
Broadband now includes many value-added services such as VoIP (voice-over-IP), IPTV (Internet protocol television), VoD (video on demand) and HDTV (high-definition TV). With new concerns for bandwidth, scalability and QoS requirements, IP DSLAMs have managed to consolidate network functions and simplify network deployments. Many IP DSLAMs now have routing capabilities, reducing the number of equipment needed when compared to ATM DSLAM deployments.
IP DSLAMs are a cost-effective alternative to ATM DSLAMs. Many service providers opt to build their networks using Ethernet for their backhaul uplinks. Ethernet, such as Metro Ethernet, can be used for both carrier backbone and access network segments.
Ethernet DSLAMs, or IP DSLAMs, transmit IP data in variable-length frames rather than fixed-length ATM cells. A frame carries more addressing and error-handling information than an ATM cell.
Unlike ATM DSLAMs, which rely on pre-provisioned virtual circuits to relay data to its destination, IP DSLAMs switch frames hop by hop, so forwarding paths can shift as the network changes. Frame-based networks can, however, also be configured with permanent paths so that traffic follows a fixed route to its destination, much as ATM cells do over a PVC.
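For a concrete sense of that trade-off (illustrative figures only, assuming an untagged Ethernet frame and ignoring the preamble and inter-frame gap), compare the per-packet framing overhead of a variable-length frame with the ATM cell figure sketched earlier:

```python
def ethernet_overhead(payload_bytes: int) -> float:
    """Framing overhead of one untagged Ethernet frame: 14-byte header + 4-byte FCS."""
    HEADER_AND_FCS = 18
    return HEADER_AND_FCS / (payload_bytes + HEADER_AND_FCS)

# A 1,500-byte payload carries ~1.2% framing overhead, versus ~11.6% for ATM cells,
# because the larger, variable-length frame amortizes its richer header.
print(f"{ethernet_overhead(1500):.1%}")
```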
The growing complexity of broadband traffic, including triple-play services such as VoIP, IPTV, and HDTV, has made IP DSLAMs and IP-based architectures popular due to their cost-efficiency and simplified network architecture.
IP-Based Architectures
Carrier Ethernet, such as Metro Ethernet, can be used for backbone and access network segments. Ethernet standards are constantly being expanded and improved. In fact, the Ethernet Alliance has recently announced new standards for the backhaul of networks:
- 25 Gbps Ethernet PMD(s) for Single Mode Fiber Study Group
- 50 Gbps Ethernet Over a Single Lane Study Group
- Next Generation 100 Gbps and 200 Gbps Ethernet Study Group
Be sure to visit the Ethernet Alliance website to learn more about these new standards.
With constantly evolving standards, Ethernet has become an integral component that keeps IP-based networks cost-effective.
Buying a DSLAM
There are several features that DSLAM buyers will need to take into consideration when weighing different DSLAM options. The main differentiating features are subscriber capacity, throughput, packet loss, latency and jitter.
Subscriber Capacity
DSLAMs provide a range of subscriber capacities, dictated by three main metrics: line density, subscriber capacity, and session capacity. Throughput measurements, meanwhile, account for a variety of network environment factors that influence a unit's sustainable throughput, including packet sizes, session volumes, and features such as IGMP snooping, QoS, and AAA (depending on the capabilities of the DSLAM).
DSLAMs support anywhere from a single subscriber to tens of thousands, depending on the type of DSLAM and the functionality needed. CO DSLAMs can support many thousands of subscribers, while OSP DSLAMs can serve as few as one.
Throughput
Throughput allows carriers to differentiate their service packages from their competitors' and is one of the most important factors carriers consider when deciding which DSLAM to purchase. Though throughput is influenced by a variety of factors, the performance of a unit depends primarily on the type of xDSL technology used and the location of the customer's premises. For example, a subscriber that is closer to their ISP's central office will experience faster rates using VDSL2 than a subscriber that lives farther away using the same equipment and xDSL technology. Robust QoS features further improve throughput consistency in real-world settings.
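As a rough planning illustration (the reach limits and nominal rates below are ballpark assumptions taken from the figures quoted in this article, and the linear fall-off is a deliberate simplification rather than a vendor performance model), expected downstream rates by technology and loop length might be sketched like this:

```python
# Illustrative assumptions only: (max useful loop in feet, nominal downstream Mbps
# near the DSLAM), loosely based on the distance figures cited in this article.
XDSL_PROFILES = {
    "VDSL2":  (6_600, 200),
    "ADSL2+": (18_000, 24),
    "ADSL":   (18_000, 8),
}

def estimate_downstream_mbps(technology: str, loop_feet: int) -> float:
    """Crude linear fall-off from the nominal rate to zero at the reach limit."""
    max_feet, nominal_mbps = XDSL_PROFILES[technology]
    if loop_feet >= max_feet:
        return 0.0
    return nominal_mbps * (1 - loop_feet / max_feet)

# A subscriber 3,000 ft from the cabinet: VDSL2 still wins, but the gap narrows.
for tech in ("VDSL2", "ADSL2+"):
    print(tech, round(estimate_downstream_mbps(tech, 3_000), 1), "Mbps")
```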
Packet Loss, Latency and Jitter
Broadband has grown in complexity and now supports traffic such as VoIP, IPTV, and VoD (often known as triple-play services). These types of traffic are more sensitive to delay and require more advanced traffic management features to reduce packet loss, latency, and jitter, all of which influence the performance of a DSLAM. Features such as QoS, authentication via DHCP relay, and IGMP snooping alleviate packet loss. ISPs and network installers can also prioritize voice, video, and data traffic to optimize performance; since voice is the most sensitive to delay, incoming and outgoing voice traffic can take priority over data traffic.
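For a sense of how jitter is typically quantified, the sketch below implements the interarrival jitter estimator defined in RFC 3550 (the RTP specification used by most VoIP systems); the packet transit times are made up purely for illustration:

```python
def rfc3550_jitter(transit_times_ms: list[float]) -> float:
    """Smoothed interarrival jitter per RFC 3550, section 6.4.1.

    transit_times_ms: per-packet (arrival time - send time) values in milliseconds.
    """
    jitter = 0.0
    for prev, cur in zip(transit_times_ms, transit_times_ms[1:]):
        d = abs(cur - prev)              # change in one-way transit time
        jitter += (d - jitter) / 16.0    # exponential smoothing with gain 1/16
    return jitter

# Hypothetical transit times (ms) for a VoIP stream; a stable link keeps this low.
print(round(rfc3550_jitter([40.0, 41.2, 39.8, 45.0, 40.5]), 2), "ms")
```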
Determining the Best DSLAM Units For Your Network
As mentioned before, network installers will need to assess the number of subscribers they are seeking to serve and the distance ranges they are seeking to cover. DSLAM units come in a variety of sizes with different subscriber capacities. There are a myriad of DSLAM options built for large-scale deployments that can support several thousand subscribers, but there are also single-card DSLAMs that support a handful of subscribers.
DSL performance depends on the distance from a subscriber's location to the central office (CO) and on the type of DSL service a DSLAM supports. Installers will most likely choose VDSL/2 services for distances of up to 6,600 ft and ADSL2/2+ for distances greater than 6,600 ft.
As broadband data has grown more complex, DSLAMs have had to account for value-added triple play services placing greater importance on traffic management features.
Common DSLAM features include:
- Traffic management
- QoS
- Authentication via DHCP Relay
- IGMP Snooping
To demonstrate the range of DSLAM options available, we’ve selected a few examples of different DSLAM equipment types from our product portfolio. These units support varying subscriber capacity and DSL service types.
24 Port VDSL2 IP DSLAM
Models such as the VX-MD4024 24 Port VDSL2 IP DSLAM are also suitable for small scale deployments. As a network grows in size, additional units from different vendors can be added to a network. These DSLAMs are ideal for multi-dwelling units or external cabinets.
48 Port ADSL2+ IP DSLAM
Devices such as the VX-1000HDx provide longer distances and are designed for access networks. These units are heftier, measuring around 1.5U.
Chassis-type DSLAM
Chassis-type DSLAMs feature hot-swappable line cards. Service providers can customize these DSLAMs to build multi-service units. Units such as these can feature Gigabit Ethernet (GbE) trunk interfaces and SFP ports for fiber connectivity.
DSLAM Maps
Knowing the approximate location of your nearest DSLAM will help you more accurately gauge the expected speed of your Internet service. But DSLAM maps are very rarely found online. If you’re a potential DSL subscriber, and are searching for a DSLAM map to determine potential speeds, you can contact your Internet service provider. They should be able to give you approximate speeds based on your location.
Configuring a DSLAM
With value-added services such as IPTV (Internet protocol television), VoIP (voice over IP), and HDTV (high-definition TV), configuring a DSLAM requires setting traffic prioritization for voice, video, and data. Users will need to configure virtual local area networks (VLANs) and QoS on their switches and reserve a set amount of bandwidth for voice, the traffic type most sensitive to latency. Uplinks will also need to be connected to the DSLAM's Ethernet or fiber port.
Users will need to prioritize traffic in the following order (a configuration sketch follows the list):
- First tier: voice
- Second tier: video
- Third tier: data
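As a rough sketch of that ordering (the VLAN IDs and 802.1p priority values below are hypothetical; real values depend on the ISP's network design and the DSLAM's own configuration interface), the three traffic classes could be mapped like this:

```python
# Hypothetical mapping of triple-play traffic classes to VLANs and 802.1p priorities.
TRAFFIC_CLASSES = [
    # (class, example VLAN ID, 802.1p priority; higher priority is serviced first)
    ("voice", 100, 6),   # first tier: most sensitive to latency
    ("video", 200, 4),   # second tier: IPTV / VoD
    ("data",  300, 0),   # third tier: best effort
]

def scheduling_order(classes):
    """Return the class names sorted by 802.1p priority, highest first."""
    return [name for name, _vlan, priority in sorted(classes, key=lambda c: -c[2])]

print(scheduling_order(TRAFFIC_CLASSES))   # ['voice', 'video', 'data']
```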
To preview how to set up a DSLAM, click on the video below:
To view our full DSLAM product portfolio, click here.
Sources:
Understanding DSLAM and BRAS Access Devices by Agilent
https://www.quora.com/DSL/Whats-the-difference-between-a-DSLAM-and-an-IP-DSLAM
http://www.dslreports.com/forum/r25653574-Need-help-understanding-IP-DSLAM-vs-DSLAM
http://www.dslreports.com/faq/6995
https://www.youtube.com/watch?v=3VAmcN8VmIU
http://www.informit.com/articles/article.aspx?p=21317
What Net Neutrality Really Taught Us About Political Power and Technology
- 11 Feb, 2015
- Posted by derekpsneed
- 0 Comment(s)
Federal Communications Commission (FCC) Chairman Tom Wheeler has finally proposed new rules that firmly protect net neutrality by reclassifying broadband as a telecommunications service. The FCC has published a new commission document upholding that the internet be preserved as a place of innovation and competition.
The release statement indicates that the new rules will be written so that they can withstand future legal challenges.
Net neutrality rules first became a topic of debate in May 2014, when the FCC chairman proposed a revision to the net neutrality rules that would allow ISPs to charge internet companies higher fees for using their faster network lanes. The issue became a popular topic of discourse after John Oliver tackled the subject on his show Last Week Tonight and joked that having a former cable lobbyist run the FCC was the equivalent of having a dingo babysit children. John Oliver's political skit moved nearly 22,000 supporters of net neutrality to take to the FCC's forum, crashing its servers in the process.
In addition, internet companies organized an 'Internet Slowdown Day', a virtual protest in which some of the largest websites, including Netflix and other web giants, displayed a perpetual loading icon to galvanize internet users against revising net neutrality rules. The FCC is still set to vote on the issue on February 26.
Though some companies, like Google and Facebook, were notably absent from the online protest, the FCC forums accumulated nearly 4 million comments. The Daily Beast calls the Chairman's decision to protect the internet a "political miracle" and credits the victory to the high volume of FCC comments, the President's public plea to the FCC to protect net neutrality rules, and the pressure that technology companies put on the FCC to keep the internet free and open.
3 Unexpected Components That Influenced Tom Wheeler
Tom Wheeler's sudden support for net neutrality shows the internet to be a dynamic political tool, especially considering that telcos and cable companies invested about $75 million to lobby against net neutrality.
One of the most unexpected factors that popularized the issue was John Oliver's viral video, which now boasts nearly 8 million views. The HBO comedian proved that viral videos have a utility beyond sharing cat videos: they can have political ramifications. As mentioned in our previous net neutrality post, marketing expert and entrepreneur Seth Godin explains that "The most viral ideas ask for nothing more than a click from your mouse, a share, more attention gained". Attention is a scarce resource in this age of information overload, and virality is yet another political tool in the arsenal of protesters.
In addition, internet companies positioned themselves as a powerful organizing force that throttled the efforts of Telco companies. The virtual protest offered the opportunity for convenient participation and yielded instant, highly visible feedback, proving that a virtual protest can be just as effective as a public demonstration. The 'Internet Slowdown Day' protest merely required internet companies to install a widget on their websites and visitors to take a few moments to submit their opinions on the FCC's website.
But perhaps the success of this online campaign is specific to the issue of net neutrality because internet companies and their visitors shared a mutual interest. Internet users in support of net neutrality were able to ride on the political momentum that these website moguls had created.
This makes us wonder whether this success can be replicated for a humanitarian cause not directly tied to the interest of internet companies.
Do you think virtual online protests are a superficial veneer to activism? Let us know what you think. Start a discussion below!
Don’t forget to follow us on Facebook, Twitter, and Linkedin for the latest news on net neutrality and computer networking events.
Preparing for the 2nd Wave of 802.11ac
- 16 Dec, 2014
- Posted by derekpsneed
- 1 Comment(s)
The second phase of 802.11ac equipment, also referred to as Wave 2, is set to hit the shelves early next year and will undoubtedly deliver a performance boost that forces networks to invest in upgrades.
According to Tech Blogger Gina Narcisi, “Gigabit Ethernet edge switches can handle the flows produced by Wave 1 802.11ac, but Wave 2 will require new switches to handle faster traffic.” In addition, the second generation of the 802.11ac will create a market need for 2.5 and 5Gb Ethernet switches.
With a physical link rate of nearly 7 Gbps, even Gigabit Ethernet will no longer be able to support a network with several connected 802.11ac devices. This will force eager adopters to upgrade to 10 Gb Ethernet or wait until the MBASE-T and NBASE-T alliances standardize 2.5 and 5 Gb Ethernet specifications.
Wave 2 802.11ac equipment will prepare networks for the projected 212 billion wireless devices expected to connect to the internet by 2020.
Networks that want to benefit from a smaller performance boost without an additional investment in infrastructure upgrades might want to consider catching the first wave of 802.11ac.
Beamforming and MU-MIMO
To reach optimal speeds, the second wave of 802.11ac access points will depend heavily on explicit beamforming, rather than an omnidirectional broadcast signal, to support Multi-User MIMO (multiple-input, multiple-output). To determine which direction to broadcast a signal, explicit beamforming requires the transmitter and receiver to communicate with each other to establish a strong signal path. Beamforming directs a transmission toward a specific location to strengthen the reception of the signal.
MU-MIMO relies on Spatial-Division Multiple Access (SDMA) “to allow multiple transmitters to send separate signals and multiple receivers to receive separate signals simultaneously in the same band”. This new ability to multitask within the 8 available spatial streams will make it possible to “serve multiple clients at the same time and support an increased number of VoIP calls, video streams, and other Jitter and Latency-sensitive applications.”
802.11ac can act as an overlay to an 802.11n network and will not necessarily be replacing 802.11n networks.
Will you be catching the first wave or second wave of 802.11ac? Leave us a comment below.
Follow us on Facebook and Twitter for the latest Telco and computer networking news.
How are IEEE Standards Created?
- 11 Dec, 2014
- Posted by derekpsneed
- 0 Comment(s)
Have you ever wondered exactly what goes behind the Institute of Electrical and Electronics Engineers’ (IEEE’s) standardization process and why it’s important?
Standards set the protocols that help keep bleeding-edge technology within similar parameters so that final products remain interoperable with equipment manufactured by other brands.
The 6 Steps That Lead to Finalization
IEEE defines standards as “published documents that establish specifications and procedures designed to maximize the reliability of the materials, products, methods, and/or services people use every day.”
These standards force companies to adhere to protocols that “maximize product functionality and compatibility, facilitate interoperability and support consumer safety and public health.”
1. Idea
The initial idea for a standard must originate from within the IEEE Societies and Committees, Standards Coordinating Committees, or a Corporate Advisory Group. An idea can also emerge from outside these groups; IEEE specifies that "In this case, those seeking a sponsorship would approach the governing body for standards within an IEEE Society to see if they would be willing to sponsor the work."
The sponsor is responsible for overseeing the objective development of the standard until it reaches the end of its lifespan.
2. Project Approval Process
After an idea receives support from a small community of supporters, the group can move forward and formally request approval from the New Standards Committee in the form of a Project Authorization Request (PAR). The form is a “small, structured, and highly detailed document that essentially states the reason why the project exists and what it intends to do.”
Even non-members can participate in the formation of the PAR.
3. Develop Draft Standards
Once approved by the New Standards Committee, sponsors have up to four years to finish defining the protocols. Working groups rely on the PAR to begin outlining a draft of the standard. The outline is then divided into sections and assigned to individual members so that they can work on them separately.
The draft will be edited until it becomes a cohesive body of work and receives its final major review by the Mandatory Editorial Coordination (MEC).
4. Sponsor Ballot
After the working group produces a draft, sponsors need to form a ballot group to move the project forward. Only IEEE-SA members can participate in the balloting process, though non-members can pay a fee to take part.
Members have an opportunity to make appeals before the IEEE-SA Standards Board and IEEE ensures that all appeals and comments will be reviewed.
Once sponsors have gathered a list of interested parties, IEEE requires that at least 75% of the ballot group return their ballots for the ballot to be valid. For the ballot to pass, at least 75% of the yes/no votes cast must be affirmative.
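As a small worked example of those two thresholds (a simplification that counts abstentions toward the return rate but not toward approval):

```python
def ballot_result(group_size: int, approve: int, disapprove: int, abstain: int):
    """Check the 75% return-rate and 75% approval thresholds described above."""
    returned = approve + disapprove + abstain
    is_valid = returned / group_size >= 0.75             # return-rate requirement
    decided = approve + disapprove
    passes = decided > 0 and approve / decided >= 0.75    # approval requirement
    return is_valid, is_valid and passes

# Hypothetical ballot group of 100: 70 approve, 10 disapprove, 5 abstain.
print(ballot_result(100, 70, 10, 5))   # (True, True): 85% returned, 87.5% approval
```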
5. IEEE-SA Standards Board Approval Process
If the ballot passes, the document then undergoes IEEE’s Standards Board Approval Process. The board relies on the recommendation of the Standards Review Committee (RevCom) to either approve or deny the document.
If successful, the standard is finally edited by an IEEE-SA editor for the final publishing phase.
6. Publish
Once published, a standard remains valid for ten years.
Revisions can be made to the document, but it will have to undergo the ratification process again. If no revisions are made, at the end of its ten-year lifespan IEEE will review the standard to determine whether it needs to be withdrawn and archived. If the standard remains relevant, it will repeat the standardization process once more.
Even though the standardization process is effective at defining the protocols that produce reliable networking products, its length has recently emerged as a problem for manufacturers that are ready to provide much-needed networking solutions, such as 2.5 and 5 Gb Ethernet, to keep up with faster 802.11 standards.
Do you think the IEEE standardization process has become too cumbersome? Let us know what you think!
Leave us a comment below and follow us on Facebook and Twitter for the latest news in the Telco and networking industry.
Who is Responsible for Future-Proofing Networks?
- 04 Dec, 2014
- Posted by derekpsneed
- 0 Comment(s)
Cisco predicts there will be 50 billion devices connected to the internet by 2020. In the meantime, the Institute of Electrical and Electronics Engineers (IEEE) prepares for the bandwidth overload by projecting future needs and specifying the standards that will optimize core networks and Local Area Networks (LANs). This community of scientists and engineers works behind the scenes to future-proof networks.
In the Telco industry, the IEEE is best known as the organization that oversees the development of the 802 standards and keeps Local Area Networks (LANs) and Metropolitan Area Networks (MANs) running smoothly, but the non-profit organization oversees projects in other industries as well.
The accompanying infographic describes the 802 standards in more detail.

Infographic Sources: http://searchmobilecomputing.techtarget.com/definition/IEEE-802-Wireless-Standards-Fast-Reference http://en.wikipedia.org/wiki/IEEE_802
The Origins of IEEE
In 1963, the American Institute of Electrical Engineers (AIEE) and the Institute of Radio Engineers (IRE) merged to form IEEE.
The organization originally supported the nascent profession of electrical and radio engineers but has since grown into an organization that oversees the development of micro- and nanotechnologies, ultrasonics, bioengineering, robotics, electronic materials, and many other science fields.
The IEEE spectrum blog features the latest news on technological breakthroughs in the less visible fields of robotics, semiconductors, and new developments in energy production among other topics.
A Fledgling Electrical and Radio Industry
In 1884, the AIEE held its first meeting in Philadelphia. Most notable among the attendees were Thomas Edison and Alexander Graham Bell.
The organization set the groundwork for the commercialization of electricity and supported the growth of the electrical engineering profession.
Radio was another budding technology that needed a structured professional organization similar to the AIEE.
In its early days, the branch of technology consisted of vacuum tubes, electrical amplification systems and transistors.
By 1957, the IRE had attracted more members than the AIEE, and when the two organizations merged in 1963, the combined body had a total of 150,000 members. Today, the IEEE boasts a roster of 400,000 members worldwide.
IEEE 802 Standards Committee
The IEEE 802 LAN/MAN Standards Committee (LMSC) oversees specifications in the Data Link and Physical Layer of the OSI model.
Most relevant for the Telco industry is the 802 standard.
In order for a proposed project to move forward, IEEE members hold a plenary in near-democratic fashion and members can cast votes on projects they deem appropriate to pursue. It’s not uncommon to see a standard take 2 years to finalize.
Why are IEEE’s Standards Important?
Standards streamline innovation. IEEE is responsible for ensuring that your networking equipment is interoperable with other brands and that new technology is backwards compatible. For example, IEEE's backwards compatibility requirements ensure that equipment with a new wireless specification, such as a router with 802.11ac capabilities, remains compatible with 802.11n equipment.
But won’t structuring innovation hinder it? Actually quite the opposite.
Innovation can lead to proprietary technology.
And though proprietary technology promotes brand loyalty and rewards manufacturers for their investment in R&D, unbridled proprietary technology can set up impractical compatibility barriers. Imagine not being able to connect a Dell Printer with an Apple Desktop. The Internet of Things (IoT) aptly demonstrates this issue.
One of the obstacles holding back the mass adoption of IoT devices is the rapid but isolated invention of smart devices and an influx of proprietary platforms.
IEEE has even begun a project aiming to organize the chaos behind the IoT.
In addition to maintaining network protocols, the IEEE also pursues projects that promote the “scientific and educational…advancement of the theory and practice of electrical, electronics, communications and computer engineering, as well as computer science, the allied branches of engineering and the related arts and sciences.”
Did you find this article useful?
Don’t forget to follow us on Facebook and Twitter for more tech resources.
The New Wi-Fi Standard That Will Make the 802.11ac Obsolete
- 04 Dec, 2014
- Posted by derekpsneed
- 0 Comment(s)
The first wave of 802.11ac routers currently available on the market are based on earlier drafts of the 802.11ac standard and will no longer be the fastest standard on the market. The second wave of 802.11ac devices are based on the final ratified standard and are set to include new features that better optimize wireless networks.
802.11ac standard: Wave 1 vs. Wave 2
802.11ac Wave 2 (see https://www.networkcomputing.com/wireless-infrastructure/80211ac-wi-fi-part-2-wave-1-and-wave-2-products/d/d-id/1234650) is set to include MU-MIMO capabilities among other advances that will boost the standard's maximum rate from 3.47 Gbps in the first generation to 6.93 Gbps in the final iteration.
MU-MIMO, or multi-user multiple-input/multiple-output, "enables [routers] to send multiple spatial streams to multiple clients simultaneously". With 160 MHz channel bonding (as opposed to 80 MHz bonding in Wave 1) and backwards compatibility with previous standards, the new wave boasts a performance boost over the first generation of 802.11ac routers. With a physical link rate of nearly 7 Gbps, users hoping to upgrade to 802.11ac should consider waiting to catch the second wave.
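To see where those headline figures come from, here is a back-of-the-envelope sketch using the standard 802.11ac (VHT) data-rate calculation; the defaults assume the best case the specification allows (256-QAM, rate-5/6 coding, short guard interval):

```python
def vht_phy_rate_mbps(spatial_streams: int, data_subcarriers: int,
                      bits_per_symbol: int = 8, coding_rate: float = 5 / 6,
                      symbol_us: float = 3.6) -> float:
    """Peak 802.11ac PHY rate: streams x subcarriers x bits x coding / symbol time (us)."""
    return spatial_streams * data_subcarriers * bits_per_symbol * coding_rate / symbol_us

# An 80 MHz channel carries 234 data subcarriers; a 160 MHz channel carries 468.
print(round(vht_phy_rate_mbps(8, 234)))   # ~3467 Mbps: the 3.47 Gbps Wave 1 ceiling
print(round(vht_phy_rate_mbps(8, 468)))   # ~6933 Mbps: the 6.93 Gbps Wave 2 ceiling
```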
Market Trends
Dell’Oro Group has published a report that notes that the “Wireless LAN (WLAN) market grew eight percent in the third quarter 2014 versus the year-ago period” and that “Enterprise-class 802.11ac-based radio access points grew a robust 40 percent versus the second quarter 2014.”
The report forecasts that the WLAN market will be stimulated with the release of 802.11ac Wave 2 equipment along with government funding in the US meant to support wireless connectivity in schools and libraries.
802.11ax
But even the second generation of the 802.11ac standard cannot compare with the wireless speeds of a still newer specification. The 802.11ax standard is set to "not just increase the overall speed of a network" but to "quadruple wireless speeds of individual clients." Huawei's research and development labs have reported successfully reaching wireless connection speeds of 10 Gbps utilizing the 5 GHz frequency band.
The standard is set to be finalized in 2019, but manufacturers can be expected to release products based on the pre-standard as early as 2016.
While wireless connections keep getting faster, the options for internet users to connect to the internet keep expanding. In the near future, users can be expected to connect to the internet using LED lights, or gain wireless access to the internet by connecting to a micro-satellite orbiting the Earth.
Do you find this article useful? Don’t forget to follow us on Facebook and Twitter for the latest news on networking and telecommunications.
How Li-Fi Technology Will Make Wi-Fi Nearly Obsolete
- 24 Nov, 2014
- Posted by derekpsneed
- 3 Comment(s)
In the near future, you may find yourself looking for the nearest LED light to connect to the internet.
Researcher Harald Haas from the University of Edinburgh has made it possible to transform LED light into an electrical signal that can provide high-speed data streaming.
This means that pedestrians walking down a street illuminated with LED lights could very well be able to surf the internet without interruptions. Public infrastructure like hospitals, police stations, and libraries that utilize LED as their primary light source could easily be able to provide internet users with Li-Fi connections.
Researcher Harald Haas, who works from the Alexander Graham Bell Building, states that "All the components, all the mechanisms exist already…You just have to put them together and make them work". If his optimism proves true, you might soon be searching for the nearest Li-Fi hotspot to connect to the internet.
During the IEEE Photonics Conference this past October, members of the consortium were able to "create a system that could both send and receive data at aggregate rates of 100 megabits per second. When transmitting in one direction only, they reached a rate of 155 Mb/s. [They have also] created a better LED, which provides a data rate close to 4 gigabits per second operating on just 5 milliwatts of optical output power and using high-bandwidth photodiodes at the receiver."
In his 2011 TED Talk, the researcher highlights that we "have more than five billion [mobile devices]… [and] transmit more than 600 terabytes of data every month". Since radio waves are scarce and expensive, the researcher suggests migrating to the visible light spectrum for wireless communication. The visible light spectrum is vastly wider than the radio spectrum and would allow Li-Fi to offer faster connections than Wi-Fi.
These advancements, however, do not mean that Li-Fi will replace Wi-Fi technology. The new technology comes with limitations: physical barriers that block a direct light source can interfere with a connection, and wireless communication in the dark is practically impossible.
Li-Fi exhibits the same downside as 802.11ad wireless connections: both are fast but limited to a physical range of a few meters. Therefore, Li-Fi will work alongside Wi-Fi networks in the same way that 802.11ad connections will work in conjunction with 802.11n connections.
IEEE Spectrum reports that "Haas…expects LEDs to evolve past just being light sources, much the same way the cell phone evolved from a communications device to a mobile computer." "In 25 years, every light bulb in your house will have the processing power of your cell phone today," he says. Haas envisions that illumination will be just one of the many features that LEDs offer.
It seems that the future of the internet will bring a myriad of ways to connect to the internet. Elon Musk announced just last week that his latest endeavor includes launching satellites that will provide “unfettered internet access to the masses”.
Would you utilize an LED internet connection?
Let us know what you think! Leave us a comment below.
IEEE to Pursue Enterprise Access Base-T and 25GBASE-T
- 20 Nov, 2014
- Posted by derekpsneed
- 1 Comment(s)
IEEE recently held a plenary this month in San Antonio, Texas that resulted in the addition of two more study groups: Enterprise Access BASE-T and 25GBASE-T for 25GE server connections. Of the 131 members who attended the plenary, 97 voted to move forward with the project, only 1 voted against it, and 23 abstained.
The Ethernet Alliance's press release discloses that the 25GBASE-T standard logically follows efforts to standardize 25GbE Ethernet for data centers. The 25GbE effort had briefly stalled when IEEE perceived little interest in the project, and its backers formed their own consortium in the interim.
John D'Ambrosia, Chairman of the Ethernet Alliance, comments that Enterprise Access BASE-T "is a more recent example of Ethernet breaking from its 'one size fits all' mentality and moving to a more pragmatic approach. It also nicely illustrates how a solution will be driven by the economics of the application it is targeting." He refers to the fast adoption of the 802.11ac standard ratified in December 2013.
IEEE forecasts that within 5 years, 95% of 802.11 devices will support the 802.11ac standard, which is quickly replacing 802.11n devices.
Manufacturers even released 802.11ac equipment based on the pre-standard. The market expects a second wave of 802.11ac to arrive in early 2015, and the new devices "will offer significant performance improvements for campus deployments."
Enterprise campus access connections comprise access links that run 1000BASE-T over Cat5 and higher UTP cable. However, as John D'Ambrosia explains, "The problem with this plan…is that 10GBASE-T is not specified to operate over CAT5e cabling, and the reach on CAT6 cabling is dependent on the cabling itself, as well as how it was deployed".
Enterprise Access BASE-T will cover speeds between 1 Gb and 10 Gb over Cat5 and higher UTP cabling and support backwards compatibility. The standard also needs to delineate Energy Efficient Ethernet specifications. John D'Ambrosia further relates that these new specifications will reinforce the need for the four-pair PoE standard currently being developed by the IEEE 802.3bt Task Force, which supports delivery of more than 25 watts of power.
Do you see a future for four-pair standards for home applications? Let us know what you think. Leave us a comment below.
The Only Wi-Fi Hotspot You’ll Need For Life
- 13 Nov, 2014
- Posted by derekpsneed
- 1 Comment(s)
Elon Musk, oftentimes compared to a real-life Tony Stark and more popularly known as the CEO of Tesla, took to Twitter to announce that his latest star-studded venture involves putting internet satellites into orbit.
His spaceflight company, SpaceX, is planning a roughly $1 billion project aimed at bringing "unfettered internet access for the masses" at a "very low cost". Or at least that's how he responded to a Twitter follower after announcing on the microblogging platform that the project is still in its early stages of development.
Elon Musk has joined forces with WorldVu Satellites Ltd. founder Greg Wyler to utilize one of his radio spectrum channels for a plan that would launch 700 micro-satellites, each weighing just under 250 pounds. Elon Musk teased followers, stating that the real announcement would be made in 2 to 3 months.
When Greg Wyler left Google in early September, he allegedly "took with him the rights to certain radio spectrum that could be used to provide Internet access, according to a person familiar with the matter." According to The Wall Street Journal, Mile Marker 101 CEO Neil Mackay comments that Wyler's decision to leave the company could hurt Google, but that Google is currently working on other projects aimed at expanding internet access, such as the Loon Project and Google Fiber, and has purchased two new ventures, Titan Aerospace and Skybox Imaging, in an effort to deliver internet from the sky.
Broadband delivered over the airwaves would free internet service consumers from telecoms that are currently overcharging for sub-par broadband speeds.
If these companies manage to become Internet service providers and compete with current Telco companies, how do you think the FCC would approach preserving net neutrality?
The Forgotten History of Video Communication
- 08 Nov, 2014
- Posted by derekpsneed
- 0 Comment(s)
Sometimes revolutionary technology sadly gets left behind in R&D labs because it lacks commercial feasibility. Since the invention of the telephone, telecommunication companies have yearned to make video and voice communication a possibility for everyday consumers. Bell Labs reports that it invested a hefty $500 million developing a technology that was too expensive for everyday users to adopt. This timeline of the videophone captures the enormous labor spent on a technology that materialized on a completely different platform: online.

Sources: http://www.hml.queensu.ca/telehuman http://blog.modernmechanix.com/your-telephone-of-tomorrow/5/#mmGal http://paleofuture.gizmodo.com/a-brief-history-of-the-videophone-that-almost-was-1214969187
What other features do you forecast for the ‘telephone of the future’? Leave us a comment below!