
Ethernet

From Wikipedia, the free encyclopedia


Ethernet is a frame-based computer networking technology for local area networks (LANs). The name comes from the physical concept of the ether. It defines wiring and signaling for the physical layer, and frame formats and protocols for the Media Access Control (MAC)/data link layer of the OSI model. Ethernet is mostly standardized as IEEE 802.3. It has been the most widespread LAN technology from the 1990s to the present, and has largely displaced competing LAN standards such as token ring, FDDI, and ARCNET.


General description

Ethernet was originally based on the idea of peers on the network sending messages into a coaxial cable acting as a broadcast transmission medium. The methods used share some similarities with radio systems, though there are major differences, such as the fact that it is much easier to detect collisions in a cable broadcast system than in a radio broadcast. The cable providing the broadcast channel was referred to as the ether (an oblique reference to the luminiferous aether), and it is from this that the name Ethernet comes.

The coaxial cable was later replaced with point-to-point links connected together by hubs and/or switches in order to reduce installation costs, increase reliability, and enable point-to-point management and troubleshooting. StarLAN was the first step in the evolution of Ethernet from a coaxial cable bus to a hub-managed, twisted pair network. The advent of twisted-pair wiring enabled Ethernet to become a commercial success.

Addressing of packets is handled on Ethernet, as on all IEEE 802 LANs, by giving each peer a unique 48-bit MAC address. To improve performance, network adapters normally do not pass packets intended for other Ethernet stations to the host, but they can also be placed in promiscuous mode, in which they pass every packet to the host. Adapters generally come programmed with a globally unique address, but this can be overridden, either to avoid an address change when an adapter is replaced or to use locally administered addresses.
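
As an illustration, here is a minimal Python sketch (the example addresses are made up) of how a 48-bit address and its universal/local bit can be inspected:

    def parse_mac(mac: str) -> bytes:
        """Parse a colon-separated MAC address into its 6 raw octets."""
        octets = bytes(int(part, 16) for part in mac.split(":"))
        if len(octets) != 6:
            raise ValueError("a MAC address is 48 bits (6 octets)")
        return octets

    def is_locally_administered(mac: str) -> bool:
        # Bit 0x02 of the first octet is the universal/local bit:
        # 0 = globally unique (assigned by the vendor), 1 = locally administered.
        return bool(parse_mac(mac)[0] & 0x02)

    print(is_locally_administered("00:0A:95:9D:68:16"))  # False: vendor-assigned
    print(is_locally_administered("02:00:00:00:00:01"))  # True: locally administered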

Despite the huge changes in Ethernet from a thick coaxial cable bus running at 10 Mbit/s to point-to-point links running at 1 Gbit/s and beyond, the different variants remain essentially the same from the programmer's point of view and are easily interconnected using readily available inexpensive hardware. This is because the frame format remains the same, even though network access procedures are radically different.

Due to the ubiquity of Ethernet, the ever-decreasing cost of the hardware needed to support it, and the reduced panel space needed by twisted-pair Ethernet, most manufacturers now build the functionality of an Ethernet card directly into PC motherboards, obviating the need for a separate network card.

In detail

CSMA/CD shared medium Ethernet

A scheme known as carrier sense multiple access with collision detection (CSMA/CD) governs the way the computers share the channel. Originally developed in the late 1960s for the radio-based ALOHAnet in Hawaii, the scheme is relatively simple compared to token ring or master-controlled networks. When a computer wants to send information, it obeys the following algorithm:

Main procedure

  1. Frame ready for transmission
  2. Is the medium idle? If not, wait until it becomes idle; then wait the inter-frame gap period (9.6 µs in 10 Mbit/s Ethernet).
  3. Start transmitting
  4. Does a collision occur? If so, go to collision detected procedure.
  5. End successful transmission

Collision detected procedure

  1. Continue transmission until minimum packet time is reached (jam signal) to ensure that all receivers detect the collision
  2. Is maximum number of transmission attempts reached? If so, abort transmission.
  3. Calculate and wait random backoff period
  4. Re-enter main procedure at stage 1
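
The two procedures above combine as in the following Python sketch, a simplified model: medium.is_idle, medium.transmit, and medium.jam are hypothetical stand-ins for the physical layer, and the timing constants are the 10 Mbit/s values given above.

    import random
    import time

    SLOT_TIME = 51.2e-6       # 512 bit times at 10 Mbit/s
    INTER_FRAME_GAP = 9.6e-6  # 96 bit times at 10 Mbit/s
    MAX_ATTEMPTS = 16

    def send_frame(frame, medium):
        """CSMA/CD: returns True on success, False if transmission is aborted."""
        for attempt in range(1, MAX_ATTEMPTS + 1):
            while not medium.is_idle():        # step 2: wait for the medium...
                pass
            time.sleep(INTER_FRAME_GAP)        # ...then wait the inter-frame gap
            if medium.transmit(frame):         # steps 3-5: no collision, done
                return True
            medium.jam()                       # collision: make sure all stations see it
            # Truncated binary exponential backoff: wait a random number of
            # slot times in [0, 2^k - 1], where k grows with each collision
            # but is capped at 10.
            k = min(attempt, 10)
            time.sleep(random.randrange(2 ** k) * SLOT_TIME)
        return False                           # too many attempts: abort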

This works something like a dinner party, where all the guests talk to each other through a common medium (the air). Before speaking, each guest politely waits for the current guest to finish. If two guests start speaking at the same time, both stop and wait for short, random periods of time (measured in microseconds). The hope is that by each choosing a random period of time, both guests will not choose the same time to try to speak again, thus avoiding another collision. Exponentially increasing back-off times (determined using the truncated binary exponential backoff algorithm) are used when there is more than one failed attempt to transmit.

Ethernet originally used a shared coaxial cable winding around a building or campus to every attached machine. Computers were connected to an Attachment Unit Interface (AUI) transceiver, which in turn connected to the cable. While a simple passive wire was highly reliable for small Ethernets, it was not reliable for large extended networks, where damage to the wire in a single place, or a single bad connector, could make the whole Ethernet segment unusable. Multipoint systems are also prone to very strange failure modes when an electrical discontinuity reflects the signal in such a manner that some nodes work fine while others work slowly, due to excessive retries, or not at all (see standing wave for an explanation of why); these failures could be much more painful to diagnose than a complete failure of the segment. Debugging them often involved several people crawling around wiggling connectors while others watched the displays of computers running ping and shouted out reports as performance changed.

Since all communications happen on the same wire, any information sent by one computer is received by all, even if that information was intended for just one destination. The network interface card filters out information not addressed to it, interrupting the CPU only when applicable packets are received unless the card is put into "promiscuous mode". This "one speaks, all listen" property is a security weakness of shared-medium Ethernet, since a node on an Ethernet network can eavesdrop on all traffic on the wire if it so chooses. Use of a single cable also means that the bandwidth is shared, so that network traffic can slow to a crawl when, for example, the network and nodes restart after a power failure.
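
The filtering decision the card makes can be sketched as follows. This is a simplified software model: real adapters do this in hardware, and the multicast_groups set is a hypothetical stand-in for the card's multicast filter.

    BROADCAST = bytes([0xFF] * 6)

    def nic_accepts(dest: bytes, own_mac: bytes,
                    multicast_groups=frozenset(), promiscuous=False) -> bool:
        """Decide whether a received frame is passed up to the host."""
        if promiscuous:
            return True                        # promiscuous mode: pass everything
        if dest == own_mac or dest == BROADCAST:
            return True
        # Bit 0x01 of the first octet marks a group (multicast) address.
        return bool(dest[0] & 0x01) and dest in multicast_groups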

Ethernet repeaters and hubs

For signal degradation and timing reasons, coaxial Ethernet segments had a restricted size which depended on the medium used. For example, 10BASE5 coax cables had a maximum length of 500 metres (1,640 feet). Also, as was the case with most other high-speed buses, Ethernet segments had to be terminated with a resistor at each end. For coaxial-cable-based Ethernet, each end of the cable had a 50-ohm resistor and heat sink attached, typically built into a male BNC or N connector and attached to the last device on the bus (or, if vampire taps were in use, to a socket mounted on the end of the cable just past the last device). If this was not done, or if there was a break in the cable, the AC signal on the bus was reflected rather than dissipated when it reached the end. This reflected signal was indistinguishable from a collision, and so no communication could take place.

A greater length could be obtained by using an Ethernet repeater, which took the signal from one Ethernet cable and repeated it onto another cable. Repeaters were used to connect segments such that there were up to five Ethernet segments between any two hosts, of which three could have attached devices. Repeaters could also detect an improperly terminated link from the continuous collisions and stop forwarding data from it. Hence they alleviated the problem of cable breakages: when an Ethernet coax segment broke, all devices on that segment were unable to communicate, but repeaters allowed the other segments to continue working (though, depending on which segment was broken and the layout of the network, the resulting partitioning may have left other segments unable to reach important servers and thus effectively useless).
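
This segment budget is often remembered as the "5-4-3 rule". As a trivial sketch of the constraint (not of any real configuration tool):

    def repeated_path_ok(segments: int, populated_segments: int) -> bool:
        """At most 5 segments (hence 4 repeaters) between any two hosts,
        of which at most 3 may have devices attached."""
        return (segments <= 5
                and populated_segments <= 3
                and populated_segments <= segments)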

People recognized the advantages of cabling in a star topology (primarily that only faults at the star point will result in a badly partitioned network), and network vendors started creating repeaters having multiple ports, thus reducing the number of repeaters required at the star point; multiport Ethernet repeaters became known as "hubs". Network vendors such as DEC and SynOptics sold hubs that connected many 10BASE-2 thin coaxial segments.

There were also "multi-port transceivers" or "fan-outs". These could be connected to each other and/or a coax backbone. The best-known early example was DEC's DELNI. These devices allow multiple hosts with AUI connections to share a single transceiver. They also allow creation of a small standalone Ethernet segment without using a coaxial cable.

[Image: A twisted-pair 10BASE-T cable, used to carry 10BASE-T Ethernet.]

Ethernet on unshielded twisted-pair cables (UTP), beginning with StarLAN and continuing with 10BASE-T, was designed for point-to-point links only, and all termination was built into the device. This changed hubs from a specialist device used at the center of large networks to a device that every twisted-pair-based network with more than two machines had to use. This structure made Ethernet networks more reliable by preventing problems with one peer or its associated cable from affecting other devices on the network (though a failure of a hub or an inter-hub link could still affect many users). Also, as twisted-pair Ethernet is point-to-point and terminated inside the hardware, the total empty panel space required around a port is much reduced, making it easier to design hubs with many ports and to integrate Ethernet onto computer motherboards.

Despite the physical star topology, hubbed Ethernet networks initially used half-duplex operation and CSMA/CD, with only minimal cooperation from the hub (primarily the introduction of a jam signal so a collision is detected network-wide) in dealing with packet collisions. Every packet is sent to every port on the hub, so bandwidth and security problems are not addressed. The total throughput of the hub is limited to that of a single link, and all links must operate at the same speed. This compromise was used in StarLAN in order to maintain compatibility with classical Ethernet, but was abandoned with the advent of 10BASE-T and later twisted-pair Ethernets. (See "Bridging and Switching" below.)

Collisions also reduce total throughput, but only when the network is heavily loaded by multiple nodes all wishing to transmit, not simply because a few nodes are consistently transmitting large frames. In the worst case, when there are many hosts with long cables attempting to transmit many short frames, excessive collisions can reduce throughput. However, a Xerox report in 1980 summarized the results of having 20 fast nodes attempting to transmit packets of various sizes as quickly as possible on the same Ethernet segment. The results showed that, even for minimal Ethernet frames (64 bytes), 90% throughput on the LAN was the norm. This contrasts with token-passing LANs (Token Ring, Token Bus), all of which suffer throughput degradation as each new node joins the LAN, due to token waits.
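
For perspective, the framing overhead alone is easy to work out. The short sketch below (assuming the standard 8-byte preamble and 12-byte inter-frame gap, and ignoring collisions entirely) computes the maximum frame rate at 10 Mbit/s and the share of the bit rate that carries frame data:

    BIT_RATE = 10_000_000  # 10 Mbit/s
    PREAMBLE = 8           # preamble + start-of-frame delimiter, in bytes
    GAP = 12               # the 9.6 us inter-frame gap equals 12 byte times

    for frame_bytes in (64, 512, 1518):
        wire_bits = (frame_bytes + PREAMBLE + GAP) * 8
        frames_per_second = BIT_RATE / wire_bits
        efficiency = frame_bytes * 8 * frames_per_second / BIT_RATE
        print(f"{frame_bytes:5d}-byte frames: {frames_per_second:8.0f}/s, "
              f"{efficiency:.0%} of the bit rate carries frame data")

Even with no collisions at all, minimal 64-byte frames spend roughly a quarter of the wire time on preamble and gap overhead.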

Twisted-pair Ethernets no longer use collisions to limit network access, but instead buffer packets in the switch and use flow-control messages to prevent buffer overflow.

Bridging and Switching

While repeaters could isolate some aspects of Ethernet segments, such as cable breakages, they still forward all traffic to all Ethernet devices. This creates significant limits on how many machines can communicate on an Ethernet network. To alleviate this, bridging was created to communicate at the data link layer while isolating the physical layer. With bridging, only well-formed packets are forwarded from one Ethernet segment to another; collisions and packet errors are isolated. Bridges learn where devices are, by watching MAC addresses, and do not forward packets across segments when they know the destination address is not located in that direction.

Early bridges examined each packet one by one, and were significantly slower than hubs (repeaters) at forwarding traffic, especially when handling many ports at the same time. In 1989 the networking company Kalpana introduced its EtherSwitch, the first Ethernet switch. An Ethernet switch performs bridging in hardware, allowing it to forward packets at full wire speed.

Initially, Ethernet bridges and switches work somewhat like Ethernet hubs, with all traffic being echoed to all ports. However, as the switch "learns" the end-points associated with each port, it ceases to send non-broadcast traffic to ports other than the intended destination. In this way, Ethernet switching can allow the full wire speed of Ethernet to be used by any given pair of ports on a single switch.
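
A minimal sketch of this learning behaviour follows (port identifiers and the flooding policy are simplified assumptions; real switches also age table entries out):

    class LearningSwitch:
        """Toy model of MAC learning and forwarding."""
        BROADCAST = bytes([0xFF] * 6)

        def __init__(self, ports):
            self.ports = ports
            self.table = {}                  # source MAC -> port last seen on

        def out_ports(self, src, dst, in_port):
            self.table[src] = in_port        # learn where the sender lives
            known = self.table.get(dst)
            if known is None or dst == self.BROADCAST:
                # Unknown destination or broadcast: flood to all other ports.
                return [p for p in self.ports if p != in_port]
            return [] if known == in_port else [known]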

Since packets are typically only delivered to the port they are intended for, traffic on a switched Ethernet is slightly less public than on shared-medium Ethernet. Despite this, switched Ethernet should still be regarded as an insecure network technology, because it is easy to subvert switched Ethernet systems by means such as ARP spoofing and MAC flooding. The bandwidth advantages, the slightly better isolation of devices from each other and the elimination of the chaining limits inherent in hubbed Ethernet have made switched Ethernet the dominant network technology.

When only a single device (anything but a hub) is connected to a switch port, full-duplex Ethernet becomes possible. In full-duplex mode both devices can transmit to each other at the same time, and there is no collision domain. This doubles the aggregate bandwidth of the link, and was sometimes advertised as double the link speed (e.g. 200 Mbit/s). However, this is misleading, as performance will only double if traffic patterns are symmetrical (which in reality they rarely are). The elimination of the collision domain also means that all the link's bandwidth can be used (collisions can occupy a lot of bandwidth as links get busy) and that segment length is no longer limited by the need for correct collision detection (this is most significant with some of the fiber variants of Ethernet).

Dual speed hubs

In the early days of Fast Ethernet, Fast Ethernet switches were relatively expensive devices. Hubs, however, suffered from the problem that if any 10BASE-T devices were connected then the whole system had to run at 10 Mbit/s. Therefore a compromise between a hub and a switch appeared, known as a dual-speed hub. These devices effectively split the network into two sections, each acting like a hubbed network at its respective speed, and acted as a two-port switch between those two sections. This allowed mixing of the two speeds without the cost of a Fast Ethernet switch.

More advanced networks

Simple switched Ethernet networks still suffer from a number of issues:

  • They suffer from single points of failure; e.g., if one link or switch goes down in the wrong place, the network ends up partitioned.
  • It is possible to trick switches or hosts into sending data to a machine even if it is not intended for it, as indicated above.
  • It is possible for any host to flood the network with broadcast traffic, forming a denial-of-service attack against any hosts that run at the same speed as, or a lower speed than, the attacking device.
  • They suffer from bandwidth choke points where a lot of traffic is forced down a single link.

Some managed switches offer a variety of tools to combat these issues, including:

  • Spanning-tree protocol to maintain the active links of the network as a tree while allowing physical loops for redundancy (a small sketch of the resulting topology follows this list).
  • Various port protection features (as it is far more likely an attacker will be on an end-system port than on a switch-switch link).
  • VLANs to keep different classes of users separate while using the same physical infrastructure.
  • Fast routing at higher levels (to route between those VLANs).
  • Link aggregation to add bandwidth to overloaded links and to provide some measure of redundancy (though, as the aggregated links connect the same pair of switches, they will not protect against switch failure).
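
A toy model of what a spanning-tree computation produces: real STP elects a root bridge and exchanges BPDUs, whereas here the root and link list are simply given, and a breadth-first search picks the active links.

    from collections import deque

    def active_links(links, root):
        """Pick a loop-free subset of switch-to-switch links reaching
        every switch from the root; all other links stay blocked."""
        neighbours = {}
        for a, b in links:
            neighbours.setdefault(a, []).append(b)
            neighbours.setdefault(b, []).append(a)
        tree, seen, queue = [], {root}, deque([root])
        while queue:
            switch = queue.popleft()
            for peer in neighbours.get(switch, []):
                if peer not in seen:
                    seen.add(peer)
                    tree.append((switch, peer))   # this link forwards
                    queue.append(peer)
        return tree

    # A triangle of switches contains one physical loop; one link ends up blocked.
    print(active_links([("A", "B"), ("B", "C"), ("C", "A")], root="A"))
    # [('A', 'B'), ('A', 'C')] -- the B-C link stays blocked, breaking the loop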

Autonegotiation

It is essential that both the switch port and the device connected to it use the same speed and duplex settings. To that end, autonegotiation was introduced in 1995 as an option for 100BASE-TX devices (802.3u). Although it worked correctly in many applications, it had two problems. First, its implementation was optional, which meant some devices were incapable of autonegotiating at all. Second, a portion of the specification was not tightly written: although most manufacturers implemented it one way, some, including network giant Cisco, implemented it the other way. This unfortunately gave autonegotiation a bad name and, moreover, led Cisco to recommend to its customers and administrators that they not use it.

The ambiguous portions of the autonegotiation specification were eliminated by the 1998 release of 802.3z (1000BASE-X), and the negotiation protocols over twisted pair were significantly enhanced for 802.3ab (1000BASE-T). More notably, the new standard specified that autonegotiation must be enabled to achieve gigabit speed over copper wiring. Now, all network equipment manufacturers, including Cisco[1], recommend using autonegotiation whenever possible.

Note that some switch operating systems, as well as some card drivers, still offer the option to disable autonegotiation and force a twisted-pair connection to 1000Full or 1000Half, but doing so is against the specification and should never be used, as none of the other parameters will be negotiated properly. Instead, the proper way to force gigabit Ethernet over a Cat 5 connection, for example, is to keep autonegotiation enabled but limit the advertised capabilities to 1000BASE-T only[2].

In cases of very old equipment that has trouble with autonegotiation, it is recommended to disable autonegotiation and force the same speed and duplex on both sides of the connection. If this is not possible (e.g. one end is an unmanaged switch), the end with autonegotiation disabled should generally be set to half duplex.

Ethernet frame types and the EtherType field

Frames are the units in which data packets travel on the wire.

There are several types of Ethernet frame:

  • Ethernet Version 2 ("Ethernet II", or DIX) framing
  • Novell's "raw" IEEE 802.3 framing
  • IEEE 802.3 framing with an IEEE 802.2 LLC header
  • IEEE 802.3 framing with IEEE 802.2 LLC and SNAP headers

In addition, Ethernet frames may optionally contain an IEEE 802.1Q tag identifying the VLAN the frame belongs to and its IEEE 802.1p priority (quality of service). This doubles the potential number of frame types.

The different frame types have different formats and MTU values, but can coexist on the same physical medium.

Ethernet Type II Frame format

The most common Ethernet Frame format, type II

It is claimed that some older (Xerox?) Ethernet specification had a 16-bit length field, although the maximum length of a packet was 1500 bytes. Versions 1.0 and 2.0 of the Digital/Intel/Xerox (DIX) Ethernet specification, however, have a 16-bit sub-protocol label field called the EtherType, with the convention that values between 0 and 1500 indicated the use of the original Ethernet format with a length field, while values of 1536 decimal (0600 hexadecimal) and greater indicated the use of the new frame format with an EtherType sub-protocol identifier.

IEEE 802.3 defined the 16-bit field after the MAC addresses as a length field again, with the MAC header followed by an IEEE 802.2 LLC header. The convention described earlier allows software to determine whether a frame is an Ethernet II frame or an IEEE 802.3 frame, allowing the coexistence of both standards on the same physical medium. All 802.3 frames have an IEEE 802.2 logical link control (LLC) header. By examining this header, it is possible to determine whether it is followed by a SNAP (subnetwork access protocol) header. (Some protocols, particularly those designed for the OSI networking stack, operate directly on top of 802.2 LLC, which provides both datagram and connection-oriented network services.) The LLC header includes two additional eight-bit address fields (called service access points or SAPs in OSI terminology); when both source and destination SAP are set to the value 0xAA, the SNAP service is requested. The SNAP header allows EtherType values to be used with all IEEE 802 protocols, as well as supporting private protocol ID spaces. In IEEE 802.3x-1997, the IEEE Ethernet standard was changed to explicitly allow the use of the 16-bit field after the MAC addresses to be used as a length field or a type field.

Novell's "raw" 802.3 frame format was based on early IEEE 802.3 work. Novell used it as a starting point to create the first implementation of its own IPX network protocol over Ethernet. It does not use any LLC header; the IPX packet starts directly after the length field. In principle this is not interoperable with the later variants of 802.x Ethernet, but since such IPX packets always begin with FF FF (the IPX checksum field), a pattern that never occurs in a valid LLC header, raw framing mostly coexists on the wire with other Ethernet implementations (with the notable exception of some early forms of DECnet, which were confused by it).
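
The conventions from the last few paragraphs give a well-known recipe for telling the frame types apart. A sketch, assuming frame holds the raw bytes starting at the destination MAC address and ignoring 802.1Q tags (covered below):

    def classify_frame(frame: bytes) -> str:
        type_or_length = int.from_bytes(frame[12:14], "big")
        if type_or_length >= 0x0600:          # 1536 and up: an EtherType
            return f"Ethernet II, EtherType 0x{type_or_length:04X}"
        if frame[14:16] == b"\xff\xff":       # raw IPX checksum field
            return "Novell raw IEEE 802.3"
        if frame[14:16] == b"\xaa\xaa":       # DSAP = SSAP = 0xAA
            return "IEEE 802.3 with LLC and SNAP headers"
        return "IEEE 802.3 with an LLC header"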

Novell NetWare used this frame type by default until the mid-nineties, and since NetWare was then very widespread (while IP was not), at some point in time most of the world's Ethernet traffic ran over "raw" 802.3 carrying IPX. Since NetWare 4.10, NetWare defaults to IEEE 802.2 with LLC (NetWare frame type Ethernet_802.2) when using IPX. (See "Ethernet Framing" in References for details.)

Mac OS uses 802.2/SNAP framing for the AppleTalk protocol suite on Ethernet ("EtherTalk") and Ethernet II framing for TCP/IP.

The 802.2 variants of Ethernet are not in widespread use on common networks today, with the exception of large corporate NetWare installations that have not yet migrated to NetWare over IP. In the past, many corporate networks supported 802.2 Ethernet to allow transparent translating bridges between Ethernet and IEEE 802.5 Token Ring or FDDI networks. The most common framing type used today is Ethernet Version 2, as it is used by most Internet Protocol based networks, with its EtherType set to 0x0800 for IPv4 and 0x86DD for IPv6.

There exists an Internet standard for encapsulating IP version 4 traffic in IEEE 802.2 frames with LLC/SNAP headers. It is almost never implemented on Ethernet (although it is used on FDDI and on Token Ring, IEEE 802.11, and other IEEE 802 networks). IP traffic cannot be encapsulated in IEEE 802.2 LLC frames without SNAP because, although there is an LLC protocol type for IP, there is no LLC protocol type for ARP. IP version 6 can also be transmitted over Ethernet using IEEE 802.2 with LLC/SNAP but, again, that is almost never used (although LLC/SNAP encapsulation of IPv6 is used on IEEE 802 networks).

The IEEE 802.1Q tag, if present, is placed between the Source Address and the EtherType or Length fields. The first two bytes of the tag are the Tag Protocol Identifier (TPID) value of 0x8100. This is located in the same place as the EtherType/Length field in untagged frames, so an EtherType value of 0x8100 means the frame is tagged, and the true EtherType/Length is located after the tag. The TPID is followed by two bytes containing the Tag Control Information (TCI) (the IEEE 802.1p priority (quality of service) and VLAN id). The tag is followed by the rest of the frame, using one of the types described above.
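
Continuing the classification sketch above (same frame layout assumptions), tag handling might look like this:

    def untag(frame: bytes):
        """Strip an 802.1Q tag if present; return ((priority, vlan), frame)."""
        if frame[12:14] != b"\x81\x00":       # TPID 0x8100 marks a tagged frame
            return None, frame
        tci = int.from_bytes(frame[14:16], "big")
        priority = tci >> 13                  # 3-bit 802.1p priority
        vlan_id = tci & 0x0FFF                # 12-bit VLAN identifier
        # Remove the 4 tag bytes so the remainder parses as an ordinary frame.
        return (priority, vlan_id), frame[:12] + frame[16:]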

Varieties of Ethernet

Main article: Varieties of Ethernet

The first Ethernet networks, 10BASE5, used thick yellow cable with vampire taps as a shared medium (using CSMA/CD). Later, 10BASE2 Ethernet used thinner coaxial cable (with BNC connectors) as the shared CSMA/CD medium. The later StarLAN 1BASE5 and 10BASE-T used twisted pair connected to Ethernet hubs with RJ-45 connectors.

Currently Ethernet has many varieties, which differ both in speed and in the physical medium used. Perhaps the most common forms are 10BASE-T, 100BASE-TX, and 1000BASE-T. All three use twisted-pair cables and RJ-45 connectors, and run at 10 Mbit/s, 100 Mbit/s, and 1 Gbit/s, respectively. 10-gigabit Ethernet is becoming more popular in both enterprise and carrier networks, with discussions starting on 40G and 100G Ethernet. The higher-speed varieties use fiber-optic cable, and through its history there have also been RF versions of Ethernet, both wireline and wireless.

History

Ethernet was originally developed as one of the many pioneering projects at Xerox PARC. A common story states that Ethernet was invented in 1972, when Robert Metcalfe wrote a memo to his bosses at PARC about Ethernet's potential. But Metcalfe claims Ethernet was actually invented over a period of several years. In 1976, Metcalfe and his assistant David Boggs published a paper titled Ethernet: Distributed Packet-Switching For Local Computer Networks.

The experimental Ethernet described in that paper ran at 3 Mbit/s, and had 8-bit destination and source address fields, so Ethernet addresses weren't the global addresses they are today. By software convention, the 16 bits after the destination and source address fields were a packet type field, but, as the paper says, "different protocols use disjoint sets of packet types", so those were packet types within a given protocol, rather than the packet type in current Ethernet, which specifies the protocol being used.

Metcalfe left Xerox in 1979 to promote the use of personal computers and local area networks (LANs), forming 3Com. He convinced DEC, Intel, and Xerox to work together to promote Ethernet as a standard, the so-called "DIX" standard, for "Digital/Intel/Xerox"; it standardized 10 Mbit/s Ethernet, with 48-bit destination and source addresses and a global 16-bit type field. The standard was first published on September 30, 1980. It competed with two largely proprietary systems, token ring and ARCNET, but those soon found themselves buried under a tidal wave of Ethernet products. In the process, 3Com became a major company.

Metcalfe sometimes jokingly credits Jerry Saltzer for 3Com's success. Saltzer cowrote an influential paper suggesting that token-ring architectures were theoretically superior to Ethernet-style technologies. This result, the story goes, left enough doubt in the minds of computer manufacturers that they decided not to make Ethernet a standard feature, which allowed 3Com to build a business around selling add-in Ethernet network cards. This also led to the saying "Ethernet works better in practice than in theory," which, though a joke, actually makes a valid technical point: the characteristics of typical traffic on actual networks differ from what had been expected before LANs became common in ways that favor the simple design of Ethernet. Add to this the real speed/cost advantage Ethernet products have continually enjoyed over other (Token, FDDI, ATM, etc.) LAN implementations and we see why today's result is that "connect the PC to the network" means connect it via Ethernet.

Metcalfe and Saltzer worked on the same floor at MIT's Project MAC while Metcalfe was doing his Harvard dissertation, in which he worked out the theoretical foundations of Ethernet.

Related standards

  • Networking standards that are not part of the IEEE 802.3 Ethernet standard, but support the Ethernet frame format, and are capable of interoperating with it.
    • LattisNet — A SynOptics pre-standard twisted-pair 10 Mbit/s variant.
    • 100BaseVG — An early contender for 100 Mbit/s Ethernet that ran over Category 3 cabling using four pairs; a commercial failure.
    • TIA 100BASE-SX — Promoted by the Telecommunications Industry Association. 100BASE-SX is an alternative implementation of 100 Mbit/s Ethernet over fiber; it is incompatible with the official 100BASE-FX standard. Its main feature is interoperability with 10BASE-FL, supporting autonegotiation between 10 Mbit/s and 100 Mbit/s operation, a feature lacking in the official standards due to the use of differing LED wavelengths. It is targeted at the installed base of 10 Mbit/s fiber network installations.
    • TIA 1000BASE-TX — Promoted by the Telecommunications Industry Association, it was a commercial failure, and no products exist. 1000BASE-TX uses a simpler protocol than the official 1000BASE-T standard so the electronics can be cheaper, but requires Category 6 cabling.
  • Networking standards that do not use the Ethernet frame format but can still be connected to Ethernet using MAC-based bridging.
    • 802.11 — A standard for wireless networking, often known as wireless Ethernet and usually operated with an Ethernet backbone.
  • Long Reach Ethernet
  • Avionics Full-Duplex Switched Ethernet

References
