Tuesday, March 31, 2009

Biggest mistake for IPv6

The Internet engineering community says its biggest mistake in developing IPv6 - a long-anticipated upgrade to the Internet’s main communications protocol - is that it lacks backwards compatibility with the existing Internet Protocol, known as IPv4.
At a panel discussion held here Tuesday, leaders of the Internet Engineering Task Force (IETF) admitted that they didn’t do a good enough job making sure native IPv6 devices and networks would be able to communicate with their IPv4-only counterparts when they designed the industry standard 13 years ago.
“The lack of real backwards compatibility for IPv4 was the single critical failure,” says Leslie Daigle, Chief Internet Technology Officer for the Internet Society. “There were reasons at the time for doing that…But the reality is that nobody wants to go to IPv6 unless they think their friends are doing it, too.”
Originally, IPv6 developers envisioned a scenario where end-user devices and network backbones would operate IPv4 and IPv6 side-by-side in what’s called dual-stack mode.
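At the host level, dual-stack operation can be as simple as one listening socket that speaks both protocols. The sketch below is an illustration using Python's standard socket API, not anything from the article; clearing the IPV6_V6ONLY option is what lets a single IPv6 socket also accept IPv4 clients, though the option's default varies by operating system.

```python
import socket

# Minimal dual-stack sketch: one IPv6 socket that also accepts IPv4
# connections (IPv4 peers appear as mapped addresses such as
# ::ffff:192.0.2.1). Clearing IPV6_V6ONLY enables the IPv4 side.
sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
sock.bind(("::", 0))   # all interfaces, OS-assigned port
sock.listen(5)
host, port = sock.getsockname()[:2]
print(f"dual-stack listener on [{host}]:{port}")
sock.close()
```

The catch the IETF panel describes is that this only helps when both stacks are present end to end; it does nothing for an IPv4-only device talking to an IPv6-only one.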
However, they didn’t take into account that some IPv4 devices would never be upgraded to IPv6, and that some all-IPv6 networks would need to communicate with IPv4-only devices or content.
IPv6 proponents say the lack of mechanisms for bridging between IPv4 and IPv6 is the single, biggest reason that most ISPs and enterprises haven’t deployed IPv6.
“Our transition strategy was dual-stack, where we would start by adding IPv6 to the hosts and then gradually over time we would disable IPv4 and everything would go smoothly,” says IETF Chair Russ Housley, who added that IPv6 transition didn’t happen according to plan.
In response, the IETF is developing new IPv6 transition tools that it expects to complete by the end of 2009, Housley said.
“The reason more IPv6 deployment isn’t being done is because the people who are doing the job found that they needed these new transition tools,” Housley said. “These tools are necessary to ease deployment.”
IPv6 is needed because the Internet is running out of IPv4 addresses. IPv4 uses 32-bit addresses and can support approximately 4.3 billion individually addressed devices on the Internet. IPv6, on the other hand, uses 128-bit addresses and can support 2 to the 128th power devices - roughly 3.4 x 10^38, or 340 undecillion.
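The figures behind those claims can be checked with two lines of arithmetic:

```python
# Address-space sizes quoted above, computed directly.
ipv4_total = 2 ** 32    # 32-bit addresses
ipv6_total = 2 ** 128   # 128-bit addresses
print(f"IPv4: {ipv4_total:,}")      # 4,294,967,296 (~4.3 billion)
print(f"IPv6: {ipv6_total:.2e}")    # about 3.40e+38
```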
Experts predict IPv4 addresses will be gone by 2012. At that point, all ISPs, government agencies and corporations will need to support IPv6 on their backbone networks. Today, only a handful of U.S. organizations – including the federal government and a few leading-edge companies like Bechtel and Google - have deployed IPv6 across their networks.
Richard Jimmerson, chief information officer for the American Registry for Internet Numbers, says demand for IPv4 address space has not slowed down despite the global economic meltdown.
Jimmerson said he’s seen a shift among network operators during the last year as it has become clear that IPv4 addresses are truly running out. “They’re further along in moving towards acceptance of IPv6,” he said.
When IPv4 addresses run out, ISPs and enterprises will require several new transition mechanisms to bridge between IPv4 and IPv6 devices, IETF leaders say.
The transition mechanisms under development by the IETF are:
* Dual-Stack Lite, a technique developed by Comcast that allows for incremental deployment of IPv6. With Dual-Stack Lite, a carrier would give new customers special home gateways that take IPv4 packets from their legacy PCs and printers and ship them over an IPv6 tunnel to a carrier-grade network address translator (NAT).
* NAT64, a mechanism for translating IPv6 packets into IPv4 packets and vice versa. A related tool, dubbed DNS64, allows an IPv6-only device to call up an IPv4-only name server. These two tools would allow an IPv6 device to communicate with IPv4-only devices and content.
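The translation NAT64 and DNS64 perform rests on a simple address mapping: an IPv4 address is embedded in the low 32 bits of a dedicated IPv6 prefix (the well-known prefix 64:ff9b::/96 was later standardized for this in RFC 6052). The sketch below illustrates that mapping with Python's ipaddress module; the helper names are our own, not part of any standard API.

```python
import ipaddress

# NAT64 well-known prefix (64:ff9b::/96, per RFC 6052); an IPv4 address
# occupies the low 32 bits of a synthesized IPv6 address.
WKP = ipaddress.IPv6Address("64:ff9b::")

def synthesize(ipv4_str: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address into the NAT64 well-known prefix."""
    v4 = ipaddress.IPv4Address(ipv4_str)
    return ipaddress.IPv6Address(int(WKP) | int(v4))

def extract(v6: ipaddress.IPv6Address) -> ipaddress.IPv4Address:
    """Recover the original IPv4 address from a synthesized one."""
    return ipaddress.IPv4Address(int(v6) & 0xFFFFFFFF)

mapped = synthesize("192.0.2.33")
print(mapped)            # 64:ff9b::c000:221
print(extract(mapped))   # 192.0.2.33
```

DNS64 applies the same embedding on the fly: when an IPv6-only host asks for a name that has no IPv6 address, the resolver synthesizes one from the name's IPv4 address, and the NAT64 gateway undoes the mapping and forwards the traffic over IPv4.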
The IETF also is considering work that would allow ISPs to share a single public IPv4 address among multiple users.
“We need to take a two-pronged approach,” says Alain Durand, director of IPv6 architecture and Internet governance in the Office of the CTO for Comcast. “We need to embrace IPv6, but we also need to build an IPv6 transition bridge that will allow for sharing of IPv4 addresses and IPv4 and IPv6 tunneling.”
Durand says these transition mechanisms are required so that IPv6 “can be deployed incrementally.”
Jari Arkko, an engineer with Ericsson Research, says the IETF community has shown “tremendous interest” in developing these IPv6 transition mechanisms.
These mechanisms aren’t about “extending IPv4 for eternity,” Daigle says. “We still need to be doing this to make sure that we can do a global transition from a primarily IPv4 network to a primarily IPv6 network.”
The overall message of the IETF panel was that network operators need to plan for IPv6 deployment whether they like it or not.
IETF leaders say the networking industry is starting to accept that they have to migrate to IPv6, even if it doesn’t offer any concrete business advantages.
“People are deploying, albeit slowly,” says Kurtis Lindqvist, CEO of Netnod Internet Exchange in Stockholm. “The core networks are already capable of IPv6…Our biggest challenge is to make the transition work.”
Daigle says the business case for IPv6 is simple: If companies want to have Internet applications that continue to work and scale, they need to deploy IPv6.
“IPv6 is the path forward,” Daigle says. “There are a lot of technologies being discussed and being promoted for extending the use of IPv4 but that really is a bridging mechanism because the path forward is IPv6.”
Daigle says the time for CIOs to start planning for IPv6 is now, before IPv4 addresses are depleted.
Network operators need “to be aware of [IPv6] and accept it as coming and look forward to it as coming,” Daigle says. “If at this point, IPv6 is not in your refresh cycle planning, it should be. If you haven’t done a review of what applications would be impacted in a heavily NAT-ted world or in an IPv6 world, you should.”

Monday, March 30, 2009

Computer Security

http://www.snopes.com/computer/virus/conficker.asp

Last updated: 27 March 2009
Conficker

REAL VIRUS
Origins: Conficker.C (also known as Kido or Downadup) is the third iteration of a worm which first began slithering its way onto Windows-based PCs in November 2008, with each version growing more sophisticated than the last. Like many other forms of malware, after it has infected a target computer (by downloading a Trojan), it tries to prevent its removal by disabling anti-virus software and blocking access to security-related web sites.

The Conficker worm's purpose is to create a "botnet" of infected computers that can be controlled by Conficker's creators, allowing them to engage in such activities as stealing stored information from those computers, launching attacks against particular web sites, or directing infected machines to send out spam e-mails. Although no one is quite sure how many computers have already been infected by Conficker, estimates place the number upwards of a couple of million.

Beginning on 1 April 2009, infected computers will start attempting to "call home" (i.e., contact control servers in the botnet) in order to receive Conficker updates, which has led to claims that some apocalyptic cyber-event will occur on that date and result in millions of computers being wiped out or large portions of the Internet being disabled. Although no one really knows what's going to happen with Conficker on (or after) that date, security experts have opined that it likely won't be nearly as substantial as some of the wilder speculation would have it:
Security researchers say the reality is probably going to be more like what happened when the clocks on the world's computers turned to January 1, 2000, after lots of dire predictions about the so-called millennium bug. That is, not much at all. "It doesn't mean we're going to see some large cyber event on April 1," Dean Turner, director of the global intelligence network at Symantec Security Response, said.

It's likely that the people behind Conficker are interested in using the botnet, which is comprised of all the infected computers, to make money by distributing spam or other malware, experts speculate. To do so, they would need the computers and networks to stay in operation. "Most of these criminals, even though they haven't done something with this botnet yet, are profit-driven," said Paul Ferguson, an advanced-threats researcher for Trend Micro. "They don't want to bring down the infrastructure. That would not allow them to continue carrying out their scams."
In February 2009, Microsoft announced it had formed a partnership with other technology agencies to coordinate a response to Conficker and was offering a $250,000 reward for information leading to the arrest and conviction of those responsible for launching the Conficker code on the Internet. In October 2008, Microsoft issued a patch to close a vulnerability in Windows-based systems that could be used for a wormable exploit, and in March 2009 it published an alert with instructions and tools for stopping the spread of Conficker and removing it from infected systems.
________________________________________

Sources
Mills, Elinor. "Conficker Time Bomb Ticks, But Don't Expect Boom."
CNET News. 25 March 2009.
Potter, Ned. "Conficker Computer Worm Threatens Chaos."
ABC News. 25 March 2009.
Prince, Brian. "Conficker: The Windows Worm That Won't Go Away."
eWeek. 25 March 2009.
Worthen, Ben. "Conficker: Don’t Believe the Hype."
The Wall Street Journal. 26 March 2009.
