Nanotechnology


Introduction

Molecular nanotechnology, or nanotechnology, is the name given to a specific sort of manufacturing technology used to build things from the atom up and to rearrange matter with atomic precision. In other words, nanotechnology is three-dimensional structural control of materials and devices at the molecular level. Nanoscale structures can be prepared, characterized, manipulated, and even visualized with the right tools.


"Nanotechnology is a tool-driven field."


Other terms, such as molecular engineering or molecular manufacturing, are also often applied when describing this emerging technology. The technology does not yet exist; however, scientists have recently gained the ability to observe and manipulate atoms directly. This is only one small aspect of a growing array of techniques in nanoscale science and technology, and the ability to make commercial products may yet be a few decades away.


“Nanotechnology is Engineering, Not Science.”

The central thesis of nanotechnology is that almost any chemically stable structure that is not specifically disallowed by the laws of physics can in fact be built. Theoretical and computational models indicate that molecular manufacturing systems are possible, in that they do not violate existing physical law, and these models also give us a feel for what a molecular manufacturing system might look like. Nanotechnology is a melting pot of sciences, combining applications of physics, chemistry, biology, electronics, and computing. Today, scientists are devising the numerous tools and techniques that will be needed to transform nanotechnology from computer models into reality.

Nanotechnology is often called the science of the small. It is concerned with manipulating particles at the atomic level, usually in order to form new compounds or make changes to existing substances. Nanotechnology is being applied to problems in electronics, biology, genetics and a wide range of business applications.

Matter is composed of atoms bound closely together into a molecular structure, which in turn determines the density of the material in question. Since factors such as molecular density, malleability, ductility, and surface tension all come into play, nanosystems have to be designed in a cost-effective manner that accommodates these conditions and helps to create machines capable of withstanding the vagaries of the environment.

Let us take the case of metals. Metals, solids in particular, consist of atoms held together by strong structural forces, which enable them to withstand high temperatures. Under applied force or heat, the molecular structure deforms in a particular fashion, the atoms settling into definite positions in a lattice. When the bonding is strong, the metal is able to withstand pressure; otherwise it becomes brittle and finally breaks. So only the strongest, hardest, highest-melting-point metals are worth considering as parts of nanomachines.

The trick is to manipulate atoms individually and place them exactly where needed to produce the desired structure. It is a challenge for scientists to understand the size, shape, strength, force, motion, and other properties involved in designing nanomachines. The idea of nanotechnology is therefore to master the characteristics of matter in an intelligent manner and so develop highly efficient systems.

The key aspect of nanotechnology is that nanoscale materials offer different chemical and physical properties from the bulk materials, and that these properties could form the basis of new technologies.



3G vs. WiFi



This paper compares and contrasts two technologies for delivering broadband wireless Internet access services: "3G" vs. "WiFi".

3G refers to the collection of third generation cellular technologies that are designed to allow mobile cellular operators to offer integrated data and voice services over cellular networks.

WiFi refers to the 802.11b wireless Ethernet standard that was designed to support wireless LANs.

Although the two technologies reflect fundamentally different service, industry, and architectural design goals, origins, and philosophies, each has recently attracted a lot of attention as candidates for the dominant platform for providing broadband wireless access to the Internet. It remains an open question as to the extent to which these two technologies are in competition or, perhaps, may be complementary. If they are viewed as in competition, then the triumph of one at the expense of the other would be likely to have profound implications for the evolution of the wireless Internet and service provider industry structure.

The two most important phenomena impacting telecommunications over the past decade have been the explosive parallel growth of the Internet and mobile telephone services. The Internet brought the benefits of data communications to the masses with email, the Web, and eCommerce, while mobile service has enabled "follow-me-anywhere/always on" telephony. The Internet helped accelerate the trend from voice- to data-centric networking. Now, these two worlds are converging. This convergence offers the benefits of new interactive multimedia services coupled to the flexibility and mobility of wireless.

The goal of the qualitative discussion of these two technologies is to provide a more concrete understanding of the differing worldviews encompassed by these technologies and their relative strengths and weaknesses in light of the forces shaping the evolution of wireless Internet services.

In focusing on 3G and WiFi, we are ignoring many other technologies that are likely to be important in the wireless Internet, such as satellite services, LMDS, MMDS, and other fixed wireless alternatives. We also ignore technologies such as Bluetooth and HomeRF, which have at times been touted as potential rivals to WiFi, at least in home networking environments.

3G offers a vertically integrated, top–down, service-provider approach to delivering wireless Internet access; while WiFi offers (at least potentially) an end-user-centric, decentralized approach to service provisioning. Although there is nothing intrinsic to the technology that dictates that one may be associated with one type of industry structure or another, we use these two technologies to focus our speculations on the potential tensions between these two alternative world views.



GPRS - General Packet Radio Service


Introduction

The name General Packet Radio Service (GPRS) doesn't convey much information to the non-technical user. Describing it as providing a direct link into the Internet from a GSM phone is much clearer. GPRS is to mobile networks what ADSL (Asymmetric Digital Subscriber Line) is to fixed telephone networks: the favoured solution for providing fast and inexpensive Internet links.

GPRS will undoubtedly speed up a handset's Internet connection, but it remains to be seen exactly how much speed can be wrung out of the system. GPRS works by amalgamating (aggregating) a number of separate data channels. This is feasible because data is broken down into small 'packets' which are re-assembled by the receiving handset into their original format. The trick is that the number of receiving channels does not have to match the number of sending channels. On the Internet, it is assumed that you want to view more information (such as a complicated Web page) than you want to send (such as a simple Yes or No response). So GPRS is an asymmetric technology: the number of 'down' channels used to receive data doesn't match the number of 'up' channels used to send data.

The task of defining GPRS has been the responsibility of the Special Mobile Group (SMG), part of the 3GPP initiative (3rd Generation Partnership Project). Rather than wait for the final version of the SMG standard, some manufacturers decided to ship GPRS handsets that conformed to an earlier version of the specifications known as SMG29. This basically offers two 'down' channels and a single 'up' channel. In practice each channel offers around 12-13 Kbit/s, so the top speed works out to around 26 Kbit/s. Most experts agree, however, that full interoperability between products will come with SMG31, which is capable of offering four 'down' channels and so equates to a top speed of around 52 Kbit/s, the same as a high-speed (V.90) landline modem.
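As a rough, back-of-the-envelope illustration of these figures, the Python sketch below takes the approximate per-channel rate quoted above as an assumption and shows how the headline speeds follow from the number of aggregated 'down' channels:

    # Rough GPRS throughput estimate; the per-channel rate is the
    # approximate 12-13 Kbit/s figure quoted above.
    PER_CHANNEL_KBITS = 13

    for down_channels in (2, 4):
        top_speed = down_channels * PER_CHANNEL_KBITS
        print(f"{down_channels} 'down' channels -> ~{top_speed} Kbit/s")

    # 2 channels -> ~26 Kbit/s (SMG29-style handsets)
    # 4 channels -> ~52 Kbit/s (SMG31, comparable to a V.90 landline modem)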

GPRS is classified as a 2.5G (or 2G-plus) technology because it builds upon existing network infrastructure, whereas 3G normally requires building an entirely new network. In order to compete against 3G networks, North American operators have been looking to GPRS to provide high-speed data links, and manufacturers have been working on a related technology known as EDGE (Enhanced Data for Global Evolution). To compete with 3G, EDGE must offer links running at 384 Kbit/s; originally this equated to running GPRS three times faster, but because GPRS has proved much slower than expected, EDGE now needs to be seven times faster.

What is GPRS?


GPRS stands for General Packet Radio Service. It is a relatively low-cost technology that offers a packet-based radio service, allowing data or information to be sent and received across mobile telephone networks. It is designed to supplement existing mobile technologies such as GSM, CDMA, and TDMA.

What does GPRS do?


GPRS provides a permanent connection over which information can be sent or received immediately as the need arises, subject to radio coverage; no dial-up modem connection is necessary. This is why GPRS users are sometimes described as anytime-anywhere, 'always connected'. The GPRS tariff structure is based on a fixed cost, dependent on the quantity of data required. In other words, customers will be able to fix their operating costs without the concerns of variable billing.

Why GPRS?


At present, with circuit switching (as on an ordinary telephone line), in order to send or receive emails, transfer files, or browse WAP/Web sites it is first necessary to make a 'data' call. The call is answered by a modem or an ISDN adapter owned either by the network operator itself (such as BT Cellnet) or by an Internet Service Provider (ISP). Next the caller is 'authenticated' by giving a user ID and password, and is then assigned an Internet address by the ISP or operator. The whole process can take sixty seconds or more, and even at the end of this procedure the connection is slow: normally a mere 9.6 Kbit/s.


With a packet-switching technique such as GPRS, there is no call. Once the handset is powered on, the user is connected directly to the Internet at the press of a button, and the link is only broken when the handset is turned off; hence GPRS is known as an 'always on' connection. The fact that the link is continuous has one major benefit: it enables the ISP/operator to know a handset's Internet address, so messages can be passed directly over the Internet, from a PC for example, down to your handset. Crucially, this facility enables the Internet Service Provider to 'push' messages down to your handset, rather like an SMS message. The difference is that with GPRS the link is interactive; if you want to respond directly, for instance to instruct your broker to sell 500 shares, you can. One of the major criticisms aimed at WAP was that it lacked support for 'push' technologies. This failing has effectively been rectified via an update to the WAP standards (version 1.2) and the introduction of GPRS-enabled WAP handsets.



Gigabit Networking




Network bandwidth is increasing concurrently with CPU speeds. Where in the 1980s 10 Mbit/s Ethernet was considered fast, we now have 100 Mbit/s Ethernet, and bandwidth is approaching one billion bits per second (1 Gbit/s), largely thanks to research in the field of fibre-optic signalling.

The three main fields of data communications, telecommunications, and computing are undergoing a period of transition. The field of computing is rapidly advancing, with processor speeds doubling every year. The latest RAID (Redundant Arrays of Inexpensive Disks) systems have given rise to file systems with gigabit bandwidth.

The field of data communications, which facilitates the exchange of data between computing systems, has to keep up with the pace of these growing computing technologies. In the past, data communications provided services like e-mail; now applications such as virtual reality, video conferencing, and video-on-demand services are present.

For a century the telecommunications industry has been carrying voice traffic. This scenario is changing, with telephone networks carrying more data each year. The data carried by telephone networks is growing at 20% per year, compared to voice traffic, which is growing at only 3% per year, so data traffic will soon overtake voice traffic. All this has made the telecommunications industry far more interested in carrying data on its networks.
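To see why the overtaking is inevitable, a short Python sketch can compound the two growth rates quoted above; the starting traffic volumes are invented purely for illustration:

    # Data grows at 20% per year, voice at 3% (figures from the text).
    # The starting volumes are hypothetical, chosen only to illustrate.
    data, voice = 20.0, 100.0   # assume data starts at a fifth of voice
    years = 0
    while data < voice:
        data *= 1.20
        voice *= 1.03
        years += 1
    print(f"With these assumptions, data overtakes voice in ~{years} years")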

So the three communities are now converging, with the common interest of carrying more data at higher speeds. This has led to some joint activities, the most notable of which is the setting up of gigabit testbeds in the United States. Another joint activity is the standardization of ATM (Asynchronous Transfer Mode), a suite of communication protocols to support integrated voice, video, and data networks. Organizations doing research in gigabit networking include the National Coordination Office for HPCC (High Performance Computing and Communications), the Corporation for National Research Initiatives, and the IEEE Communications Society Technical Committee on Gigabit Networking.

When gigabit networking was on the horizon, many researchers felt that current knowledge about networking would not apply to gigabit networks, which are considerably faster than existing networks. Now, after several years of research, it has been found that many of the established strategies and techniques (such as protocol layering) still work in gigabit networks.

There are many working gigabit testbeds, and in five to ten years gigabit networks will become a reality. It is as yet unclear whether there will be a single gigabit technology with a specific standard protocol. More likely there will be many competing gigabit networking technologies (as with LAN technologies) and many protocols, with one of them eventually becoming the most popular (as IP did).

The next section deals with the key concepts and technologies in gigabit networking. The third section deals with more specific issues in gigabit networking. The fourth section discusses various potential gigabit applications. The last section reviews the current state of gigabit networking. Appendix A gives a list of gigabit testbeds, and Appendix B is an annotated bibliography of the web sites, articles, papers, and books referred to in this paper.



Bluetooth



What is Bluetooth?

What is it - a technology, a standard, an initiative, or a product?

Bluetooth wireless technology is a de facto standard, as well as a specification for small-form-factor, low-cost, short-range radio links between mobile PCs, mobile phones, and other portable devices. The Bluetooth Special Interest Group is an industry group consisting of leaders in the telecommunications, computing, and networking industries that is driving development of the technology and bringing it to market.

Why Bluetooth?

What will Bluetooth wireless technology deliver to end users?

It will enable users to connect a wide range of computing and telecommunications devices easily and simply, without the need to buy, carry, or connect cables. It delivers opportunities for rapid ad hoc connections and the possibility of automatic, unconscious connections between devices. It will virtually eliminate the need to purchase additional or proprietary cabling to connect individual devices, and because Bluetooth wireless technology can be used for a variety of purposes, it will also potentially replace multiple cable connections with a single radio link. It creates the possibility of using mobile data in new ways, for applications such as "surfing on the sofa", "the instant postcard", the "three-in-one phone", and many others. It will allow users to think about what they are working on, rather than how to make their technology work. The solution eliminates the annoying cable along with its limitations in flexibility (a cable is often specific to a brand or pair of devices) and range. But Bluetooth implies more than that: the technique provides the means for connecting several units to each other, such as setting up small radio LANs between any types of Bluetooth devices. A number of user scenarios have been described, and they highlight possibilities that reach far beyond the mere elimination of the point-to-point cable.



Smart Cards

Internet technologies, through intranet and extranet applications, have proven themselves to be efficient and effective in streamlining existing processes from supply chain management to manufacturing logistics, from marketing to customer asset management, and by creating new value chains and businesses. Nevertheless, these changes and benefits signal only an evolutionary shift in the way we do business. The Internet-enabled economy resembles the conventional physical market in many aspects. Some of the new technologies and applications may even be unnecessary. American consumers, for example, regard smart cards as a redundant payment mechanism when checks, credit cards and ATM cards do an adequate job for current needs. What is the use of smart cards? Do we really need them? Will they ever take off?

Today, the SIM card’s basic functionality in wireless communications is subscriber authentication and roaming. Although such features may be achieved via a centralized intelligent network (IN) solution or a smarter handset, there are several key benefits that could not be realized without the use of a SIM card, which is external to a mobile handset. These benefits—enhanced security, improved logistics, and new marketing opportunities—are key factors for effectively differentiating wireless service offerings. This tutorial assumes a basic knowledge of the wireless communications industry and will discuss the security benefits, logistical issues, marketing opportunities, and customer benefits associated with smart cards.

The smart card is one of the latest additions to the world of information technology (IT). The size of a credit card, it has an embedded silicon chip that enables it to store data and communicate via a reader with a workstation or network. The chip also contains advanced security features that protect the card’s data.

Smart cards come in two varieties: microprocessor and memory. Memory cards simply store data and can be viewed as small floppy disks with optional security. Memory cards depend on the security of a card reader for their processing. A microprocessor card can add, delete, and manipulate information in its memory on the card. It is like a miniature computer with an input and output port, operating system, and hard disk with built-in security features.

Smart cards have two different types of interfaces. Contact smart cards must be inserted into a smart-card reader; the reader makes contact with the card module's electrical connectors, which transfer data to and from the chip. Contactless smart cards are passed near a reader with an antenna to carry out a transaction. They have an electronic microchip and an antenna embedded inside the card, which allow the card to communicate without physical contact. Contactless cards are an ideal solution when transactions must be processed quickly, as in mass transit or toll collection.

A third category now emerging is a dual interface card. It features a single chip that enables a contact and contactless interface with a high level of security.

Two characteristics make smart cards especially well suited for applications in which security-sensitive or personal data is involved. First, because a smart card contains both the data and the means to process it, information can be processed to and from a network without divulging the card's data. Second, because smart cards are portable, users can carry data with them on the smart card rather than entrusting that information to network storage or a back-end server, where it could be sold or accessed by unknown persons.


A smart card can restrict the use of information to an authorized person with a password. However, if this information is to be transmitted by radio frequency or telephone lines, additional protection is necessary. One form of protection is ciphering (scrambling data). Some smart cards are capable of ciphering and deciphering, so stored information can be transmitted without compromising confidentiality. A card can choose from billions of possible cipher codes, picking a different one at random every time it communicates. This process ensures that only authenticated cards and computers are used, and it makes hacking or eavesdropping virtually impossible.
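The idea can be sketched as a simple challenge-response exchange. The Python fragment below is an illustration only, not any real card's protocol; the shared secret and the use of an HMAC are assumptions made for the example:

    # Minimal challenge-response sketch: the card proves it knows a
    # shared secret without ever transmitting the secret itself.
    import hashlib
    import hmac
    import os

    SHARED_SECRET = b"issuer-personalised-secret"  # hypothetical card secret

    def card_respond(challenge: bytes) -> bytes:
        # The card "ciphers" the reader's random challenge with its secret.
        return hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()

    def reader_authenticate() -> bool:
        challenge = os.urandom(16)          # a different challenge every time
        response = card_respond(challenge)  # travels over the contact/RF link
        expected = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(response, expected)

    print("card authenticated:", reader_authenticate())

Because a fresh random challenge is used for every session, a recorded response is useless to an eavesdropper, which is the property described above.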

The top five applications for smart cards throughout the world currently are as follows:

  1. public telephony—prepaid phone memory cards using contact technology
  2. mobile telephony—mobile phone terminals featuring subscriber identification and directory services
  3. banking—debit/credit payment cards and electronic purse
  4. loyalty—storage of loyalty points in retail and gas industries
  5. pay-TV—access key to TV broadcast services through a digital set-top box

The benefits of using smart cards depend on the application. In general, applications supported by smart cards benefit consumers where their lifestyles intersect with information access and payment-related processing technologies. These benefits include the ability to manage or control expenditures more effectively, reduce fraud and paperwork, and eliminate the need to complete redundant, time-consuming forms. The smart card also provides the convenience of having one card with the ability to access multiple services, networks, and the Internet.

Smart cards provide secure user authentication, secure roaming, and a platform for value-added services in wireless communications. Presently, smart cards are used mainly in the Global System for Mobile Communications (GSM) standard in the form of a SIM card. GSM is an established standard first developed in Europe. In 1998, the GSM Association announced that there were more than 100 million GSM subscribers. In the last few years, GSM has made significant inroads into the wireless markets of the Americas.

Initially, the SIM was specified as a part of the GSM standard to secure access to the mobile network and store basic network information. As the years have passed, the role of the SIM card has become increasingly important in the wireless service chain. Today, SIM cards can be used to customize mobile phones regardless of the standard (GSM, personal communications service [PCS], satellite, digital cellular system [DCS], etc.).

Today, the SIM is the major component of the wireless market, paving the way to value-added services. SIM cards now offer new menus, prerecorded numbers for speed dialing, and the ability to send presorted short messages to query a database or secure transactions. The cards also enable greeting messages and company logotypes to be displayed.

Other wireless communications technologies rely on smart cards for their operations. Satellite communications networks (Iridium and Globalstar) are chief examples. Eventually, new networks will have a common smart object and a universal identification module (UIM), performing functions similar to SIM cards.



Mobile IP




Many organizations use traditional wire-based networking technologies to establish connections among computers. These technologies fall into three main categories: LAN, MAN, and WAN.


These traditional networking technologies offer tremendous capabilities from an office, hotel room, or home. Activities such as communicating via e-mail with someone located in a faraway town, or conveniently accessing product information from the World Wide Web, are the result of widespread networking. But networking through a wire-based system has its limits: you cannot use these network services unless you are physically connected to a LAN or a telephone system.


Wireless networks are spreading day by day; with the increasing number of mobile users, wireless technology has become inevitable. Wireless networking is the first step towards a mobile communication system. Just as wireless networks use certain protocols for communication, we also need protocols for mobile communication. The relevant protocol here is Mobile IP, the Mobile Internet Protocol.

The day will arrive, hastened by Mobile IP, when no person will ever feel "lost" or out of touch. As people move from place to place with their laptops, keeping connected to the network can become a challenging, sometimes frustrating, and expensive proposition. The goal is that, with widespread deployment of the mobile networking technologies described here, automatic communication with globally interconnected computing resources will be considered as natural for people on the move as it is for people sitting at a high-performance workstation in their office. In the near future, communicating via laptop should be as natural as using the telephone.

Although the Internet offers access to information sources worldwide, typically we do not expect to benefit from that access until we arrive at some familiar point, whether home, office, or school. However, the increasing variety of wireless devices offering IP connectivity, such as personal digital assistants, handhelds, and digital cellular phones, is beginning to change our perceptions of the Internet.

Mobile IP is a proposed standard protocol that builds on the Internet Protocol by making mobility transparent to applications and higher-level protocols like TCP. This paper aims at discussing the design principles of Mobile IP and how it can be incorporated with the already existing Internet architecture.


The Mobile Internet Protocol is a newly recommended Internet protocol designed to support the mobility of a user (host). Host mobility is becoming important because of the recent blossoming of laptop computers and the strong desire to have continuous network connectivity wherever the host happens to be. The development of Mobile IP makes this possible.


There are three main processes in Mobile IP:

1. Agent Discovery: the process by which a mobile node determines its current location and obtains a care-of address.

2. Registration: the process by which a mobile node requests service from a foreign agent on a foreign link and informs its home agent of its current care-of address.

3. Tunneling: the specific mechanism by which packets are routed to and from a mobile node that is connected to a foreign link.

Mobile computing is becoming increasingly important due to the rise in the number of portable computers and the desire to have continuous network connectivity to the Internet irrespective of the physical location of the node. The Internet infrastructure is built on top of a collection of protocols called the TCP/IP protocol suite; Transmission Control Protocol (TCP) and Internet Protocol (IP) are the core protocols in this suite. IP requires the location of any host connected to the Internet to be uniquely identified by an assigned IP address. This raises one of the most important issues in mobility: when a host moves to another physical location, it has to change its IP address, yet the higher-level protocols require the IP address of a host to be fixed in order to identify connections.

The Mobile Internet Protocol (Mobile IP) is an extension to the Internet Protocol proposed by the Internet Engineering Task Force (IETF) that addresses this issue. It enables mobile computers to stay connected to the Internet regardless of their location and without changing their IP address.

Mobile IP specifies enhancements that allow transparent routing of IP datagrams to mobile nodes in the Internet. Each mobile node is always identified by its home address, regardless of its current point of attachment to the Internet. While situated away from its home, a mobile node is also associated with a care-of address, which provides information about its current point of attachment to the Internet. The protocol provides for registering the care-of address with a home agent. The home agent sends datagrams destined for the mobile node through a tunnel to the care-of address. After arriving at the end of the tunnel, each datagram is then delivered to the mobile node.
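The tunneling step can be pictured with a small Python sketch in which plain dictionaries stand in for IP headers. The addresses are invented examples, and the code illustrates only the idea of IP-in-IP encapsulation, not the real packet encoding:

    # Conceptual Mobile IP tunneling: the home agent wraps a datagram
    # addressed to the home address inside an outer packet addressed to
    # the mobile node's current care-of address.
    HOME_ADDRESS = "198.51.100.7"     # the node's permanent home address
    CARE_OF_ADDRESS = "203.0.113.42"  # assigned on the foreign link

    def home_agent_encapsulate(datagram):
        # The original datagram becomes the payload of the outer packet.
        return {"src": "home-agent", "dst": CARE_OF_ADDRESS,
                "payload": datagram}

    def tunnel_exit_decapsulate(tunnelled):
        # At the end of the tunnel the outer header is stripped and the
        # original datagram is delivered to the mobile node.
        return tunnelled["payload"]

    original = {"src": "correspondent", "dst": HOME_ADDRESS, "payload": "hi"}
    delivered = tunnel_exit_decapsulate(home_agent_encapsulate(original))
    assert delivered == original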

Regardless of movement between different networks, connectivity at the different points of attachment is achieved easily, and roaming from a wired network to a wireless or wide-area network is also done with ease. Mobile IP is a part of both IPv4 and IPv6.

Finally, the paper looks at the core differences between the present protocol, IPv4, and the future protocol, IPv6, such as scalability, security, real-time support, plug and play, clearer specification, and optimizations. It then covers the difference between the header schemes of IPv4, the currently used protocol, and IPv6, the upcoming sensation of the Internet world, and closes with the advantages of IPv6 over IPv4.






Global Mobile Satellite System


To make a satellite phone call today from a location that offers no terrestrial wireline or wireless coverage requires the use of a large, costly terminal, and it entails very high per-minute charges. Further, the quality of service is relatively poor because of the annoying echoes, large transmission delays, and over-talk associated with satellite communications using geostationary satellites.


There is a trend in mobile satellite system architectures towards the deployment of multi-satellite constellations in non-geostationary earth orbits. This allows the user terminals to be small, low-cost, and low-power. In present and next-generation systems, CDMA has been proposed as the multiple-access technique for a number of mobile satellite communication systems. To enhance coverage and quality of service, low earth orbit (LEO) constellations are usually selected. Here, we analyze the performance of the downlink of a LEO satellite channel. The provision of such a service requires that the user have sufficient link quality for the duration of the service; that is, the user must have adequate power to overcome the path loss and other physical impairments, so as to provide acceptable communication and improve the performance of the system.


Thus, in many parts of the world, the demand for communications mobility can be met effectively only through global mobile satellite services. Handheld satellite phones are therefore forecast as the emerging mobile communications frontier, with growth that could parallel the recent growth of the cellular mobile industry.
In order to guarantee service quality and reliability for mobile satellite communication systems, we have to take into account outages due to obstruction of the line-of-sight path between a satellite and a mobile terminal, as well as the signal fluctuation caused by interference from multipath radio waves. Thus, we need a good characterization of the satellite propagation channel. It is commonly accepted that satellite communications systems (in particular, low earth orbit (LEO) systems) are the de facto solution for providing true personal communications services (PCS) to users, whether stationary or on the move, anywhere, anytime, and in any format (voice, data, and multimedia).


The satellite segment is a network of GEO or LEO satellites arranged in orbital planes (i.e., different parts of the sky) in such a way that they have communications links with end-user equipment, ground gateways, and other satellites. The gateway connects the satellites to the local telephone network; it also transmits signals to the satellites and receives transmissions from them. Due to the high mobility of low earth orbit (LEO) satellites, there is a significant number of handover attempts in a LEO-based mobile satellite communication system, causing a high handover failure rate. This paper proposes extending the period during which a handover request remains valid, thus yielding a higher probability of successful handover.
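A toy Python simulation can show why a longer validity period helps; every number in it is invented simply to illustrate the trend, not taken from any real LEO system:

    # Each time step, a channel in the target beam frees up with
    # probability p_free; a handover request succeeds if that happens
    # before its validity period expires.
    import random

    def success_rate(validity_steps, p_free=0.15, trials=100_000):
        successes = sum(
            any(random.random() < p_free for _ in range(validity_steps))
            for _ in range(trials)
        )
        return successes / trials

    for period in (1, 3, 6):
        print(f"validity = {period} steps -> success ~ {success_rate(period):.0%}")

Extending the window gives a queued request more opportunities to find a free channel, which is precisely the effect the proposal relies on.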


Satellite communication service can be provided by geostationary earth orbit (GEO), medium earth orbit (MEO), or low earth orbit (LEO) satellites. Because of its much shorter distance from earth, lower power requirement, and thus smaller mobile terminal (MT) size, a LEO satellite system is the preferable choice. Differences between satellite and terrestrial systems exist in spite of their common objectives of high-quality service and excellent spectrum efficiency. Some differences arise because:

  1. user costs are closely related to satellite transmit power
  2. the satellite propagation channel is highly predictable
  3. satellite paths introduce significant propagation delays and Doppler shifts
  4. frequency co-ordination has to be on a global basis
  5. frequency re-use options are more limited, hence bandwidth is a tight constraint
  6. satellite beam shaping and sizing opportunities are limited


The most significant attribute of any satellite communication system is the wide area coverage that can be provided with very high guarantees of availability and consistency of service. Satellite communication systems are designed to provide voice, data, fax, paging, video conferencing and internet services to users worldwide. Through satellite based systems, users will be able to make a phone call from an African safari or while sailing around the world. No matter where users are, they will be able to communicate with clients, customers, associates, friends, and family anywhere in the world. In addition, satellite communications will allow countries to provide phone services without large investments in landline or wireless systems. Satellite communications will be one of the fastest growing areas within the communications industry.






Biometrics



Biometrics is a modern security system that uses a person's biological features to grant access rights. Biological features such as fingerprints, voice prints, iris patterns, face prints, dynamic signatures, the retina, hand geometry, ear form, DNA, odour, keystroke dynamics, finger geometry, and the vein structure of the back of the hand are used, so no unauthorized person can get at the protected information or assets. Today this technique is widely used to prevent illegal operations. It is a user-friendly technique that has been accepted in almost all fields.

The problem of personal identification has become a serious matter in today's world. Biometrics, which means identity recognition based on biological features, provides a convenient and reliable solution to this problem. The recognition technology is relatively new and has many significant advantages, such as speed, accuracy, hardware simplicity, and broad applicability.

Biometrics is a means of identifying a person by measuring a particular physical or behavioral characteristic and later comparing it to a library of characteristics belonging to many people. Biometric systems have two advantages over traditional ID methods. First, the person to be identified does not have to present anything but himself. Second, the critical variable for identification cannot be lost or forged. Retinal identification is the most accurate of the biometric methods in use at this time. It may well replace traditional ID methods such as PINs for accessing ATMs and virtually every other electronic device used for conducting business where identification is a requirement and prerequisite.
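The comparison step can be sketched in a few lines of Python; the bit-string templates and the acceptance threshold below are invented for illustration and do not correspond to any real biometric encoding:

    # Match a captured template against an enrolled library using a
    # normalised Hamming distance and an acceptance threshold.
    def hamming_distance(a, b):
        return sum(x != y for x, y in zip(a, b)) / len(a)

    enrolled = {
        "alice": "1011010011101001",   # hypothetical stored templates
        "bob":   "0100110110010110",
    }
    THRESHOLD = 0.25  # accept only if fewer than 25% of bits differ

    def identify(sample):
        best = min(enrolled, key=lambda n: hamming_distance(sample, enrolled[n]))
        return best if hamming_distance(sample, enrolled[best]) < THRESHOLD else None

    print(identify("1011010011101101"))  # close to alice's template -> alice
    print(identify("1111111100000000"))  # matches nobody well -> None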

Since the arrival of information technology (IT), this technique has been used together with computers, and this combination gives excellent results.






DNA Computers



A DNA computer can store billions of times more information than your PC hard drive and solve complex problems in less time. Computer chip manufacturers are racing to make the next microprocessor ever faster, but microprocessors made of silicon will eventually reach their limits of speed and miniaturization. Chip makers need a new material to produce faster computing speeds.


To understand DNA computing, let us first examine how a conventional computer processes information. A conventional computer performs mathematical operations by using electrical impulses to manipulate zeroes and ones on silicon chips. A DNA computer is based on the fact that information is "encoded" within deoxyribonucleic acid (DNA) as patterns of molecules known as nucleotides. By manipulating how the nucleotides combine with each other, the DNA computer can be made to process data. The branch of computing dealing with DNA computers is called DNA computing.


The concept of DNA computing was born in 1993, when Professor Leonard Adleman, a mathematician specializing in computer science and cryptography, accidentally stumbled upon the similarities between conventional computers and DNA while reading a book by James Watson. A little more than a year after this, in 1994, Adleman, a professor at the University of Southern California, created a storm of excitement in the computing world when he announced that he had solved a famous computational problem. This computer solved a version of the travelling salesman problem, also known as the "Hamiltonian path" problem, which is explained later. DNA was shown to have massively parallel processing capabilities that might allow a DNA-based computer to solve hard computational problems in a reasonable amount of time.

There was nothing remarkable about the problem itself, which dealt with finding a route through a series of points that passes through each point exactly once. Nor was there anything special about how long it took Adleman to solve it (seven days), substantially longer than the few minutes it would take an average person to find a solution. What was exciting about Adleman's achievement was that he had solved the problem using nothing but deoxyribonucleic acid (DNA) and molecular chemistry.
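A conventional computer can mimic Adleman's generate-and-filter approach, though without the massive parallelism of molecules in a test tube. The Python sketch below enumerates every candidate ordering of vertices and keeps only genuine Hamiltonian paths from the start vertex to the end vertex; the small directed graph is an invented example, not Adleman's own:

    # Generate-and-filter search for a Hamiltonian path, the style of
    # computation Adleman's DNA experiment performed chemically.
    from itertools import permutations

    EDGES = {(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (2, 4), (3, 4)}
    VERTICES = range(5)
    START, END = 0, 4

    def is_path(path):
        # Keep a candidate only if consecutive vertices are joined by edges.
        return all((a, b) in EDGES for a, b in zip(path, path[1:]))

    solutions = [p for p in permutations(VERTICES)
                 if p[0] == START and p[-1] == END and is_path(p)]
    print(solutions)  # -> [(0, 1, 2, 3, 4)]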




Internet Protocol Version 6




This paper discusses the upcoming trends and technologies of the Internet world. It explores the topic of IPv6, Internet Protocol version 6.

We say "Internet Protocol version 6", but what is this protocol business? We first get familiar with what protocols are, and then begin an in-depth study of IPv6. Like its previous versions, IPv6 is all set to make waves in the Internet world. Here we will see what IPv6 is, why IPv6 was chosen, and why we need it. The paper explains the problems of the current IP we are using and how it is wearing out with time and the increase in the number of users.

In addition, the paper explores a bit of the history of the Internet protocols and the forums that are devoting their attention to these developments. It covers the rules used for the addressing methods in IPv6, including the compatibility of IPv4 addresses with IPv6 and how an IPv6 address can be converted into a present-day IPv4 address to identify a node.

The paper then gives a brief idea of the key features of IPv6, presenting IPv6 as a re-engineering effort applied to IP technology. These features include larger IP addresses and their use for end-to-end communication; stateless host auto-configuration; mandatory security; friendliness to future traffic technologies; mandatory multicast; better support for ad-hoc networking by facilitating anycast; relief for routing tables; a simpler header structure; flexible protocol extensions that permit hardware acceleration; a smooth transition from IPv4, showing its compatibility with IPv4; and adherence to the key design principles of IPv4.

It then describes some of the objectives of this protocol, followed by a detailed look at the addressing scheme and the notation for the protocol's addresses.

Before we get to anything else, we need to learn about the most important working part of any protocol: the header, which carries all the workings and terminology along with the characteristics of the protocol. The different fields present in the header format of the protocol are discussed with their descriptions and functionality. First come Version, Class, and Flow Label, covering the first set of 32 bits. Next come Payload Length, Next Header, and Hop Limit, which cover another 32 bits of header space. Next is the mandatory and inevitable source address of 128 bits, and at the very end of the header comes the 128-bit destination address.
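To make that layout concrete, the short Python sketch below packs the 40-byte IPv6 fixed header exactly as described, field by field; the addresses and field values are arbitrary examples:

    # Pack the IPv6 fixed header: 4-bit version, 8-bit class, 20-bit
    # flow label, then payload length, next header, hop limit, and the
    # two 128-bit addresses.
    import ipaddress
    import struct

    version, traffic_class, flow_label = 6, 0, 0x12345
    payload_length, next_header, hop_limit = 1280, 6, 64  # 6 = TCP

    first_word = (version << 28) | (traffic_class << 20) | flow_label

    header = struct.pack(
        "!IHBB16s16s",
        first_word,
        payload_length, next_header, hop_limit,
        ipaddress.IPv6Address("2001:db8::1").packed,  # 128-bit source
        ipaddress.IPv6Address("2001:db8::2").packed,  # 128-bit destination
    )
    print(len(header), "bytes")  # -> 40 bytes, the fixed header size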

Covered next is the difference between the header schemes of IPv4, the currently used protocol, and IPv6, the upcoming sensation of the Internet world. The topic presents all the basic differences between the two headers and clarifies which one is simpler. It also shows how the avoidable parts of the present header have been removed in IPv6.

Then comes a brief look at what an extension header is and how it is used. Here the IPv6 extension headers are discussed, including the TCP header, routing header, fragment header, and others; all the types of extension headers needed for the transmission or implementation of IPv6 are covered. The issues related to the size of packets for transmission using IPv6 are then described.

Security being one of the important features, the Authentication Header is then explained, along with how it is used to protect users against Internet spoofing. There follows a description of the Encapsulating Security Payload (ESP) header and how it helps with confidentiality and authentication, and of course the problems faced by the IPv6 security features.

The world is going mobile, and so mobile communication draws deep attention in Internet development. There is a brief description of Mobile IP and its functionality, which is very different from the fixed Internet because processes such as tunneling, registration, and agent discovery are used when an IP holder changes subnet away from the home network. There is a description of how IPv4 is used for mobile communication, how the features of IPv6 can aid this technology, and why IPv6 is better than IPv4 for mobile communication. Then Mobile IPv4 and Mobile IPv6, as used by a roaming node in the network, are described. Protocols are not limited to one type of transmission technology, so the protocol's service to GPRS and WCDMA mobile networks is also covered.

The topic then covers IPv6 mobility in 2G and 3G mobile networks. For adaptability there is a need to convert data from IPv6 networks to IPv4 networks, and hence there is a description of 6to4 migration: how this conversion takes place and what the benefits of the conversion are. The paper then tells of IPv6's support for 3G networks, IPv6 being the only IP version known to be capable of this. Next in mobile connections comes the tunneling of IPv6 over IPv4. The process is tedious for traditional IPv4 networks and holds a lot of importance for communication from different places by a single user, that is, for roaming on our mobile devices. IPv6 in mobile packet networks is covered as well: the size of packets in mobile networks and their scheduling and switching hold importance for communication purposes and are thus covered in the topic.
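The 6to4 mapping itself is mechanical: the 32 bits of a site's public IPv4 address are placed directly after the well-known 2002::/16 prefix, giving the site a /48 block of IPv6 space. A few lines of Python illustrate the derivation:

    # Derive a site's 6to4 IPv6 prefix from its public IPv4 address.
    import ipaddress

    def six_to_four_prefix(ipv4):
        value = int(ipaddress.IPv4Address(ipv4))
        high, low = value >> 16, value & 0xFFFF
        return f"2002:{high:04x}:{low:04x}::/48"

    print(six_to_four_prefix("192.0.2.4"))  # -> 2002:c000:0204::/48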

If you are going to use something, you should be aware of its advantages, and so the paper covers the advantages of IPv6 over IPv4. The core differences between the present protocol, IPv4, and the future protocol, IPv6, such as scalability, security, real-time support, plug and play, clearer specification, and optimizations, are given a look.

There are two sides to every coin, and so the dark sides of this protocol are covered in brief. These include incompatibility (IPv6 addressing is an alternative to, rather than a straightforward extension of, our conventional IPv4 addressing), incoherence (the designers do not have a complete transition plan, and IPv6 addresses cannot yet work as well as IPv4 addresses), and distraction (IPv6 addresses cannot yet be generalized or made public, and cannot be used to reach all the same places that IPv4 addresses reach).

The future of the industry cannot be changed by a single protocol all by itself; it needs other technologies to accompany it and change the trends and traditions. One such friend of IPv6 is Internet2, which is discussed in detail: what Internet2 is and how it differs from the present Internet, the use of Internet2 and IPv6 together, the symbiosis between the two and the benefits of their joint use, and the connectivity between Internet2 and IPv6.

The next friendly hand is QoS, or Quality of Service, which concerns the quality with which data is transmitted over the network. The paper examines the facilities of the present IP in this respect and the flaws in them, and shows the prospects and facilities for delivering quality to the end user of data services, including services such as satellite communication for data transmission.

The paper briefly discusses the technical giants of the industry who are looking forward to using the technology. Asian giants like India, Japan, and China are already looking to clutch the technology in their hands and get to the top of the market with it. Their latest interests in the technology and their research in the area are described, along with the reasons for using the technique and the need these countries have for IPv6.

Finally, the actual, practical implementation of this system is not very easy. We are talking about re-connecting the entire globe, over which it already took IPv4 decades to establish its reign. IPv6 poses some daunting questions for network managers, which are considered the biggest hurdles in the way of IPv6's reign over the market. For instance, what is the best way to make the transition while maintaining backward compatibility with all those systems still running IPv4? What about renumbering networks, not to mention buying, installing, and configuring all of that new IP software?

At last we end our discussion by specifying the future prospects and implementations and the time left until the industrial establishment of the technology, along with the future developments and the areas of improvement still needed before it becomes fully general. It takes a huge amount of time to remove or change any of the conventional protocols, and IPv6 will require help from other technologies, such as Internet2, to completely overcome all of its flaws.

