1.IBOC TECHNOLOGY
The engineering world has been working on the development and evaluation of IBOC transmission for some time. The NRSC began evaluation proceedings on DAB systems in general in 1995. After the proponents merged into one company, iBiquity was left in the running for potential adoption. In the fall of 2001, the NRSC issued a report on iBiquity's FM IBOC. This comprehensive report runs 62 pages of engineering material plus 13 appendices, covering the whole system, including its blend-to-analog operation as signal levels change. The application of FM IBOC has been studied by the NRSC and appears to be understood and accepted by radio engineers. AM IBOC has recently been studied by an NRSC working group as a prelude to its adoption for general broadcast use; its report was presented during the NAB convention in April. The FM report covers eight areas of performance of vital concern to broadcaster and listener alike. The promise of IBOC will be realized if all of these concerns can be met as successfully by AM IBOC, and if the receiver manufacturers rally to develop and produce the necessary receiving equipment. The evaluated FM concerns were audio quality, service area, acquisition performance, durability, auxiliary data capacity, behavior as the signal degrades, stereo separation, and flexibility. The FM report paid strong attention to the use of SCA services on FM IBOC: about half of all operating FM stations employ one or more SCAs for reading services for the blind or similar services. Before describing the FM IBOC system, it is important to discuss the basic principles of digital radio and of IBOC technology; the following sections take up these topics.
2.SPINTRONICS
Spintronics may be a fairly new term, but the concept is not so very exotic. This technological discipline aims to exploit a subtle, mind-bending quantum property of the electron - its spin - to develop a new generation of electronic devices. The ability to exploit spin in semiconductors promises new logic devices, such as the spin transistor, with enhanced functionality, higher speed, and reduced power consumption, and it might spark a revolution in the semiconductor industry. So far, the problem of injecting electrons with a controlled spin direction has held up the realization of such spintronic devices.
Spintronics is an emergent technology that exploits the quantum propensity of the electrons to spin as well as making use of their charge state. The spin itself is manifested as a detectable weak magnetic energy state characterised as "spin up" or "spin down".
Conventional electronic devices rely on the transport of electrical charge carriers - electrons - in a semiconductor such as silicon. Device engineers and physicists are now trying to exploit the spin of the electron rather than its charge. Spintronic devices combine the advantages of magnetic materials and semiconductors. They are expected to be non-volatile, versatile, fast, and capable of simultaneous data storage and processing, while at the same time consuming less energy. Spintronic devices are playing an increasingly significant role in high-density data storage, microelectronics, sensors, quantum computing, and biomedical applications.
3.FINFET TECHNOLOGY
Since the fabrication of the first MOSFET, the minimum channel length has been shrinking continuously. The motivation behind this decrease has been an increasing interest in high-speed devices and in very large-scale integrated circuits. The sustained scaling of conventional bulk devices requires innovations to circumvent the barriers of fundamental physics constraining the conventional MOSFET device structure. The limits most often cited are control of the density and location of the dopants that provide a high Ion/Ioff ratio and a finite subthreshold slope, and quantum-mechanical tunneling of carriers through the thin gate oxide, from source to drain, and from drain to body.
The channel depletion width must scale with the channel length to contain the off-state leakage Ioff. This leads to a high doping concentration, which degrades the carrier mobility and causes junction edge leakage due to tunneling. Furthermore, the dopant profile control, in terms of depth and steepness, becomes much more difficult. The gate oxide thickness tox must also scale with the channel length to maintain gate control, proper threshold voltage VT, and performance. The thinning of the gate dielectric results in gate tunneling leakage, degrading the circuit performance, power, and noise margin.
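The subthreshold-slope limit mentioned above can be made concrete with the textbook expression SS = n(kT/q)ln(10), which bounds how sharply a MOSFET can turn off. The following Python sketch is purely illustrative (the ideality factor n = 1 and the 300 mV threshold voltage are assumed values, not data from any process):

```python
import math

def subthreshold_swing_mV_per_decade(n=1.0, T=300.0):
    """Subthreshold swing SS = n * (kT/q) * ln(10), in mV/decade."""
    k = 1.380649e-23     # Boltzmann constant, J/K
    q = 1.602176634e-19  # elementary charge, C
    return n * (k * T / q) * math.log(10) * 1e3

def on_off_decades(vt_mV, ss_mV_per_dec):
    """Decades separating Ion (at VG = VT) from Ioff (at VG = 0)."""
    return vt_mV / ss_mV_per_dec

ss = subthreshold_swing_mV_per_decade()  # ~59.5 mV/decade at 300 K
print(f"SS = {ss:.1f} mV/decade")
print(f"Ion/Ioff spans ~{on_off_decades(300, ss):.1f} decades for VT = 300 mV")
```

Because SS cannot drop below roughly 60 mV/decade at room temperature, lowering VT for speed directly erodes the Ion/Ioff ratio - the trade-off driving the leakage problems described above.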
Alternative device structures based on silicon-on-insulator (SOI) technology have emerged as an effective means of extending MOS scaling beyond bulk limits for mainstream high-performance or low-power applications. Partially depleted (PD) SOI was the first SOI technology introduced for high-performance microprocessor applications. The ultra-thin-body fully depleted (FD) SOI and the non-planar FinFET device structures promise to be the potential "future" technology/device choices. In these device structures, the short-channel effect is controlled by geometry, and the thin Si film limits the off-state leakage. For effective suppression of the off-state leakage, the thickness of the Si film must be less than one quarter of the channel length. The desired VT is achieved by manipulating the gate work function, such as through the use of a midgap material or poly-SiGe. Concurrently, material enhancements, such as the use of a) high-k gate material and b) a strained-Si channel for mobility and current-drive improvement, have been actively pursued. As scaling approaches multiple physical limits and as new device structures and materials are introduced, unique and new circuit design issues continue to be presented. In this article, we review the design challenges of these emerging technologies, with particular emphasis on the implications and impacts of individual device scaling elements and unique device structures on circuit design. We focus on the planar device structures, from continuous scaling of PD SOI to FD SOI, and on new materials such as the strained-Si channel and high-k gate dielectric.
4.OPTICAL COHERENCE TOMOGRAPHY
The goal of this investigation is to use Optical Coherence Tomography (OCT) to image epileptic lesions in cortical tissue from rats. Such images would be immensely useful for surgical purposes: they would detail how deep a lesion is, allowing for precise removal that neither removes an insufficient amount of damaged tissue nor extracts too much healthy tissue.
Though commercial OCT systems already exist, they typically do not scan very deeply beneath sample surfaces. For the purpose of this study, a system must be constructed that scans up to 2 millimeters into tissue [1]. Unfortunately, an increase in axial depth necessitates a decrease in transverse (along the surface of the sample) resolution, due to focal restrictions of the objective lenses [2]. However, this loss is acceptable for this investigation, as the main goal is to determine lesion depth and not to achieve perfect image clarity.
The ability to detect the positional delay of light reflecting from a tissue sample is at the heart of OCT, and low-coherence interferometry provides just that. A low-coherence light source has the potential to produce interference fringes only when integrated with light from the same source that has traveled very nearly the same distance [3]. This means that if light from such a source is split by a beam splitter into two equal parts, and both parts reflect off of different objects and then combine to form one beam again, they will produce an interference fringe pattern only if the distances they traveled while split are almost exactly the same.
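The distance-matching condition can be sketched numerically. Assuming a Gaussian coherence envelope and illustrative numbers (1.3 µm center wavelength, 10 µm coherence length - assumptions for the sketch, not measurements from this study), a normalised two-beam detector intensity looks like:

```python
import math

def fringe_intensity(delta_z_um, wavelength_um=1.3, coherence_len_um=10.0):
    """Normalised detector intensity for a two-arm interferometer with a
    Gaussian-spectrum (low-coherence) source.  Fringes survive only while
    the path-length mismatch delta_z stays within about one coherence
    length of zero."""
    envelope = math.exp(-(delta_z_um / coherence_len_um) ** 2)  # coherence envelope
    phase = 4 * math.pi * delta_z_um / wavelength_um            # round-trip phase
    return 1.0 + envelope * math.cos(phase)  # equal-intensity arms, normalised

print(fringe_intensity(0.0))   # -> 2.0 (matched paths: full-contrast fringe)
print(fringe_intensity(50.0))  # envelope ~ 0: fringes vanish, intensity ~ 1.0
```

Scanning the reference-arm length therefore selects which depth inside the tissue produces fringes, which is how OCT localises reflections.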
5.ROBOTIC SURGERY
The field of surgery is entering a time of great change, spurred on by remarkable recent advances in surgical and computer technology. Computer-controlled diagnostic instruments have been used in the operating room for years to help provide vital information through ultrasound, computer-aided tomography (CAT), and other imaging technologies. Only recently have robotic systems made their way into the operating room as dexterity-enhancing surgical assistants and surgical planners, in answer to surgeons' demands for ways to overcome the limitations of minimally invasive laparoscopic surgery. A robotic surgical system enables surgeons to remove gallbladders and perform other general surgical procedures while seated at a computer console and 3-D video imaging system across the room from the patient. The surgeons operate controls with their hands and fingers to direct a robotically controlled laparoscope. At the end of the laparoscope are advanced, articulating surgical instruments and miniature cameras that allow surgeons to peer into the body and perform the procedures. Now imagine: an army ranger is riddled with shrapnel deep behind enemy lines. Diagnostics from wearable sensors signal a physician at a nearby mobile army surgical hospital that his services are needed urgently. The ranger is loaded into an armored vehicle outfitted with a robotic surgery system. Within minutes, he is undergoing surgery performed by the physician, who is seated at a control console 100 kilometers out of harm's way. The patient is saved. This is the power that the amalgamation of technology and the surgical sciences is offering doctors. Just as computers revolutionized the latter half of the 20th century, the field of robotics has the potential to equally alter how we live in the 21st century. We've already seen how robots have changed the manufacturing of cars and other consumer goods by streamlining and speeding up the assembly line.
We even have robotic lawn mowers and robotic pets now. And robots have enabled us to see places that humans are not yet able to visit, such as other planets and the depths of the ocean. In the coming decades, we will see robots with artificial intelligence that come to resemble the humans who create them. They will eventually become self-aware and conscious, and be able to do anything that a human can. When we talk about robots doing the tasks of humans, we often talk about the future, but the future of robotic surgery is already here.
6.CELLONICS TECHNOLOGY
In digital communications, Cellonics™ offers a fundamental change to the way modem solutions have traditionally been designed and built. Cellonics™ technology introduces a simple and swift Carrier-Rate Decoding™ solution to the receiving and decoding of a modulated signal. It encodes and decodes signals at one symbol per cycle - a feature not found elsewhere. Its simplicity promises to render obsolete the superheterodyne receiver design that has been in use since its invention by Major Edwin Armstrong in 1918. In fact, according to one estimate, 98% of the world's radio systems are still based on this superhet design.
Cellonics Inc. has invented and patented a number of circuits that mimic biological cell behavior. The Cellonics™ circuits are incredibly simple, with the advantages of low cost, low power consumption, and small size. When applied to communication, the Cellonics™ technology is a fundamental modulation and demodulation technique. The Cellonics™ receivers are devices that generate pulses from the received analog signal and perform demodulation based on pulse counting.
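Cellonics' actual circuits are analog and rely on nonlinear dynamics, but the idea of demodulation based on pulse counting can be illustrated with a deliberately simple toy in Python (the function name and the count-to-symbol mapping are hypothetical, not Cellonics' implementation):

```python
def demodulate_by_pulse_count(pulse_trains, count_to_symbol):
    """Toy pulse-counting demodulator: each symbol period produces a burst
    of pulses, and the number of pulses in the burst selects the symbol."""
    return [count_to_symbol[len(burst)] for burst in pulse_trains]

# Hypothetical mapping: 1 pulse per period -> bit 0, 2 pulses -> bit 1
mapping = {1: 0, 2: 1}
bursts = [[0.1], [0.1, 0.6], [0.2], [0.3, 0.8]]  # pulse timestamps per symbol period
print(demodulate_by_pulse_count(bursts, mapping))  # -> [0, 1, 0, 1]
```

The attraction claimed for the real circuits is that this decoding happens at the carrier rate, with no mixers or intermediate-frequency stages.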
9. E-INTELLIGENCE
As corporations move rapidly toward deploying e-business systems, the lack of business intelligence facilities in these systems prevents decision-makers from exploiting the full potential of the Internet as a sales, marketing, and support channel. To solve this problem, vendors are rapidly enhancing their business intelligence offerings to capture the data flowing through e-business systems and integrate it with the information that traditional decision-making systems manage and analyze. These enhanced business intelligence - or e-intelligence - systems may provide significant business benefits to traditional brick-and-mortar companies as well as new dot-com ones as they build e-business environments. Organizations have been successfully using decision-processing products, including data warehouse and business intelligence tools, for the past several years to optimize day-to-day business operations and to leverage enterprise-wide corporate data for a competitive advantage. The advent of the Internet and corporate extranets has propelled many of these organizations toward the use of e-business applications to further improve business efficiency, decrease costs, and increase revenues - and to compete with the new dot-com companies appearing in the marketplace. The explosive growth in the use of e-business has led to the need for decision-processing systems to be enhanced to capture and integrate business information flowing through e-business systems. These systems also need to be able to apply business intelligence techniques to this captured business information. These enhanced decision-processing systems, or e-intelligence systems, have the potential to provide significant business benefits to both traditional bricks-and-mortar companies and new dot-com companies as they begin to exploit the power of e-business processing.
E-INTELLIGENCE FOR BUSINESS
E-intelligence systems provide internal business users, trading partners, and corporate clients rapid and easy access to the e-business information, applications, and services they need in order to compete effectively and satisfy customer needs. They offer many business benefits to organizations in exploiting the power of the Internet. For example, e-intelligence systems give the organization the ability to:
1.Integrate e-business operations into the traditional business environment, giving business users a complete view of all corporate business operations and information.
2.Help business users make informed decisions based on accurate and consistent e-business information that is collected and integrated from e-business applications. This business information helps business users optimize Web-based offerings (products offered, pricing and promotions, service and support, and so on) to match marketplace requirements and analyze business performance with respect to competitors and the organization's business-performance objectives.
3.Assist e-business applications in profiling and segmenting e-business customers. Based on this information, businesses can personalize their Web pages and the products and services they offer.
4.Extend the business intelligence environment outside the corporate firewall, helping the organization share internal business information with trading partners. Sharing this information will let it optimize the product supply chain to match the demand for products sold through the Internet and minimize the costs of maintaining inventory.
5.Extend the business intelligence environment outside the corporate firewall to key corporate clients, giving them access to business information about their accounts. With this information, clients can analyze and tune their business relationships with other organizations, improving client service and satisfaction.
6.Link e-business applications with business intelligence and collaborative processing applications, allowing internal and external users to seamlessly move among different systems.
INTELLIGENT E-SERVICES
The building blocks of new, sophisticated, intelligent data warehousing applications are now intelligent e-services. An e-service is any asset made available via the Internet to drive new revenue streams or create new efficiencies. What makes e-services valuable is not only the immediacy of the service, but also the intelligence behind the service. While traditional data warehousing meant simple business rules, simple queries, and proactive work to take advantage of the Web, e-intelligence is much more sophisticated and enables the Web to work on our behalf. Combining intelligence with e-services promises exciting business opportunities.
10.FPGA IN SPACE
A quiet revolution is taking place. Over the past few years, the density of the average programmable logic device has begun to skyrocket. The maximum number of gates in an FPGA is currently around 500,000 and doubling every 18 months. Meanwhile, the price of these chips is dropping. What all of this means is that the price of an individual NAND or NOR is rapidly approaching zero! And the designers of embedded systems are taking note. Some system designers are buying processor cores and incorporating them into system-on-a-chip designs; others are eliminating the processor and software altogether, choosing an alternative hardware-only design. As this trend continues, it becomes more difficult to separate hardware from software. After all, both hardware and software designers are now describing logic in high-level terms, albeit in different languages, and downloading the compiled result to a piece of silicon. Surely no one would claim that language choice alone marks a real distinction between the two fields. Turing's notion of machine-level equivalence and the existence of language-to-language translators have long ago taught us all that that kind of reasoning is foolish. There are even now products that allow designers to create their hardware designs in traditional programming languages like C. So language differences alone are not enough of a distinction. Both hardware and software designs are compiled from a human-readable form into a machine-readable one. And both designs are ultimately loaded into some piece of silicon. Does it matter that one chip is a memory device and the other a piece of programmable logic? If not, how else can we distinguish hardware from software? Regardless of where the line is drawn, there will continue to be engineers like you and me who cross the boundary in our work. So rather than try to nail down a precise boundary between hardware and software design, we must assume that there will be overlap in the two fields. 
And we must all learn about new things. Hardware designers must learn how to write better programs, and software developers must learn how to utilize programmable logic.
Many types of programmable logic are available. The current range of offerings includes everything from small devices capable of implementing only a handful of logic equations to huge FPGAs that can hold an entire processor core (plus peripherals!). In addition to this incredible difference in size, there is also much variation in architecture. In this section, I'll introduce you to the most common types of programmable logic and highlight the most important features of each type.
PLDs
At the low end of the spectrum are the original Programmable Logic Devices (PLDs). These were the first chips that could be used to implement a flexible digital logic design in hardware. In other words, you could remove a couple of the 7400-series TTL parts (ANDs, ORs, and NOTs) from your board and replace them with a single PLD. Other names you might encounter for this class of device are Programmable Logic Array (PLA), Programmable Array Logic (PAL), and Generic Array Logic (GAL).
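To make this concrete, the sort of logic a small PLD absorbs is a handful of sum-of-products equations. The sketch below merely emulates one such equation in Python for illustration - real PLDs are configured with fuse maps or HDL tools, not software:

```python
from itertools import product

def f(a, b, c):
    """One sum-of-products equation, f = (a AND b) OR (NOT c) - the kind
    of function that previously took separate 7400-series AND/OR/NOT parts."""
    return int((a and b) or (not c))

# Print the truth table a designer would program into the device:
for a, b, c in product([0, 1], repeat=3):
    print(a, b, c, f(a, b, c))
```

A single PLD typically holds several such equations at once, replacing a cluster of discrete TTL packages on the board.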
11.FEMTOCELLS TECHNOLOGY
Femtocells, a technology little known outside the wireless world, promise better indoor cellular service. In telecommunications, a femtocell is a small cellular base station, typically designed for use in a home or small business. It connects to the service provider's network via broadband. Current designs typically support 2 to 4 active mobile phones in a residential setting, and 8 to 16 active mobile phones in enterprise settings. A femtocell allows service providers to extend service coverage indoors, especially where access would otherwise be limited or unavailable. For a mobile operator, the attractions of a femtocell are improvements to both coverage and capacity, especially indoors; this can reduce both capital expenditure and operating expense. A femtocell is typically the size of a residential gateway or smaller, and connects into the end-user's broadband line. Once plugged in, the femtocell connects to the mobile network operator's (MNO's) network, and provides extra coverage in a range of typically 30 to 50 meters for residential femtocells. The end-user must declare which mobile phone numbers are allowed to connect to his or her femtocell, usually via a web interface provided by the MNO. When these mobile phones arrive under coverage of the femtocell, they switch over from the macrocell (outdoor) to the femtocell automatically. Most MNOs provide means for the end-user to know this has happened, for example by having a different network name appear on the mobile phone. All communications then automatically go through the femtocell. When the end-user leaves femtocell coverage (whether in a call or not), the phone hands over seamlessly to the macro network.
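The whitelist-and-handover behaviour just described amounts to a simple decision rule. The Python sketch below is a toy model (the function name, numbers, and labels are invented for illustration; no MNO exposes such an API):

```python
def serving_cell(phone_number, in_femto_range, whitelist):
    """Toy access-control decision: a phone attaches to the femtocell only
    if its number is on the owner's declared whitelist AND it is inside
    femtocell coverage; otherwise it stays on the outdoor macrocell."""
    if in_femto_range and phone_number in whitelist:
        return "femtocell"
    return "macrocell"

allowed = {"+15550100", "+15550101"}  # numbers declared via the MNO's web interface
print(serving_cell("+15550100", True, allowed))   # -> femtocell
print(serving_cell("+15550100", False, allowed))  # -> macrocell (left coverage)
print(serving_cell("+15550199", True, allowed))   # -> macrocell (not whitelisted)
```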
12.WIMAX
WiMAX, meaning Worldwide Interoperability for Microwave Access, is a telecommunications technology that provides wireless transmission of data using a variety of transmission modes, from point-to-point links to portable internet access. The technology provides up to 75 Mbit/s symmetric broadband speed without the need for cables. The technology is based on the IEEE 802.16 standard (also called Broadband Wireless Access). The name WiMAX was created by the WiMAX Forum, which was formed in June 2001 to promote conformity and interoperability of the standard. The forum describes WiMAX as a standards-based technology enabling the delivery of last-mile wireless broadband access as an alternative to cable and DSL. The terms fixed WiMAX, mobile WiMAX, 802.16d, and 802.16e are frequently used incorrectly. The correct definitions are the following: 802.16-2004 is often called 802.16d, since that was the working party that developed the standard. It is also frequently referred to as fixed WiMAX, since it has no support for mobility. 802.16e-2005 is an amendment to 802.16-2004 and is often referred to in shortened form as 802.16e. It introduced support for mobility, amongst other things, and is therefore also known as mobile WiMAX.
The Blu-ray Disc technology can store sound and video while maintaining high quality and also access the stored content in an easy-to-use way. Adoption of the Blu-ray Disc in a variety of applications including PC data storage and high definition video software is being considered.
15.FERROELECTRIC MEMORY
Ferroelectric memory is a new type of semiconductor memory that exhibits short programming times, low power consumption, and nonvolatile storage, making it highly suitable for applications like contactless smart cards and digital cameras, which demand many memory write operations.
A ferroelectric memory technology consists of a complementary metal-oxide-semiconductor (CMOS) technology with added layers on top for ferroelectric capacitors. A ferroelectric memory cell has at least one ferroelectric capacitor to store the binary data, and one transistor that provides access to the capacitor or amplifies its content for a read operation. Once a cell is accessed for a read operation, its data are presented in the form of an analog signal to a sense amplifier, where they are compared against a reference voltage to determine their logic level.
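The read operation just described can be caricatured in a few lines of Python (the voltages and the reference threshold are illustrative values, not figures from any real part):

```python
def read_cell(bitline_mV, reference_mV=150.0):
    """Sense-amplifier decision: the accessed cell presents an analog
    signal on the bitline, and comparing it against a reference voltage
    resolves the stored logic level."""
    return 1 if bitline_mV > reference_mV else 0

print(read_cell(230.0))  # switched polarization -> large signal -> 1
print(read_cell(80.0))   # unswitched polarization -> small signal -> 0
```

Note that in a real ferroelectric memory this read is destructive - sensing disturbs the capacitor's polarization - so the cell contents must be written back afterwards.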
Ferroelectric memories have borrowed many circuit techniques (such as the folded-bitline architecture) from DRAMs, owing to the similarities of their cells and to DRAM's maturity. Some architectures are reviewed here.
19. HELIODISPLAY
The Heliodisplay is a free-space display developed by IO2 Technology. A projector is focused onto a layer of mist in mid-air, resulting in a two-dimensional display that appears to float. This is similar in principle to the cinematic technique of rear projection. As dark areas of the image may appear invisible, the image may be more realistic than on a projection screen, although it is still not volumetric. Looking directly at the display, one would also be looking into the projector's light source. The necessity of an oblique viewing angle (to avoid looking into the projector's light source) may be a disadvantage. The Heliodisplay can work as a free-space touchscreen when connected to a PC by a USB cable. A PC sees the Heliodisplay as a pointing device, like a mouse. With the supplied software installed, one can use a finger, pen, or another object for cursor control and navigate or interact with simple content. The mist is formed by a series of metal plates, and the original Heliodisplay could run for several hours on one litre of tap water. 2008-model Heliodisplays use 80 ml to 120 ml of water per hour, depending on screen size and user settings, and can be built with any size of water tank. The Heliodisplay was invented by Chad Dyner, who built it as a five-inch prototype in his apartment before patenting the free-space display technology and founding IO2 Technology LLC to further develop the product.
21. MICROPHOTONICS
This paper presents an overview of our study of Microphotonics, a subject related to optical science that also serves as one of the technologies used in wireless systems. Microphotonics has also proven to be one of the successors to the field of electronics in this 21st century, the third millennium. Photonics has revolutionized communications and holds similar potential for computing, sensing, and imaging applications. The Photon Belt has a reasonable probability of occurrence, judging by the ongoing innovations in Microphotonics. In this paper we examine the introduction to our topic, its vision, and its evolution. Along with the evolution, we also see the great contributions of scientists to this budding field. The paper also covers applications of the field in the technical as well as the medical world. Different devices using Microphotonics technology are also included in the paper. Finally, the future of Microphotonics concludes the topic.