Thursday, 21 April 2011

Super-Small Transistor


A University of Pittsburgh-led team has created a single-electron transistor that provides a building block for new, more powerful computer memories, advanced electronic materials, and the basic components of quantum computers. The researchers report in Nature Nanotechnology that the transistor's central component -- an island only 1.5 nanometers in diameter -- operates with the addition of only one or two electrons. That capability would make the transistor important to a range of computational applications, from ultradense memories to quantum processors, powerful devices that promise to solve problems so complex that all of the world's computers working together for billions of years could not crack them.
In addition, the tiny central island could be used as an artificial atom for developing new classes of artificial electronic materials, such as exotic superconductors with properties not found in natural materials. Using the sharp conducting probe of an atomic force microscope, the team, led by physicist Jeremy Levy, can create such electronic devices as wires and transistors of nanometer dimensions at the interface of a crystal of strontium titanate and a 1.2 nanometer thick layer of lanthanum aluminate. The electronic devices can then be erased and the interface used anew.
The SketchSET -- which is the first single-electron transistor made entirely of oxide-based materials -- consists of an island formation that can house up to two electrons. The number of electrons on the island -- which can be only zero, one, or two -- results in distinct conductive properties. Wires extending from the transistor carry additional electrons across the island.
One virtue of a single-electron transistor is its extreme sensitivity to an electric charge. Another property of these oxide materials is ferroelectricity, which allows the transistor to act as a solid-state memory. The ferroelectric state can, in the absence of external power, control the number of electrons on the island, which in turn can be used to represent the 1 or 0 state of a memory element. A computer memory based on this property would be able to retain information even when the processor itself is powered down, Levy said. The ferroelectric state also is expected to be sensitive to small pressure changes at nanometer scales, making this device potentially useful as a nanoscale charge and force sensor.
The research in Nature Nanotechnology also was supported in part by grants from the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. Army Research Office, the National Science Foundation, and the Fine Foundation.

Mining Data from Electronic Records


Recruiting thousands of patients to collect health data for genetic clues to disease is expensive and time consuming. But that arduous process could be made faster and cheaper by instead mining the patient data that already exists in electronic medical records, according to new Northwestern Medicine research.
In the study, researchers were able to cull patient information in electronic medical records from routine doctors' visits at five national sites that all used different brands of medical record software. The information allowed researchers to accurately identify patients with five kinds of diseases or health conditions -- type 2 diabetes, dementia, peripheral arterial disease, cataracts and cardiac conduction.
"The hard part of doing genetic studies has been identifying enough people to get meaningful results," said lead investigator Abel Kho, M.D., an assistant professor of medicine at Northwestern University Feinberg School of Medicine and a physician at Northwestern Memorial Hospital. "Now we've shown you can do it using data that's already been collected in electronic medical records and can rapidly generate large groups of patients."
To identify the diseases, Kho and colleagues searched the records using a series of criteria such as medications, diagnoses and laboratory tests. They then tested their results against the gold standard -- review by physicians. The physicians confirmed the results, Kho said. The electronic health records allowed researchers to identify patients' diseases with 73 to 98 percent accuracy.
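To make the idea concrete, here is a minimal, purely illustrative sketch of rule-based phenotyping of the kind described above. It is not the study's actual algorithm; the field names, drug names and HbA1c threshold are assumptions chosen for the example.

```python
# Illustrative sketch of rule-based cohort selection from electronic records,
# combining diagnoses, medications and laboratory tests. Not the study's
# actual algorithm; field names and thresholds are assumed for the example.

def looks_like_type2_diabetes(record):
    """Return True if a patient record satisfies simple phenotype rules."""
    has_dx = "type 2 diabetes" in record.get("diagnoses", [])
    on_med = any(m in record.get("medications", []) for m in ("metformin", "glipizide"))
    high_a1c = record.get("hba1c", 0.0) >= 6.5  # percent; assumed threshold
    # Require at least two independent lines of evidence before inclusion.
    return sum([has_dx, on_med, high_a1c]) >= 2

patients = [
    {"id": 1, "diagnoses": ["type 2 diabetes"], "medications": ["metformin"], "hba1c": 7.1},
    {"id": 2, "diagnoses": ["cataract"], "medications": [], "hba1c": 5.4},
]
print([p["id"] for p in patients if looks_like_type2_diabetes(p)])  # -> [1]
```
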
The researchers also were able to reproduce previous genetic findings from prospective studies using the electronic medical records. The five institutions that participated in the study collected genetic samples for research. Patients agreed to the use of their records for studies. Sequencing individuals' genomes is becoming faster and cheaper. It soon may be possible to include patients' genomes in their medical records. This would create a bountiful resource for genetic research.
The larger the group of patients for genetic studies, the better the ability to detect rarer effects of the genes and the more detailed the genetic sequences that cause a person to develop a disease. The study also showed across-the-board weaknesses in institutions' electronic medical records. The institutions didn't do a good job of capturing race and ethnicity, smoking status and family history, all of which are important areas of study.
The institutions participating in the study are part of a consortium called the Electronic Medical Records and Genomics Network. The research was supported by the National Human Genome Research Institute with additional funding from the National Institute of General Medical Sciences.

3-D Towers


Using well-known patterned media, a team of researchers in France has figured out a way to double the areal density of information by essentially cutting the magnetic media into small pieces and building a "3D tower" out of it.
"Over the past 50 years, with the rise of multimedia devices, the worldwide Internet, and the general growth in demand for greater data storage capacity, the areal density of information in magnetic hard disk drives has exponentially increased by 7 orders of magnitude," says Jerome Moritz, a researcher at SPINTEC, in Grenoble. "This areal density is now about 500Gbit/in2, and the technology presently used involves writing the information on a granular magnetic material. This technology is now reaching some physical limits because the grains are becoming so small that their magnetization becomes unstable and the information written on them is gradually lost." Therefore, new approaches are needed for magnetic data storage densities exceeding 1Tbit/in2.
The new approach involves using bit-patterned media, which are made of arrays of physically separated magnetic nanodots, with each nanodot carrying one bit of information. To further extend the storage density, it is possible to increase the number of bits per dot by stacking several magnetic layers to obtain a multilevel magnetic recording device.
The best way to achieve a 2-bit-per-dot media involves stacking in-plane and perpendicular-to-plane magnetic media atop each dot. The perpendicularly magnetized layer can be read right above the dot, whereas the in-plane magnetized layer can be read between dots. This enables doubling of the areal density for a given dot size by taking better advantage of the whole patterned media area.

Ultra-Fast Magnetic Reversal


A newly discovered magnetic phenomenon could accelerate data storage by several orders of magnitude. With a constantly growing flood of information, we are being inundated with increasing quantities of data, which we in turn want to process faster than ever. Oddly, the physical limit to the recording speed of magnetic storage media has remained largely unresearched. In experiments performed on the particle accelerator BESSY II of Helmholtz-Zentrum Berlin, Dutch researchers have now achieved ultrafast magnetic reversal and discovered a surprising phenomenon.
In magnetic memory, data is encoded by reversing the magnetization of tiny points. Such memory works using the so-called magnetic moments of atoms, which can be in either "parallel" or "antiparallel" alignment in the storage medium to represent "0" and "1." The alignment is determined by a quantum mechanical effect called "exchange interaction." This is the strongest and therefore the fastest "force" in magnetism. It takes less than a hundred femtoseconds to restore magnetic order if it has been disturbed. One femtosecond is a millionth of a billionth of a second. Ilie Radu and his colleagues have now studied the hitherto unknown behaviour of magnetic alignment before the exchange interaction kicks in. Together with researchers from Berlin and York, they have published their results in Nature.
For their experiment, the researchers needed an ultra-short laser pulse to heat the material and thus induce magnetic reversal. They also needed an equally short X-ray pulse to observe how the magnetization changed. This unique combination of a femtosecond laser and circular polarized, femtosecond X-ray light is available in one place in the world: at the synchrotron radiation source BESSY II in Berlin, Germany.
In their experiment, the scientists studied an alloy of gadolinium, iron and cobalt (GdFeCo), in which the magnetic moments naturally align antiparallel. They fired a laser pulse lasting 60 femtoseconds at the GdFeCo and observed the reversal using the circular-polarized X-ray light, which also allowed them to distinguish the individual elements. What they observed came as a complete surprise: The Fe atoms already reversed their magnetization after 300 femtoseconds while the Gd atoms required five times as long to do so. That means the atoms were all briefly in parallel alignment, making the material strongly magnetized. This is as strange as finding the north pole of a magnet reversing slower than the south pole.
With their observation, the researchers have not only proven that magnetic reversal can take place in femtosecond timeframes, they have also derived a concrete technical application from it: Translated to magnetic data storage, this would signify a read/write rate in the terahertz range. That would be around 1000 times faster than present-day commercial computers.
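As a rough, back-of-the-envelope illustration of that claim (treating the 300-femtosecond reversal as one switching event is an assumption for the estimate, not a calculation from the paper):

```python
# If one bit can be reversed in roughly 300 femtoseconds, the corresponding
# switching rate is 1 / (300 fs) -- an illustrative estimate only.
reversal_time_s = 300e-15           # 300 femtoseconds
rate_hz = 1.0 / reversal_time_s
print(f"{rate_hz / 1e12:.1f} THz")  # ~3.3 THz, versus a few GHz for today's drives
```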

3-D Gesture


Touch screens such as those found on the iPhone or iPad are the latest form of technology allowing interaction with smart phones, computers and other devices. However, scientists at Fraunhofer FIT have developed a next-generation non-contact gesture and finger recognition system. The novel system detects hand and finger positions in real time and translates these into appropriate interaction commands. Furthermore, the system does not require special gloves or markers and is capable of supporting multiple users.
With touch screens becoming increasingly popular, classic interaction techniques such as the mouse and keyboard are being used less frequently. One example of a breakthrough is the Apple iPhone, which was released in summer 2007. Since then many other devices featuring touch screens and similar characteristics have been successfully launched, with more advanced devices, such as the Microsoft Surface table, even supporting multiple users simultaneously: its entire surface can be used for input. However, this form of interaction is specifically designed for two-dimensional surfaces.
Fraunhofer FIT has developed the next generation of multi-touch environment, one that requires no physical contact and is entirely gesture-based. This system detects multiple fingers and hands at the same time and allows the user to interact with objects on a display. The users move their hands and fingers in the air and the system automatically recognizes and interprets the gestures accordingly. Cinemagoers will remember the science-fiction thriller Minority Report from 2002, which starred Tom Cruise. In this film Tom Cruise is in a 3-D software arena and is able to interact with numerous programs at unimaginable speed; however, the system used special gloves and only three fingers from each hand. The FIT prototype provides the next generation of gesture-based interaction, far in advance of the Minority Report system. The FIT prototype tracks the user's hand in front of a 3-D camera. The 3-D camera uses the time-of-flight principle: for each pixel, the length of time it takes light to travel to and from the tracked object is measured. This allows for the calculation of the distance between the camera and the tracked object.
A special image analysis algorithm was developed which filters out the positions of the hands and fingers. This is achieved in real time through the use of intelligent filtering of the incoming data. The raw data can be viewed as a kind of 3-D mountain landscape, with the peak regions representing the hands or fingers. In addition, plausibility criteria are used, based on the size of a hand, finger length and the possible coordinates. A user study found the system both easy to use and fun. However, work remains to be done on removing elements which confuse the system, for example reflections caused by wristwatches and palms positioned orthogonal to the camera.
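The two ideas above, time-of-flight ranging and treating fingertips as peaks in the depth data, can be sketched in a few lines. This is only an illustration under simple assumptions, not Fraunhofer FIT's actual algorithm.

```python
# Illustrative only: time-of-flight converts a round-trip light time into a
# distance, and fingertips can then be found as local maxima ("peaks") in the
# resulting height map. Thresholds and window size are assumptions.
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time_s):
    """Distance = (speed of light * round-trip time) / 2."""
    return C * round_trip_time_s / 2.0

def find_peaks(height_map, min_height):
    """Return pixel coordinates that are at least as high as all 8 neighbours."""
    peaks = []
    h, w = height_map.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = height_map[y - 1:y + 2, x - 1:x + 2]
            if height_map[y, x] >= min_height and height_map[y, x] == window.max():
                peaks.append((y, x))
    return peaks

print(round(tof_distance(4e-9), 2))  # a 4 ns round trip corresponds to ~0.6 m
```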

Artificial Intelligence

Artificial Intelligence (AI) is the area of computer science focusing on creating machines that can engage in behaviors that humans consider intelligent. The ability to create intelligent machines has intrigued humans since ancient times, and today, with the advent of the computer and 50 years of research into AI programming techniques, the dream of smart machines is becoming a reality. Researchers are creating systems which can mimic human thought, understand speech, beat the best human chess player, and perform countless other feats never before possible. Find out how the military is applying AI logic to its hi-tech systems, and how in the near future Artificial Intelligence may impact our lives.
To understand what exactly artificial intelligence is, we illustrate some common problems. Problems dealt with in artificial intelligence generally use a common term called 'state'. A state represents the status of the solution at a given step of the problem-solving procedure. The solution of a problem, thus, is a collection of the problem states. The problem-solving procedure applies an operator to a state to get the next state. Then it applies another operator to the resulting state to derive a new state. The process of applying an operator to a state and its subsequent transition to the next state is continued until the goal (desired) state is derived. Such a method of solving a problem is generally referred to as the state space approach. We will first discuss the state-space approach for problem solving using a well-known problem, which most of us perhaps have solved in our childhood.
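As a concrete illustration of the state-space formulation (states, operators and a goal test), here is a minimal sketch using the classic water-jug puzzle mentioned below. The jug capacities and goal are assumptions chosen for the example, and the search sketches later in this section reuse these helpers.

```python
# State-space formulation of the two-jug puzzle: a state is a tuple
# (litres in the 4-litre jug, litres in the 3-litre jug); operators map a
# state to its successors; the goal test checks for 2 litres in the big jug.

START = (0, 0)

def operators(state):
    """Yield every state reachable from `state` by one legal move."""
    a, b = state                   # a: 4-litre jug, b: 3-litre jug
    yield (4, b); yield (a, 3)     # fill either jug
    yield (0, b); yield (a, 0)     # empty either jug
    pour = min(a, 3 - b); yield (a - pour, b + pour)  # pour big jug into small jug
    pour = min(b, 4 - a); yield (a + pour, b - pour)  # pour small jug into big jug

def is_goal(state):
    return state[0] == 2

print(sorted(set(operators(START))))  # states reachable from the empty jugs
```
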
Researchers in artificial intelligence have segregated the AI problems from the non-AI problems. Generally, problems for which straightforward mathematical / logical algorithms are not readily available and which can be solved by an intuitive approach only are called AI problems. The 4-puzzle problem, for instance, is an ideal AI problem. There is no formal algorithm for its realization, i.e., given a starting and a goal state, one cannot say prior to execution of the tasks the sequence of steps required to get the goal from the starting state. Such problems are called ideal AI problems. The well-known water-jug problem, the Travelling Salesperson Problem (TSP), and the n-Queen problem are typical examples of classical AI problems. Among the non-classical AI problems, the diagnosis problems and the pattern classification problem need special mention. For solving an AI problem, one may employ both AI and non-AI algorithms. An obvious question is: what is an AI algorithm? Formally speaking, an artificial intelligence algorithm generally means a non-conventional, intuitive approach for problem solving. The key to the artificial intelligence approach is intelligent search and matching. In an intelligent search problem / sub-problem, given a goal (or starting) state, one has to reach that state from one or more known starting (or goal) states.
The question that then naturally arises is: how to control the generation of states. This, in fact, can be achieved by suitably designing some control strategies, which would filter only a few states from the large number of legal states that could be generated from a given starting / intermediate state. As an example, consider the problem of proving a trigonometric identity that children are used to doing during their schooldays. What would they do at the beginning? They would start with one side of the identity, and attempt to apply a number of formulae there to find the possible resulting derivations. But they won't really apply all the formulae there. Rather, they identify the candidate formula that fits best, such that the other side of the identity seems to be closer in some sense (outlook). Ultimately, when the decision regarding the selection of the formula is over, they apply it to one side (say the L.H.S.) of the identity and derive the new state. Thus they continue the process and go on generating new intermediate states until the R.H.S. (goal) is reached. But do they always select the right candidate formula at a given state? From our experience, we know the answer is "not always". But what would we do if we find that, after generation of a few states, the resulting expression seems to be far away from the R.H.S. of the identity? Perhaps we would prefer to move to some old state, which is more promising, i.e., closer to the R.H.S. of the identity. The above line of thinking has been realized in many intelligent search problems of AI. Some of these well-known search algorithms are:

    * Generate and Test
    * Hill Climbing
    * Heuristic Search
    * Means and Ends analysis

(a) Generate and Test Approach: This approach concerns the generation of the state-space from a known starting state (root) of the problem and continues expanding the reasoning space until the goal node or the terminal state is reached. In fact, after generation of each and every state, the generated node is compared with the known goal state. When the goal is found, the algorithm terminates. In case there exist multiple paths leading to the goal, then the path having the smallest distance from the root is preferred. The basic strategy used in this search is only generation of states and their testing for goals, but it does not allow filtering of states.
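A minimal sketch of this strategy, reusing the water-jug `operators` and `is_goal` helpers from the state-space example above; breadth-first expansion is assumed here so that the first goal found also has the smallest distance from the root.

```python
# Generate-and-test: generate successor states without filtering, test each
# generated state against the goal, and return the path closest to the root.
from collections import deque

def generate_and_test(start, operators, is_goal):
    frontier = deque([[start]])        # each entry is a path from the root
    seen = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if is_goal(state):             # test every generated state
            return path
        for nxt in operators(state):   # generate successors, no filtering
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

print(generate_and_test((0, 0), operators, is_goal))
```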

(b) Hill Climbing Approach: Under this approach, one has to first generate a starting state and measure the total cost for reaching the goal from the given starting state. Let this cost be f. While f is no greater than a predefined utility value and the goal is not reached, new nodes are generated as children of the current node. However, in case all the neighborhood nodes (states) yield an identical value of f and the goal is not included in the set of these nodes, the search algorithm is trapped at a hillock or local extremum. One way to overcome this problem is to select randomly a new starting state and then continue the above search process. While proving trigonometric identities, we often use Hill Climbing, perhaps unknowingly.
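A rough sketch of hill climbing on the same water-jug helpers, with an assumed cost function f (distance of the big jug's contents from 2 litres) and random restarts. On this toy problem the search frequently stalls among neighbours of equal cost, which is exactly the local-extremum trap described above.

```python
# Hill climbing: always move to the lowest-cost neighbour; stop when no
# neighbour improves (a possible local extremum) and restart at random.
import random

def cost(state):
    return abs(state[0] - 2)                 # assumed estimate of distance to goal

def hill_climb(start, operators, cost, restarts=5):
    for _ in range(restarts):
        current = start
        while True:
            neighbours = list(operators(current))
            best = min(neighbours, key=cost)
            if cost(best) >= cost(current):  # no improvement: stuck
                break
            current = best
        if cost(current) == 0:
            return current
        start = random.choice(neighbours)    # random restart from a new state
    return None
```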

(c) Heuristic Search: Classically, heuristics means rules of thumb. In heuristic search, we generally use one or more heuristic functions to determine the better candidate states among a set of legal states that could be generated from a known state. The heuristic function, in other words, measures the fitness of the candidate states. The better the selection of the states, the fewer will be the number of intermediate states needed to reach the goal. However, the most difficult task in heuristic search problems is the selection of the heuristic functions. One has to select them intuitively, so that in most cases they are able to prune the search space correctly. We will discuss many of these issues in a separate chapter on Intelligent Search.
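A best-first search sketch illustrates the idea: among all states generated so far, always expand the one the heuristic rates as most promising. It again reuses the water-jug helpers above, and the heuristic is a hand-picked assumption; as noted, choosing good heuristics is the hard part.

```python
# Best-first (heuristic) search: expand the most promising state first,
# as judged by an assumed heuristic function.
import heapq

def heuristic(state):
    return abs(state[0] - 2)                  # guessed "distance" to the goal

def best_first(start, operators, is_goal, heuristic):
    frontier = [(heuristic(start), start, [start])]
    seen = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path
        for nxt in operators(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt), nxt, path + [nxt]))
    return None

print(best_first((0, 0), operators, is_goal, heuristic))
```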

(d) Means and Ends Analysis: This method of search attempts to reduce the gap between the current state and the goal state. One simple way to explore this method is to measure the distance between the current state and the goal, and then apply an operator to the current state, so that the distance between the resulting state and the goal is reduced. In many mathematical theorem-proving processes, we use Means and Ends Analysis.
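A self-contained sketch of the gap-reduction idea, using an assumed toy puzzle (reach 10 from 1 using +1, -1 or doubling) rather than any particular textbook example; classic means-ends analysis would also recurse on subgoals, which is omitted here.

```python
# Means-ends analysis (simplified): measure the gap between the current state
# and the goal, and apply whichever operator most reduces that gap.
GOAL = 10
OPERATORS = [lambda x: x + 1, lambda x: x - 1, lambda x: x * 2]

def gap(state):
    return abs(state - GOAL)

def means_ends(start, max_steps=50):
    current, trace = start, [start]
    for _ in range(max_steps):
        if gap(current) == 0:
            return trace
        current = min((op(current) for op in OPERATORS), key=gap)
        trace.append(current)
    return None

print(means_ends(1))  # -> [1, 2, 4, 8, 9, 10]
```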

iPoint 3D



The "iPoint 3D" allows people to communicate with a 3-D display through simple gestures – without touching it and without 3-D glasses or a data glove. What until now has only been seen in science fiction films will be presented at CeBIT from March 3-8 by experts from the Fraunhofer Institute for Telecommunications, Heinrich-Hertz-Institut, (HHI).The heart of iPoint 3D is a recognition device, not much larger than a keyboard, that can be suspended from the ceiling above the user or integrated in a coffee table. Its two built-in cameras detect hands and fingers in real time and transmit the information to a computer.
The system responds instantly, as soon as someone in front of the screen moves their hands. No physical contact or special markers are involved. The small device is equipped with two FireWire cameras – inexpensive, off-the-shelf video cameras that are easy to install.
In addition to its obvious appeal to video gamers, iPoint 3D can also be useful in a living room or office, or even in a hospital operating room, or as part of an interactive information system. Since the interaction is entirely contactless, the system is ideal for scenarios where contact between the user and the system is not possible or not allowed, such as in an operating room.
The HHI invention can thus be used not only to control a display but also as a means of controlling other devices or appliances. Someone kneading pastry in the kitchen, whose hands are covered in dough, can turn down the heat under the boiling potatoes by waving a finger without leaving sticky marks on the stove. In an office, for example, an architect can peruse the latest set of construction drawings and view them from all angles by gesture control. The finger is the remote control of the future.

Touchless iPoint


Master chef Johann Lafer is a virtuoso in the kitchen -- and with modern technology too. At his cookery school the TV celebrity adopts a high-tech approach to make things easier in the kitchen with the touchless iPoint-Presenter. The dining area boasts a special technological highlight: a 70-inch Full-HD display which can be operated just by pointing a finger. When Johann Lafer wants to present the menu sequence to his pupils, call up a short film, play music, change the lighting mood or show pictures of meals, a brief movement of the finger is enough to start the selected program.
This is possible thanks to technology from the Fraunhofer Institute for Telecommunications, Heinrich-Hertz-Institut, HHI. The researchers in Berlin have developed a computer control system which is operated by gestures. The iPoint Presenter consists of two digital cameras which register the movement of the finger and transfer this to the computer. The software calculates the 3D coordinates of the finger from the video data and recognizes simple hand gestures in real time.
The recognition device, about the size of a keyboard, is housed in a drawer on the front of the large dining table. When the drawer is opened, the gesture recognition system automatically switches on. The iPoint Presenter tracks the finger and the cursor moves on the display as if worked by an invisible hand. To open a program you just keep your finger pointing at the relevant button.
At this year's CeBIT (in Hall 9, B36) visitors will be able to try out the gesture control system for themselves. It can also be used to operate lights and domestic appliances and therefore fits in nicely with the trade show's keynote "Connected Worlds" theme. The researchers are now working on new applications. As the interaction takes place without anything having to be touched, the system is ideal for scenarios in which contact between the user and the computer needs to be avoided, such as in operating theaters.
In collaboration with medical technology company Storz the engineers are developing an innovative operating theater control system.

Touchpad

A touch pad is a device for pointing (controlling input positioning) on a computer display screen. It is an alternative to the mouse. Originally incorporated in laptop computers, touch pads are also being made for use with desktop computers. A touch pad works by sensing the user's finger movement and downward pressure.

The first touch pad was invented by George E. Gerpheide in 1988. Apple Computer was the first to license and use the touch pad in its Powerbook laptops in 1994. The touch pad has since become the leading cursor controlling device in laptops. Some laptops instead use a trackball, and IBM ThinkPad laptops use a "pointing stick" (called a TrackPoint) that is set into the keyboard.

The touch pad contains several layers of material. The top layer is the pad that you touch. Beneath it are layers (separated by very thin insulation) containing horizontal and vertical rows of electrodes that form a grid. Beneath these layers is a circuit board to which the electrode layers are connected. The layers with electrodes are charged with a constant alternating current (AC). As the finger approaches the electrode grid, the current is interrupted and the interruption is detected by the circuit board. The initial location where the finger touches the pad is registered so that subsequent finger movement will be related to that initial point. Some touch pads contain two special places where applied pressure corresponds to clicking a left or right mouse button. Other touch pads sense single or double taps of the finger at any point on the touch pad.
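The "register the first contact, then move relative to it" behaviour can be sketched as follows; this is an illustration under assumed values, not any driver's actual code.

```python
# Relative pointing: remember where the finger first lands, then translate
# each new reading into a cursor delta scaled by an assumed sensitivity.
class TouchpadPointer:
    def __init__(self, sensitivity=2.0):
        self.sensitivity = sensitivity
        self.last = None                      # last (x, y) position on the pad

    def finger_up(self):
        self.last = None                      # next touch registers a new origin

    def finger_at(self, x, y):
        """Return the (dx, dy) cursor movement for a new pad reading."""
        if self.last is None:                 # first contact: register, don't move
            self.last = (x, y)
            return (0.0, 0.0)
        dx = (x - self.last[0]) * self.sensitivity
        dy = (y - self.last[1]) * self.sensitivity
        self.last = (x, y)
        return (dx, dy)

pad = TouchpadPointer()
print(pad.finger_at(10, 10))  # (0.0, 0.0) -- contact registered
print(pad.finger_at(12, 11))  # (4.0, 2.0) -- movement relative to the last point
```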

Some of the features available for this touchpad are:

# Media Controller (Front Row, VLC, Boxee, etc.).
# Modifier keys (Ctrl, Option/Alt, Cmd, Shift).
# Tab, Esc keys included.
# 1, 2, 3 and 4 finger multitouch gestures supported.
# Pinch to zoom your computer screen
# Vertical, horizontal scrolling.
# Swipe left or right with 3 fingers (Back and Forward).
# Landscape, portrait orientation supported.
# Trackpad still works while keyboard is visible.
# Works with Wake-On-Demand (Snow Leopard).

To avoid running into such situations, it may not be a bad idea to disable the touchpad, at least while you are using a mouse or are typing a long document.

Some laptops don't have dedicated buttons but you can use function keys (like Fn + F5 on Dell computers) to toggle the state of your touch pad. In the case of HP laptops, you can hold the top-left corner of the touchpad for a few seconds and it will disable the touch pad; repeat this to re-activate it.

New laptop computers either have a physical on/off button to easily disable the touch pad or there's an icon in the system tray that lets you manage the various settings of the touchpad. If you don't have that icon, you can go to Control Panel –> Mouse Properties –> Touch Pad to enable or disable the touchpad.

The touch pad can also be disabled through the device manager. Type devmgmt.msc in the Windows Run box to start the device manager, expand "Mice and other Pointing devices", right-click and disable the driver entry that says Touch pad or similar.

If none of the above solutions work for your brand of laptop, try TouchPad Pal – it's a free Windows utility that will temporarily disable the touchpad of your laptop as you go into typing mode. The utility runs in the system tray and requires no configuration.

Finally, if you would like to get rid of the touchpad completely, you can consider disabling it through the BIOS itself. The exact path that you need to follow to reach the Pointing Devices section in your BIOS may however vary for different laptops.

Robot Therapy


Therapy in which robots manipulate paralyzed arms, combined with standard rehabilitation, can improve arm and shoulder mobility in patients after stroke, according to research presented at the American Stroke Association's International Stroke Conference 2011. Patients on robotic therapy showed marked improvement in two measures of upper extremity function: the Fugl-Meyer flexor synergy score, a 0 to 12 scale with higher numbers reflecting recovery of voluntary arm movement; and the Fugl-Meyer shoulder/elbow/forearm score, a 0 to 36 scale with higher numbers reflecting recovery of motor function in the shoulder, elbow and forearm. Combining robotic exercise with regular rehabilitation may be the key to successful intervention. Robots could allow therapists to focus on helping patients master daily activities while maintaining repetitive training.
The new study involved 60 stroke survivors with hemiplegia (paralysis on one side of the body) treated at six rehabilitation centers in Japan. The patients, average age 65, had suffered a stroke in the previous four to eight weeks. All received standard rehabilitation therapy from an occupational therapist. Half the group received robotic therapy every day for six weeks, in sessions lasting 40 minutes. The other half spent the same amount of time working through a standard self-training program for hemiplegic patients, performing stretches and passive-to-active exercises of their affected arm.
With a recent trend in helping patients function with one arm, many post-stroke patients have given up hope of recovery of their affected arms. Participating in such robotic exercise is therefore expected to give patients insights about their future ability and a more positive image regarding their affected arm, increasing their self-efficacy and motivation toward rehabilitation.
The group assigned to robotic therapy used a Reo Therapy System by Motorika Ltd. in Israel. For the therapy, the patient's forearm, either resting on or strapped to a platform, is moved in multiple directions based on pre-programmed exercise movements. Researchers selected five such pre-programmed movements. For instance, in one of the movements, "forward reach," the robot helps patients extend their arms forward as if reaching for something in front of them. Therapists also selected from five levels of robotic assistance according to what was most appropriate for the patient, from movement entirely guided by the robot and passive on the patient's part, to movement actively performed by the patient.
The successful test of robots adds a new wrinkle to stroke rehabilitation strategies.While repetitive movement is an essential therapy, physical and occupational therapists aren't always available to provide care, and self-training, if not done correctly, can result in pain and disability.
Robots, on the other hand, can carry out the repetitive movement exercise with exactly the right movement pattern to prevent misuse. Based on initial mobility scores, patients with severe hemiplegia were more likely to benefit from the robotic therapy. The finding is consistent with the notion that higher-functioning patients already can correctly carry out self-training programs, while patients with lower function -- only reflex and minor voluntary movement -- are more likely to benefit from the support and aid of robots. Further research using larger groups of patients is necessary to investigate the efficacy of such robotic exercise in more detail.

New Technique for Robot Navigation Systems


Researchers from the European Centre for Soft Computing and the UPM's Facultad de Informática have developed an antonym-based technique for building maps for mobile robots. This technique can be applied to improve current robot navigation systems. Another advantage of the technique is that the low-cost ultrasonic sensors that it uses are built into almost all robotic platforms and produce a smaller volume of data for processing.
An autonomous mobile robot is a robot that is able to navigate its environment without colliding or getting lost. Unmanned robots are also able to recover from spatial disorientation. Conducted by Sergio Guadarrama, researcher of the European Centre for Soft Computing, and Antonio Ruiz, assistant professor at the Universidad Politécnica de Madrid's Facultad de Informática, and published in the Information Sciences journal, the research focuses on map building. Map building is one of the skills related to autonomous navigation, where a robot is required to explore an unknown environment (enclosure, plant, buildings, etc.) and draw up a map of the environment. Before it can do this, the robot has to use its sensors to perceive obstacles.
The main sensor types used for autonomous navigation are vision and range sensors. Although vision sensors can capture much more information from the environment, this research used range, specifically ultrasonic, sensors, which are less accurate, to demonstrate that the model builds accurate maps from few and imprecise input data.
Once it has captured the ranges, the robot has to map these distances to obstacles on the map. Point clouds are used to draw the map, as the imprecision of the range data rules out the use of straight lines or even isolated points. Even so, the resulting map is by no means an architectural blueprint of the site, because not even the robot's location is precisely known, and there is no guarantee that each point cloud is correctly positioned. In actual fact, one and the same obstacle can be viewed properly from one robot position, but not from another. This can produce contradictory information (obstacle and no obstacle) about the same area of the map under construction.
We can infer that an occupied space is not vacant, but we cannot infer that an unoccupied space is empty. This space could be unknown or ambiguous, because the robot has limited information about its environment. Contradictions between "vacant" and "occupied" are also explicitly represented. This way, the robot is able to make a distinction between two types of unknown spaces: spaces that are unknown because information is contradictory and spaces that are unknown because they are unexplored. This would lead the robot to navigate with caution through the contradictory spaces and explore the unexplored spaces. The map is constructed using linguistic rules, such as "If the measured distance is short, then assign a high confidence level to the measurement" or "If an obstacle has been seen several times, then increase the confidence in its presence," where "short," "high" and "several" are fuzzy sets, in the sense of fuzzy set theory. Contradictions are resolved by relying more on shorter ranges and by combining multiple measurements.
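A rough sketch of how such a linguistic rule might look in code. This is not the authors' model; the membership function for "short" and the update constants are assumptions chosen for illustration.

```python
# Fuzzy-style map update: "short" is a fuzzy membership of the measured range,
# and repeated sightings raise the confidence that a map cell is occupied.
def short(distance_m, full=0.5, zero=3.0):
    """Membership of 'short': 1 up to 0.5 m, falling linearly to 0 at 3 m."""
    if distance_m <= full:
        return 1.0
    if distance_m >= zero:
        return 0.0
    return (zero - distance_m) / (zero - full)

def update_cell(confidence, distance_m, weight=0.3):
    """Raise a cell's occupancy confidence, trusting short ranges more."""
    evidence = short(distance_m)
    return confidence + weight * evidence * (1.0 - confidence)

conf = 0.0
for reading in (0.4, 0.6, 0.7):   # the same obstacle seen three times
    conf = update_cell(conf, reading)
print(round(conf, 2))             # confidence grows with repeated sightings (~0.64)
```
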
Compared with the results of other methods, the outcomes show that the maps built using this technique better capture the shape of walls and open spaces, and contain fewer errors from incorrect sensor data. This opens opportunities for improving the current autonomous navigation systems for robots.

Robot with Facial Expressions

A recent invention by the MIT Media Lab is a new robot that is able to show various facial expressions, such as slanting its eyebrows in anger or raising them in surprise, and to display a wide assortment of facial expressions while communicating with people.

This latest achievement in the field of robotics is named NEXI, as it is framed as a next-generation robot aimed at a range of applications for personal robots and human-robot teamwork.

The head and face of NEXI were designed by Xitome Design, an innovative company that specializes in robotic design and development. The expressive robotics started with a neck mechanism sporting 4 degrees of freedom (DoF) at the base, plus pan-tilt-yaw of the head itself. The mechanism has been constructed to time the movements so they mimic human speed. The face of NEXI has been specially designed to use gaze, eyebrows, eyelids and an articulate mandible, which helps in expressing a wide range of different emotions.

The chassis of NEXI is also advanced. It has been developed by the Laboratory for Perceptual Robotics at UMASS (University of Massachusetts), Amherst. This chassis is based on the uBot5 mobile manipulator. The mobile base can balance dynamically on two wheels. The arms of NEXI can pick up a weight of up to 10 pounds and the plastic covering of the chassis can detect any kind of human touch.

This project was headed by the Media Lab's Cynthia Breazeal, a well-known robotics expert famous for earlier expressive robots such as Kismet. She is an Associate Professor of Media Arts and Sciences at MIT. She calls her new creation an MDS (mobile, dexterous, social) robot.

Draganflyer X6

The Draganflyer X6 is an advanced helicopter that can be operated remotely without any pilot. It is designed mainly to carry wireless video cameras and still cameras. The Draganflyer X6 helicopter can be operated very easily with its handheld controller.

The Draganflyer X6 helicopter is based on a unique 6-rotor design that has been under development since early 2006. It uses 11 sensors and thousands of lines of code to self-stabilize during flight, which makes it easier to fly than any other helicopter in its class. The on-board software of the Draganflyer X6 was developed after extensive testing and development. The Draganflyer X6 helicopter is a revolution in the field of Unmanned Aerial Vehicles (UAVs).

It can be used very efficiently for various applications and is ideal for spying on an enemy in a safe and reliable manner. The new Draganflyer X6 can be used in various fields such as Industrial Construction, Government Applications and Educational needs.

The Draganflyer X6 can be used very efficiently in Bridge Construction, Building Construction, Pipeline / Hydro-Transmission Line Inspection, and Road Construction. With the help of this aircraft you can get videos and images of any site from various angles.

Equipped with a high-resolution still camera (with remote zoom, shutter control and tilt), it can capture great images, and its high-definition video recorder can record video very efficiently. It has a range of 500 meters and a flight time of 20 to 30 minutes.

It is designed with a simple control system for ease of use. It is easy to fly, needs very minimal training, and provides an extremely stable aerial platform from which you can take photographs and video. Its small size and portability make it easy to carry to any construction site and have it ready to fly in minutes.

The Draganflyer X6 can be used in many government applications such as Law Enforcement, Fire, Emergency Measures, Wildlife Management, Environment and Transportation. You can use this advanced machine for Disaster Response, Conservation Enforcement, Crime Scene Investigation, Crowd Control, Explosive Disposal, Search and Rescue Missions, Traffic Congestion Control, Criminal Intelligence Applications, Fire Damage Assessment, Fire Scene Management and many more.

The Draganflyer X6 is also very useful in educational applications such as Advanced RC Flight Research, Aerial Archeology, Environmental Assessment, and Geological Exploration.

New Robot to Help People Walk Again


Cognitive skills for a new robot which will help people with damaged limbs to walk again are being developed by researchers at the University of Hertfordshire. Dr Daniel Polani and a team at the University's School of Computer Science have just received a European grant of €780,800 for the four-year research project Cognitive Control Framework for Robotic Systems (CORBYS) to build the cognitive features of these robots. There are already some robots which help people to walk, but the issue is that they need constant attention and monitoring by therapists and they cannot effectively monitor the human. In CORBYS, the aim is to have robots that understand what humans need so that they can operate autonomously.
Dr Polani and his team will contribute in particular to the high-level cognitive control of these robots and their synergy with human behavior. This is based on biologically-inspired principles and methodologies that have been developed at the School of Computer Science in recent years.
We believe that all organisms optimise information and organise it efficiently in their niche and that this shapes their behaviour -- in a way, it tells them to some extent what to do. We believe it will help our system to take decisions similar to organisms and to better 'read' the intentions of the human it supports. Furthermore, we will use these techniques to balance the lead-taking between robot and human.
Over the four-year period, the project will produce two demonstrators, among them a novel mobile robot-assisted gait rehabilitation system which will be a self-aware system capable of learning to enable it to optimally match the requirements of the user at different stages of rehabilitation.

Wednesday, 20 April 2011

Acquisition of Robotic Technology For Prostate Cancer Surgery


A new study conducted by researchers at NYU Langone Medical Center and Yale School of Medicine shows that when hospitals acquire surgical robotic technology, men in that region are more likely to have prostate cancer surgery. The use of the surgical robot to treat prostate cancer is an instructive example of an expensive medical technology becoming rapidly adopted without clear proof of its benefit. Policymakers must carefully consider what the added value is of costly new medical devices, because, once approved, they will most certainly be used.
This is the first study determining the impact of surgical robot acquisition on the rate of surgery to treat prostate cancer and concludes that it increases surgical volume. Surgical robotic technology for the treatment of prostate cancer has been rapidly adopted across the United States since FDA approval in 2001. By 2009, over 85% of men undergoing prostatectomy had robotic surgery.
This retrospective cohort study surveyed the regional and hospital rates of radical prostatectomy surgeries between 2001 and 2005 and looked at whether they were affected by the acquisition of surgical robotic technology. During this early adoption phase, 36 of 71 regions studied had at least one hospital with a surgical robot and 67 of the 554 hospitals studied had a surgical robot. According to the study, regions and hospitals with robots had higher increases in radical prostatectomy than those without. Additionally, hospitals with surgical robots increased their surgery cases by an average of 29.1 per year, while those without robots experienced a decline of 4.8 radical prostatectomy cases per year.
Patients should be aware that if they seek care at a hospital with a new piece of surgical technology, they may be more likely to have surgery and should inquire about its risks as well as its benefits. Hospital administrators should also consider that new technology may increase surgical volume, but this increase may not be sufficient to compensate for its cost. The study author suggests that adoption of new surgical robotic technology either attracted patients from other hospitals without robots to undergo surgery at their hospital or that hospitals may have offered robotic surgery to prostate cancer patients who would have otherwise opted for alternate management approaches like active surveillance and radiation.
The lessons learned by studying the adoption of the surgical robot for prostate cancer will be important for policymakers to understand as they consider the purchase and implementation of future medical technology, especially in the current policy climate where control of healthcare costs is increasingly important. While the observed effect of technology acquisition was very strong across a number of regions and hospitals, NYU Langone Medical Center did not itself experience an increase in volume of prostate cancer surgeries after acquiring robotic technology in 2003. While NYU Langone performed 276 prostatectomies in 2001, the number of cases actually declined to 223 by 2005, suggesting that the effects of technology acquisition may not be the same at every hospital.

Latest Technology Developments

Technology makes the world go around today, and with each passing year, the latest developments in technology are becoming more and more widespread. These are means to make our lives easier, but many also argue that technology is having a very negative impact on our lifestyles. The repercussions of this are open to interpretation, and it is all a matter of choice.
For some people, technology and the latest gadgets signify something far more important than just buying products to improve their image and self-esteem. Amongst us there are true lovers of technology who just cannot wait to get their hands on the next best gadget that comes rolling out. Needless to say, some of these gadgets cost a small fortune, and for some other people, it is incomprehensible why someone would spend so much money on such purchases.

When we look around us today, we see technology all around. Our lives are being taken over by these modern gizmos and gadgets, but we must take this with a pinch of salt. New developments in technology and consumer electronics are created with the sole purpose of simplifying our lives, and it is entirely in our hands to decide what we do with these products. With that being said, let's look at some new developments in the field of science and technology. You may also be interested to know how technology affects society and how it can help the environment.

Technology means different things to different people, and everyone has their own set of preferences when it comes to such things. There are plenty of areas where one can look for the newest gadgets, and here we try to provide a detailed analysis in all of these fields.

The latest fad amongst people all over the world seems to be smartphones. The cell phones that we use have certainly come a long way since the first wireless models that started hitting the stores in the 1980s. These phones became smaller as time passed by, and now suddenly they seem to be getting bigger again. Smartphones perform a variety of computing tasks in addition to your regular telephony services, and these mini computers are extremely advanced today. All the tasks that you perform on your PC can also be performed on your cell phone, and this is a fact that is not lost on people. Pretty much everyone in the world today feels the need for a smartphone.

Video games are something that appeal to people of all ages, and this is a fact that has been exploited completely by developers, who regularly dish out the latest developments in technology. The graphics that we see in our video games today have certainly come a long way. The crispness and the details that we find are unbelievable, and this has led to the massive demand for the best video game consoles in the market.

The latest developments in computers never cease to amaze us. Ever since Microsoft arrived on the scene, we have seen the market flooded with Windows-based machines, even though Macs and Linux-based machines are also widespread. It seems that every month we see machines and computer hardware getting more and more powerful than before. Whether we are using desktop computers, laptop computers, or netbooks in our homes and offices, they always seem to become obsolete in no time.

People involved in the creation of 3D animation and graphics have so many great tools to play with today, and software developers are having a field day as well. The advent of the Internet has opened up the world of computers to almost every family all around the world, and this is an industry that will never get old now. New software languages are also constantly arising to make the task of website design both more sophisticated and simpler than ever before. These computers and the technology they offer are helping us out in numerous areas like health sciences, medicine, education, business, defense, production, entertainment, space exploration, nanotechnology and plenty of other research-oriented fields where computer uses come in handy. To imagine a world without computers today is nigh impossible.

The supercomputers that we heard so much about in the past are now firmly set in the future, because we can now bring the most advanced machines right into our homes and perform infinite tasks on them. Social networking, blogging, mass media and online shopping are just some of the areas that have seen vast improvements thanks to the latest developments in technology as far as computers are concerned.

Our television sets are not averse to the latest developments in technology either, and they have come a long way since the mundane black-and-white TV sets of old. Today we view our content on HDTV (High Definition Television) sets. These offer tremendously high resolutions, and they have taken TV viewing to whole new levels. You must read about the new 1080p resolution to understand this, and this will also explain to you which is better, 1080i or 1080p.

Memory Mission Explores the Unknown in Neuroscience


Astrophysicists peer into the far corners of deep space for dark matter, but for neuroscientists at the Queensland Brain Institute (QBI), exploring the unknown is much closer to home. They have discovered a mechanism vital to the development of the hippocampus – a region of the brain crucial to the formation of memories, and to the lifelong production and integration of new nerve cells. To say the hippocampus is important is a bit like saying breathing is optimal.
Despite the crucial role performed by the hippocampus throughout life, knowledge of this region's early development remains surprisingly scant. The research team is looking at how the brain forms during embryonic and foetal development. They identified a gene that regulates the development of glial cells in the hippocampus, and their research shows that the hippocampus contains different populations of glial cells that are essential for its structural integrity.
Glial cells are an important part of the building blocks of the brain. They provide an essential scaffold for the migration of neurons in the developing brain. It is vital we understand how glial cells provide this structural scaffold because if the hippocampus is not formed correctly it cannot perform all the functions required of it in the developing and adult brain. The hippocampus plays an integral role in spatial navigation, learning and memory, and is a major site for adult neurogenesis. Mice lacking the gene that regulates glial cell differentiation exhibit major developmental irregularities, including catastrophic structural deformities of the hippocampus.
Equipped with this knowledge, researchers studying the hippocampus now have a better understanding of the genes that help control the development of this vital brain region. Fundamental scientific knowledge of this kind is an essential step in understanding brain function and repair. The term hippocampus is derived from the Greek words "hippos" (horse) and "kampos" (sea monster); the brain region known as the hippocampus has the characteristic shape of a sea-horse's tail.

Mapmaking in the Brain



"Grid cells," which help the brain map locations, have been found for the first time outside of the hippocampus in the rat brain, according to new research from the Norwegian University of Science and Technology (NTNU). The finding should help further our understanding of how the brain generates the internal maps that help us remember where we have been and how to get to where we want to go.
Five years ago, researchers at NTNU's Kavli Institute for Systems Neuroscience were the first to discover the intricacies of how the brain creates internal maps using grid cells in a coordinate system. Grid cells provide geometric coordinates for locations and help the brain generate an internal grid to aid in navigation. Along with place cells, which code for specific locations, head direction cells, which act like a compass, and border cells, which define the borders of an environment, grid cells enable the brain to generate a series of maps of different scales and help with recognition of specific landmarks.
However, place cells had only been found in the hippocampus, and grid and border cells only in the medial entorhinal cortex. But in the August issue of Nature Neuroscience, Kavli researchers report finding many grid cells intermingled with head direction and border cells in the presubiculum and parasubiculum areas of the brain, which are the source of some of the major inputs to the medial entorhinal cortex.
This finding will particularly help scientists who are trying to understand the mechanisms that actually generate grid signals in the brain. The presubiculum and the parasubiculum are not the same as the medial entorhinal cortex but share some properties and connections. It is in this direction that we should look for further explanations.

Nanomedicine

Nanomedicine is one of the most valuable medical applications of nanotechnology. As the name suggests, nanomedicine involves the use of nanoparticles in the surgical and medical treatment of patients. Put another way, nanomedicine is the application of nanotechnology to engineering or building molecular or atomic machines for the treatment of diseases in living organisms.
Nanomedicine, as we know, is an application with diverse dimensions. Many intelligent and efficient instruments are helping doctors cure diseases. It works at the molecular or atomic scale, designing medical apparatus at an extremely small scale to provide speed and high performance with low maintenance. Many devices, such as biosensors, nanoelectronic instruments, pacemakers, monitoring apparatus and advanced ECG machines, are inventions of nanomedicine. The most advanced form of nanomedicine uses nanorobots and nanoinstruments as surgeons. These kinds of machines might repair damaged cells, or get into cells and replace or assist damaged intracellular structures at the individual level.
There is a huge range of nanomedicine devices, involving medical applications of nanomaterials, nanoelectronic biosensors, and even more useful and practical future applications of molecular nanotechnology. In the medical and biological world, nanomedicine has great significance, as this application has served mankind well. There are many instruments, devices and machines which nanomedicine has introduced, and it has significant applications across the medical sciences. It is one of the major technologies that has strongly supported the entire field of medicine. One of the biggest advantages of nanomedicine is that it can transform common medical procedures into faster ones with a 90 percent accuracy rate. Some examples of such procedures are given below:
Diagnostic nanoapparatus could be deployed to monitor the internal chemistry of the body. Mobile nanorobots, with wireless transmitters, could easily circulate in the blood and lymph systems and send out alerts when chemical imbalances appear within the blood.
Nanomedicine has also helped doctors to better understand changes in the human nervous system. Fixed nanomachines could be inserted in the nervous system to monitor pulse rate, brain activity, and other important functions.
Life-saving drugs are among the important ingredients in the latest medicines, but their unusual and excessive usage could cause death. Nanomedicine also has successful applications for the removal of excess drugs from the human body. Implanted nanomedicine devices could dispense drugs or hormones as required in people with chronic imbalance or deficiency states.
Advanced nanomedical heart monitors are appearing in leading hospitals to track the heartbeat accurately, detect failures and treat them as needed. In implanted defibrillators and pacemakers, nanodevices could influence the behavior of individual heart cells.
Artificial red and white blood cells were first conceptualized within nanomedicine, and encouraging results have since been reported. Injecting artificial red blood cells has been proposed as a way to support the blood of cancer patients, and artificial antibodies, artificial white and red blood cells, and antiviral nanorobots are all cited as promising applications of nanomedicine.

Grid Technology


A European consortium has brought the power of grid computing to bear on problems ranging from the genetic origins of heart disease to the management of fish stocks and the reconstruction of ancient musical instruments. A 'grid' is a network of high-powered computing and storage resources available to researchers wishing to carry out advanced number-crunching activities. Resources belong to individual universities, national and international laboratories and other research centres but are shared between them by mutual agreement.
In Europe the data is carried over the GÉANT network, but the organisation that makes this possible is managed by EGEE-III, the third phase of an EU-funded project to create an infrastructure supporting European researchers using grid computing resources. Although grid computing began in the high-energy physics community -- and EGEE will be on hand to process the long-awaited data from the Large Hadron Collider -- many other disciplines are now using EGEE to access the world's most powerful computing facilities. What many of the applications have in common is the simulation of experiments that would take years or decades to do in the laboratory. A common theme is studying how complex molecules interact with each other, with many applications in the search for new vaccines and other drugs. Scientists from the EU's Cardiogenics project were able to find four combinations of genetic markers, out of more than 8.1 million possible combinations, that were strongly associated with heart disease.
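
To illustrate how a search of that size maps onto a grid, here is a hedged Python sketch that splits the combinations of markers into independent work units, each scoring its own chunk; the scoring function and marker values are placeholders for illustration, not the Cardiogenics method.

# Sketch only: splitting a combinatorial marker search into independent
# grid work units. score_combination() is a placeholder, not the
# Cardiogenics statistical model.
from itertools import combinations, islice

def score_combination(marker_scores, combo):
    # Placeholder association score; a real study would test each
    # combination against case/control genotype data.
    return sum(marker_scores[m] for m in combo)

def grid_job(marker_scores, start, size, threshold, k=2):
    """One independent work unit: score `size` k-marker combinations
    starting at offset `start` in the enumeration order."""
    hits = []
    for combo in islice(combinations(sorted(marker_scores), k), start, start + size):
        s = score_combination(marker_scores, combo)
        if s > threshold:
            hits.append((combo, s))
    return hits

# Example with four pretend markers, split into two jobs of three combinations each.
marker_scores = {"rs1": 0.9, "rs2": 0.1, "rs3": 0.8, "rs4": 0.2}
print(grid_job(marker_scores, start=0, size=3, threshold=1.5))
print(grid_job(marker_scores, start=3, size=3, threshold=1.5))
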
A group in Taiwan is using EGEE to model the effects of earthquakes on urban areas in the hope of learning how to keep damage to a minimum. The project combines physical and social sciences to do something really practical. Another project, AquaMaps, is using the grid to model the worldwide distribution of fish species. Because climate change is affecting the patterns of where you might find marine species, fish stock management is quite an issue. With everything changing so rapidly, the AquaMaps project is mapping where particular species of fish can be found at any one time.
EGEE is also helping doctors to treat rare diseases through a project to create a worldwide image library. It gives them almost instant access to medical images spread around the world, but in a secure manner. The benefits of EGEE have spread beyond the hard sciences and medicine into the humanities. The multidisciplinary ASTRA team in Italy used the grid to construct a digital model of an epigonion, a harp-like instrument used in ancient Greece. The virtual instrument was played in a concert in Naples last December.
One of the original motivations for this grid activity was that all this computing power could change the way scientists do their research. We want to move away from the short-term project model that has operated within EGEE to a model which is both more sustainable financially and more sustainable and longer term for the users that increasingly depend upon this infrastructure. Newhouse likens the grid to other scientific instruments that have changed the way we look at the world: like the invention of the microscope or the telescope, the grid is changing the way scientists think about doing their research and the questions they can pose.

Nanorobotics

Nanorobotics is a technology that came into being with the advancement of nanotechnology. It is concerned with creating automatic machines, responsive devices and robots at or near the nanometer scale (10^-9 m). Nanorobotics is a major discipline in its own right, covering the engineering and design of nanorobots. Nanorobots are built from molecular components and are typically envisaged as devices ranging from 0.1 to 10 micrometers in size; with the help of nanorobotics, highly human-like non-biological robots are claimed to be achievable. At present, nanorobotics has emerging applications in the fields of medicine and technology.
Nanorobotics plays a vital role in the development of efficient robots. It uses nanoscale components and objects to build a robot's structure, and this nanoscale nature allows scientists and engineers to mimic biological systems. The most complex parts of robots can be constructed with the help of nanorobotics. The devices it describes are still largely hypothetical, and names such as nanobots, nanoids, nanites or nanomites are also used for them. Nanorobotics allows robots to interact with nanoscale objects with nanoscale precision; such robots operate at nanoscale resolution, and every part and component, from the internal chip to the external body, is configured at the atomic scale. Although this makes a robot's structure complex, it gives the device extraordinary intelligence and efficiency.
Nanorobotics has remarkable applications in science and technology. With the assistance of this diverse technology, the world can now see and use instruments that were never available before. Some of the best-known nanorobotic instruments and applications are described below.

The atomic force microscope is one instrument that can be regarded as a nanorobotic tool. It is configured and manipulated at the nanoscale and is used to view the particles of an element or material at the smallest level; in the medical sciences it is used to help diagnose cancer cells and detect harmful bacteria.

Nano-, micro- and macro-scale robots are also products of nanorobotics. These robots can move with nanoscale precision and can detect and scan objects and obstacles in their path without disturbing a single particle. Nanotechnology has delivered striking applications such as microscopic robots that automatically assemble other devices, or that travel inside the human body to deliver drugs or perform microsurgery. Such robots are fast enough to move through even highly viscous fluids in a matter of seconds.

Nanomachines are the subject of wide research these days. Researchers have developed some proof-of-concept samples; one example of these molecular machines is a sensor capable of counting particular molecules in a chemical compound. No practical medical application is yet in place, but if such machines were properly developed for medicine they could greatly help doctors to destroy cancer cells.

Another useful application is the detection of toxic chemicals and the measurement of their concentrations in the environment. Such detectors would be very useful to chemists seeking to manage and reduce the toxicity of chemicals.

A recent demonstration of nanorobotics is the single-molecule car, a vehicle with a nanoscale structure. The car was built by chemical synthesis and has buckyball wheels; it is driven by controlling the ambient temperature and by positioning the tip of a scanning tunneling microscope.

Science has also given the world a new type of robot known as the nubot, an abbreviation of "nucleic acid robot." These devices operate at the nanoscale and are highly useful for demonstrating DNA testing and blood-cell detection.

Nanophotonics

Nanophotonics is the branch of nanotechnology that studies the behavior of light and optics at the nanometer scale. It deals directly with optics and is widely used in optical engineering. The sub-wavelength interactions of light with various substances are analysed with the help of nanophotonics, and it covers the phenomena used in optical science for the development of optical devices.
A nanophotonic system works as a collection of many components, each with a different purpose, where the output of one component becomes the input of the next, forming a processing chain. The following components are directly involved in nanophotonic systems.
Components of a nanophotonic system


    * Waveguides
    * Couplers
    * Fiber-to-waveguide couplers
    * Optical switches
    * Photodetectors / solar cells
    * Electro-optic modulators
    * Wavelength-division multiplexers
    * Amplifiers
    * Lasers
    * Isolators
    * Optical circulators
    * Saturable absorbers

All of the above components work together. Couplers and fiber couplers carry the light wave to the optical switches, amplifiers act as converters and controllers, and detectors pick up the image or laser beam directly. The phenomena involved use radiation at approximate wavelengths of 300 to 1200 nanometers, from the ultraviolet into the near infrared. When light interacts with a structure, a characteristic electromagnetic field is set up, described by Maxwell's equations. In this field the nanostructures are adjusted through their topography, which means that the electromagnetic field depends entirely on the shape and size of the structures with which the light interacts.

The core concepts of nanophotonics originated in the mid-1990s, when researchers were testing nanotechnology's ability to scatter and control light. Two main concepts are involved:


    * Revealing and exploring the properties of light at the nanometer scale
    * Enhancing the power and efficiency of devices used in engineering applications

Nanophotonics also has the potential to revolutionize the industrial sector by improving existing structural components and creating new ones for science.

Because of these highly interactive properties, nanophotonics is used in many applications: scanning tunneling microscopes that capture images of elements using a laser, optical switches and electromagnetic chip circuits, transistor filaments, surface plasmons, optical microscopy, and the propagation of light from one part of a device to another for various purposes in electronic and mechanical engineering. High-power optical structures, wavelength meters, interaction chips and many other tiny devices have been developed with the help of nanophotonics. Because of its ability to work with light below the diffraction limit, it is considered one of the most advanced forms of optical nanotechnology and is expected to be of great use in the future. Nanophotonic techniques are also used with semiconducting materials and for probing the optical properties of extremely small materials such as colloidal gold particles of 10 to 100 nm, which appear red; this kind of nanogold allows free electron flow, which must be controlled to prevent excess current, and nanophotonics provides technologies for managing such electrons.

Like other nanotechnologies, nanophotonics has advantages as well as disadvantages.

    * One of its major advantages is an extremely powerful ability to interact with almost every particle relevant to optics.
    * It has broadened the diversity of applications and opened up properties of light that could previously only be imagined.
    * It uses light to its fullest in treating optical problems.
    * Nanophotonic applications are not cost-effective, which is the basic drawback of this technology.
    * Large-scale fabrication and intense light beams can be dangerous to human health, because they can penetrate the nervous system and affect the brain and spinal cord.
    * Heavy laser exposure can encourage skin diseases.

Molecular Electronics

Molecular electronics is the branch of nanotechnology that deals with the construction and application of nanoscale building blocks used in the design and manufacture of electronic circuits. It is sometimes called moletronics, and all major electronic fabrication is supported by it. The concept of molecular electronics is not new in science and technology; its core ideas originated together with those of nanoelectronics, and in 1997 Mark Reed and co-workers reported some of the first measurements of electrical conduction through a single molecule. It is an important discipline that gives us a way of dealing with the electronic characteristics of conductors, insulators and semiconductors. Molecular electronics has two sub-disciplines:

    * molecular materials for electronics
    * molecular scale electronics

Together they yield high-tech, highly intelligent devices for use in many kinds of applications. The field is progressing rapidly and is supporting nanoelectronics in the development of excellent applications.

Molecular electronics works at the smallest scale, with single molecules, their characteristics and their sub-properties. It deals with substances once considered impossible to explore within electronics, and it puts the major laws of electronics into practice so that real implementations of the theoretical framework can be seen. Many major components in chemistry, physics, biological science and electronics are gifts of molecular electronics. With this technology, molecular building blocks can be used in the intense and complex fabrication of integrated circuits, and the molecular-scale properties of individual atoms of matter can be controlled according to need. The direct measurement of material properties is one of the most vital features of molecular electronics and cannot be found in any other technology.

Molecular electronics has a wide range of applications in chemistry, physics, electronics and nanoelectronics, technology, artificial intelligence and medical equipment. Almost all of the fabricated chips in intelligent machinery used on a large scale involve molecular electronics in their construction -- for example, the resistors and transistors used in power electronics, capacitors in spacecraft, the automation circuits of robots, temperature controllers in strategic plants, and CT scanners that display infected areas of the body.
Both the STM and the AFM involve molecular electronics in their construction. In chemistry it is used to follow chemical reactions in simulated models of nuclear reactors, and also to measure the acidic and reactive properties of individual elements.

    * Molecular electronics is not at all cost-effective: complete fabricated components are expensive, and their maintenance cost is also high.
    * The technology is not easily understood; the basic concepts of nanotechnology and nanoelectronics must be studied first.
    * Error recovery is difficult: because of the high degree of integration at the smallest scale, it is hard to detect physical faults in a device.
    * Because of the high manufacturing cost, components are not readily available.
    * Specialized engineers and scientists are required to handle and control the risk factors of molecular electronics.

Keeping An Eye On Flash Floods


At the European Geosciences Union General Assembly in Vienna, Austria, researchers using grid technology will present their work to reduce the hazards of flash floods. After the extreme European floods of 2002, which heavily affected southern France, the French government reformed and consolidated its flood warning systems. Now the European project CYber-Infrastructure for CiviL protection Operative ProcedureS (CYCLOPS) is using the Enabling Grids for E-sciencE (EGEE) grid infrastructure to model flooding, helping forecasters and authorities make decisions in emergency situations.
The framework provided by CYCLOPS has been used to create a grid-powered flood forecasting platform called G-ALHTAIR. By combining data of many types and from many sources, the software allows researchers to examine possible future flooding. Instead of running each scenario separately on their own personal computers, they can use the resources provided by the grid to examine up to 500 different hydrological situations simultaneously and study the effect of various conditions on the potential flooding.
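
As a rough illustration of the approach, the Python sketch below runs many independent scenario simulations in parallel on a local worker pool, which is conceptually what the grid does at a much larger scale; simulate_scenario() is an assumed placeholder, not the ALHTAIR/G-ALHTAIR model.

# Sketch only: running many independent hydrological scenarios in
# parallel, as a grid or local worker pool would.
from concurrent.futures import ProcessPoolExecutor

def simulate_scenario(params):
    """Placeholder rainfall-runoff run: returns an assumed peak discharge."""
    rainfall_mm, soil_saturation = params
    return {"params": params,
            "peak_discharge_m3s": 0.8 * rainfall_mm * (0.5 + soil_saturation)}

if __name__ == "__main__":
    # A batch of scenarios: combinations of forecast rainfall and initial soil state.
    scenarios = [(rain, sat) for rain in range(50, 300, 10)
                 for sat in (0.2, 0.4, 0.6, 0.8)]
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(simulate_scenario, scenarios))
    worst = max(results, key=lambda r: r["peak_discharge_m3s"])
    print("Worst case:", worst)
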
Currently the work is focusing on the Grand Delta region, the area around the Rhone in southern France. However, the team, which is running a demonstration of G-ALHTAIR on the EGEE booth at the conference, is confident that the technology could be used for any area under threat from flooding.
This platform could be tested in operational situations in the Grand Delta flood forecasting service and then extended to the other French flood forecasting services managing other kinds of flooding, such as plain floods. It would then be possible to integrate more sophisticated meteorological forecasting and incorporate the system into a fully integrated decision-making architecture.

Wireless Solutions Needed in Emergency Situations


Recent emergency situations that have arisen in the UK, including severe flooding, extreme weather and even terrorist attacks, have highlighted repeatedly just how vulnerable some sections of society can be in such circumstances. UK researchers, writing in the International Journal of Emergency Management, suggest that wireless technology could hold the key to remedying this problem.
The public, including elderly and vulnerable people, were at risk as a result of two types of communications difficulties during these events; apart from general broadcast media, many people only received communications from rescuers on the ground. The researchers have now surveyed the currently available technologies for emergency communication in the UK and assessed them with respect to three aspects of their use: whether and to what degree the technology is suitable for broadcast or point-to-point communications; whether the technology is based on wireless or fixed wired networks; and the timeline of the emergency, from the initial alert, through emergency response communication requirements, to information and communication provision for those immediately involved and finally for the general public.
The researchers explain that there are high-resilience mobile communication networks and devices in place that use satellite and secure radio networks, which can be used during major emergencies. These systems improve civilian access to mobile technology during emergencies, which they explain is critical for allowing people to contact and help family and friends. In addition, the Mobile Telecommunication Privileged Access Scheme allows the emergency services to use mobile phones without their connectivity being degraded by the huge volumes of calls made by the public during such periods. They also point out that "emergency aware" communications technology is coming online that can avoid overload and allow emergency management and advice to be delivered to the public.
The team points out that the effectiveness of any communication technology for informing and alerting the public during emergencies depends to some extent on the system's ability to resist disruption due to loss of power, extreme weather and other catastrophic events. It must also incorporate inexpensive and widely available devices that can be used by vulnerable and aging individuals regardless of perceptual, cognitive or physical impairments. At one time, traditional broadcast networks -- radio and TV -- were adequate for alert services and information dissemination, but they obviously do not allow communication between individuals. Modern mobile devices provide both a challenge and an opportunity, the team says: programmable mobile technologies might prove increasingly resilient in emergencies and could be the most accessible platform for the majority of people, including those in vulnerable groups.

A 'P4P' System For Efficient Internet Usage


A Yale research team has engineered a system with the potential for making the Internet work more efficiently, in which Internet Service Providers (ISPs) and Peer-to-Peer (P2P) software providers can work cooperatively to deliver data.
The way people use the Internet has changed significantly over the past 10 years, making computers seem to run less efficiently and putting strain on the available bandwidth for transmitting data. Since 1998, the percentage of Internet traffic devoted to the download and upload of large blocks of information using P2P software has increased from less than 10 percent to greater than 70 percent in many networks. By contrast, Web browsing now accounts for 20 percent and e-mail less than 5 percent of total Internet traffic, down from 60 and 10 percent respectively in 1998.
P4P will both reduce the cost to ISPs and improve the performance of P2P applications, according to a paper to be presented at ACM SIGCOMM 2008, a premier computer networking conference, in August 2008 in Seattle. The current P2P information exchange schemes are "network-oblivious" and use intricate protocols for tapping the bandwidth of participating users to help move data. The existing schemes are often both inefficient and costly -- like dialing long-distance to call your neighbor, with both of you paying for the call. The Yale team has played many roles in this project, ranging from naming and analyzing the architecture to testing and implementing some key components of the system.
Right now the ISPs and P2P companies are dancing with the problem -- but stepping on each other's toes. Our objective is to have an open architecture that any ISP and any P2P can participate in. Yale has facilitated this project behind the scenes and without direct financial interest through a working group called P4P that was formed in July 2007 to prompt collaboration on the project. The working group is hosted by DCIA [Distributed Computing Industry Association] and led by working group co-chairs Doug Pasko from Verizon, and Laird Popkin from Pando. Currently, the group has more than 50 participating organizations.
The P4P architecture extends the Internet architecture by providing servers, called iTrackers, to each ISP. The servers provide portals to the operation of ISP networks. The new P4P architecture can operate in multiple modes. In a simple mode, the ISPs will reveal their network status so that P2P applications can avoid hot-spots. In another mode, P4P will operate much like a stock or commodities exchange -- it will let markets and providers interact freely to create the most efficient information and cost flow, so costs of operation drop and access to individual sites is less likely to overload.
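
The Python sketch below illustrates the kind of locality-aware peer ranking this enables: peers whose ISP-reported "distance" from the requesting host is low are preferred. The data format shown for the iTracker response is an assumption made for illustration, not the actual P4P message format.

# Sketch only: locality-aware peer selection in the spirit of P4P.
# The iTracker "distance" values below are an assumed format, not the
# real protocol messages.
def rank_peers(candidate_peers, itracker_distance, my_pid):
    """Order candidate peers so that low ISP-reported cost comes first.

    candidate_peers: list of (peer_id, pid) tuples, where pid is the
                     network partition the ISP assigns the peer to.
    itracker_distance: dict mapping (my_pid, peer_pid) -> relative cost.
    """
    def cost(peer):
        _, pid = peer
        return itracker_distance.get((my_pid, pid), float("inf"))
    return sorted(candidate_peers, key=cost)

# Example: prefer peers inside my own ISP region ("pid-1").
distance = {("pid-1", "pid-1"): 1, ("pid-1", "pid-2"): 10, ("pid-1", "pid-3"): 50}
peers = [("peerA", "pid-3"), ("peerB", "pid-1"), ("peerC", "pid-2")]
print(rank_peers(peers, distance, "pid-1"))  # peerB first, then peerC, then peerA
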
While ISPs like AT&T, Comcast, Telefónica and Verizon and P2P software companies like Pando each maintain their independence, the value of the P4P architecture is significant, as demonstrated in recent field tests. For example, in a field test conducted using the Pando software in March 2008, P4P reduced inter-ISP traffic by an average of 34 percent and increased delivery speeds to end users by up to 235 percent across US networks and up to 898 percent across international networks.

Mobile Peer-to-Peer Applications


Mobile peer-to-peer (P2P) applications allow a team or group to create new levels of ad hoc co-operation and collaboration around a specific, real-time goal. But developing compelling and secure applications is a challenge. Now a platform developed by European researchers rises to that challenge.
Many business sectors could benefit enormously from secure P2P mobile communications, but developing applications tailored to specific needs is expensive, time-consuming and not necessarily reliable. Security, in particular, is a difficult issue to resolve. But now researchers at the EU-funded PEPERS project are putting the final touches to a mobile P2P development platform for secure applications. The platform could mean a rapid rise in the number of secure, industry-specific P2P mobile applications, helping to increase the competitiveness of European business.
P2P applications allow decentralised companies to more effectively manage a dispersed and highly mobile workforce. Journalists will be able to work more collaboratively on breaking news, and security guards will coordinate responses to situations, increasing security and personal safety. P2P applications are a powerful innovation enabled by the internet. Essentially, P2P allows individuals to connect and work together, rather than having to go through a central communications unit first. P2P can allow thousands of people to collaborate around a specific long-term or ad hoc goal. The technology gave rise to Wikipedia, the online encyclopaedia written by thousands of volunteers. It enabled the creation of Digg, StumbleUpon and del.icio.us, all phenomenally successful bookmarking services.
But these are all desktop examples. By creating a platform to develop secure mobile P2P applications easily, the PEPERS team helped to move P2P from the virtual to the real. The goal of the project was to enable the development of secure mobile P2P applications. So we created a platform to help develop, and run, secure applications on mobile devices. It means people can set up an ad hoc group to tackle an emerging task. The platform can be used to create secure, mobile, seamless P2P applications for specific domains.
Security guards, for example, transfer large sums of cash or patrol a client’s premises. They need to keep in touch and organise themselves on-the-fly to complete their day-to-day tasks. Currently coordination is handled by a central dispatch, but that solution is beset with problems. Firstly, it takes time to query dispatch, and more time for them to send another guard who might be only a couple of dozen metres away. And if dispatch is fielding queries from dozens of personnel, scattered around a city, the process takes longer, and the risk of error increases. With an application developed through the PEPERS platform, guards can communicate directly, quickly and securely.
As a first step, the PEPERS team only set out to prove that a platform supporting the development of secure mobile P2P applications was feasible. Making a secure system on a lightweight platform like a mobile device was a challenge. We had to optimise the software for mobile devices. The project chose to develop the platform on mobile devices based on the well-established, open-source Symbian operating system for mobile use. The upshot is that secure and customised mobile P2P applications can be easily developed for specific business goals.

Smartphones


Applications dominate today's smartphone market. In the future, internet-capable televisions, tablet and desktop PCs, and cars will all run apps, which can, for example, help plan and book a ski trip. To allow web applications to run on all four screens, the webinos project consortium is creating a complete open source code base. The targeted technology will allow different devices and applications to work together securely, seamlessly and interoperably. Most recently, the consortium has summarised its first research results in four reports covering use cases, security, technical requirements and the industry landscape.
The webinos project brings together major international companies and research institutions, including Fraunhofer FOKUS, Technische Universität München and Oxford University, together with electronics companies such as Sony Ericsson and Samsung, carmaker BMW, the standardization body W3C and telecommunication operators like Deutsche Telekom. webinos is funded by a ten-million-euro grant from the European Union.
The result of the webinos project is an open source platform that allows the safe exchange of data services across multiple devices. The tools provided through the platform will enable software designers from any industry to create web applications and services that can be used and shared over a broad spectrum of converged and connected devices -- regardless of their respective hardware specifications and operating systems.
It is already clear that the opportunities for new business models are endless, especially as the webinos project is targeting a standardised interactivity platform for software and application developers to design secure, personalised and innovative apps. The technology allows web applications to be accessed simultaneously on our television, our computer, our games console or music player and our car. Small and medium-sized enterprises would also benefit from an uncomplicated and low-cost 'foot in the door' to the world of apps: they do not need to go to the expense of purchasing or developing the software themselves if they want to tailor services to a wide variety of terminals.
Privacy and security concerns are at the heart of webinos. It allows barrier-free usability, but only with user permission. Embedded in the open source platform are features that put sensitive data and functions under explicit user control, rather than leaving them to the questionable trustworthiness of a cloud. User expectations on security and privacy are covered by one of the published reports.

Microarray Technology

DNA microarray analysis is one of the fastest-growing new technologies in the field of genetic research. Scientists are using DNA microarrays to investigate everything from cancer to pest control. Now you can do your own DNA microarray experiment! Here you will use a DNA microarray to investigate the differences between a healthy cell and a cancer cell.

Genomics refers to the comprehensive study of genes and their function. Recent advances in bioinformatics and high-throughput technologies such as microarray analysis are bringing about a revolution in our understanding of the molecular mechanisms underlying normal and dysfunctional biological processes. Microarray studies and other genomic techniques are also stimulating the discovery of new targets for the treatment of disease, which is aiding drug development, immunotherapeutics and gene therapy. In this site, we have compiled an extensive list of resources to assist researchers interested in establishing a microarray platform and performing expression profiling experiments.

Gene expression profiling or microarray analysis has enabled the measurement of thousands of genes in a single RNA sample. A variety of microarray platforms have been developed to accomplish this, and the basic idea for each is simple: a glass slide or membrane is spotted or "arrayed" with DNA fragments or oligonucleotides that represent specific gene coding regions. Purified RNA is then fluorescently or radioactively labeled and hybridized to the slide or membrane. In some cases, hybridization is done simultaneously with a reference RNA to facilitate comparison of data across multiple experiments. After thorough washing, the raw data are obtained by laser scanning or autoradiographic imaging. At this point, the data may be entered into a database and analyzed by a number of statistical methods.
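
As a toy illustration of that analysis step, the Python sketch below computes per-gene log2 ratios of test versus reference signal and flags genes beyond a two-fold change; the intensities and the cut-off are invented example values, not data from a real experiment.

# Sketch only: the simplest form of two-channel microarray analysis,
# a log2 ratio of test vs. reference signal per gene.
import math

test_signal = {"TP53": 410.0, "MYC": 2600.0, "GAPDH": 1500.0}
reference_signal = {"TP53": 820.0, "MYC": 300.0, "GAPDH": 1450.0}

def log2_ratios(test, reference):
    return {g: math.log2(test[g] / reference[g]) for g in test if g in reference}

ratios = log2_ratios(test_signal, reference_signal)
changed = {g: r for g, r in ratios.items() if abs(r) >= 1.0}  # >= 2-fold change
print(ratios)
print("Differentially expressed (illustrative cut-off):", changed)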

A number of issues must be addressed before establishing a microarray platform and beginning expression profiling studies, in particular the overall cost. For a cDNA microarray platform, one must purchase a clone set, robot, printing pins and the reagents needed for DNA amplification and purification. The cost of these materials can vary significantly, but one can expect to need at least $100,000 to establish such a platform. However, once the process of printing and hybridizing microarrays has been optimized, the cost per experiment will fall dramatically. Thus, one must decide if the number of planned experiments is enough to warrant the time and cost of establishing a microarray platform. If not, it may be more prudent to seek the services of an academic microarray core facility or a commercial entity.
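
That decision can be framed as simple break-even arithmetic, sketched below in Python; the $100,000 setup figure comes from the paragraph above, while the per-experiment costs are assumed placeholders for illustration.

# Sketch only: break-even arithmetic for the in-house vs. core-facility decision.
import math

setup_cost = 100_000                # from the text: cost to establish a cDNA platform
in_house_per_experiment = 150       # assumed consumable cost per in-house array
core_facility_per_experiment = 600  # assumed fee per array at a core/commercial facility

def break_even_experiments(setup, in_house, outsourced):
    """Experiments needed before the in-house platform becomes cheaper overall."""
    return math.ceil(setup / (outsourced - in_house))

n = break_even_experiments(setup_cost, in_house_per_experiment, core_facility_per_experiment)
print(f"In-house pays off after about {n} experiments under these assumptions.")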


Silicon Nanowires



Scientists at the National Institute of Standards and Technology (NIST), along with colleagues at George Mason University and Kwangwoon University in Korea, have fabricated a memory device that combines silicon nanowires with a more traditional type of data-storage. Their hybrid structure may be more reliable than other nanowire-based memory devices recently built and more easily integrated into commercial applications.
As reported in a recent paper,* the device is a type of "non-volatile" memory, meaning stored information is not lost when the device is without power. So-called "flash" memory (used in digital camera memory cards, USB memory sticks, etc.) is a well-known example of electronic non-volatile memory. In this new device, nanowires are integrated with a higher-end type of non-volatile memory that is similar to flash, a layered structure known as semiconductor-oxide-nitride-oxide-semiconductor (SONOS) technology. The nanowires are positioned using a hands-off self-alignment technique, which could allow the production cost--and therefore the overall cost--of large-scale viable devices to be lower than flash memory cards, which require more complicated fabrication methods.
The researchers grew the nanowires onto a layered oxide-nitride-oxide substrate. Applying a positive voltage across the wires causes electrons in the wires to tunnel down into the substrate, charging it. A negative voltage causes the electrons to tunnel back up into the wires. This process is the key to the device's memory function: when fully charged, each nanowire device stores a single bit of information, either a "0" or a "1" depending on the position of the electrons. When no voltage is present, the stored information can be read.
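
The write/erase/read behaviour described above can be summarised in a toy Python model; the voltage values are arbitrary illustrative choices and this is in no way a physical model of the NIST device.

# Sketch only: a toy behavioural model of the write/erase/read cycle
# described above. Not a physical device model.
class NanowireMemoryCell:
    def __init__(self):
        self.bit = 0  # charge state of the storage layer: 0 or 1

    def apply_voltage(self, volts):
        if volts > 0:    # positive bias: electrons tunnel into the substrate -> write "1"
            self.bit = 1
        elif volts < 0:  # negative bias: electrons tunnel back into the wire -> erase to "0"
            self.bit = 0
        # volts == 0: no change; the cell is non-volatile

    def read(self):
        """Read is non-destructive and needs no write/erase voltage."""
        return self.bit

cell = NanowireMemoryCell()
cell.apply_voltage(+5)
print(cell.read())   # 1
cell.apply_voltage(-5)
print(cell.read())   # 0
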
The device combines the excellent electronic properties of nanowires with established technology, and thus has several characteristics that make it very promising for applications in non-volatile memory. For example, it has simple read, write, and erase capabilities. It boasts a large memory window--the voltage range over which it stores information--which indicates good memory retention and a high resistance to disturbances from outside voltages. The device also has a large on/off current ratio, a property that allows the circuit to clearly distinguish between the "0" and "1" states.
Two advantages the NIST design may hold over alternative proposals for nanowire-based memory devices, the researchers say, are better stability at higher temperatures and easier integration into existing chip fabrication technology.

Computer Memory In Artificial Atoms


Three University of Copenhagen nano-physicists have made a discovery that could change the way data is stored on computers. In the future it will be possible to store data much faster, and with more accuracy. This discovery has been published in the journal Nature Physics.
A computer has two equally important elements: computing power and memory. Traditionally, scientists have developed these two elements in parallel. Computer memory is constructed from magnetic components, while the media of computing is electrical signals. The discovery of the scientists at Nano-Science Center and the Niels Bohr Institute, Jonas Hauptmann, Jens Paaske and Poul Erik Lindelof, is a step on the way towards a new means of data-storage, in which electricity and magnetism are combined in a new transistor concept.
They are the first to obtain direct electrical control of the smallest magnets in nature, one single electron spin. This has vast perspectives in the long run. In our experiments, we use carbon nanotubes as transistors. We have placed the nanotubes between magnetic electrodes and we have shown that the direction of a single electron spin caught on the nanotube can be controlled directly by an electric potential. One can picture this single electron spin caught on the nanotube as an artificial atom.
Direct electrical control over a single electron spin has been acknowledged as a theoretical possibility for several years. Nevertheless, in spite of many zealous attempts worldwide, it is only now with this experiment that the mechanism has been demonstrated in practice.
Transistors are important components in every electronic device. We work with a completely new transistor concept, in which a carbon nanotube or a single organic molecule takes the place of the traditional semi-conductor transistor. Our discovery shows that the new transistor can function as a magnetic memory.

iuvo Microconduit Arrays

iuvo Microconduit Arrays enable the use of highly miniaturized, advanced cell models and functional assays with automated liquid handling and HCA platforms, with the goal of more accurately replicating in vivo processes in drug discovery. Instead of the "buckets" of various sizes used for cell culture in multiwell plates, iuvo plates have cell culture compartments with geometries designed specifically to support the biology and/or functions of interest.

iuvo plates comply with SBS standards, and the liquid in the microchannels can be displaced with passive pumping as often as necessary using standard automated liquid dispensing equipment. No specialized instrumentation is required to carry out assays or acquire the data. Three-dimensional cell culture becomes easy to carry out, and the entire compartment is accessible to microscope-based HCA instruments as well as plate scanners. The microchannel environment reduces dilution of secreted signaling molecules, and the media can be sampled for analysis of secreted factors.
Surface tension leads to pressure inside drops. All things being equal, the pressure is higher inside smaller drops (~1/r). When two drops are connected by a channel, the pressure is equilibrated by flow: the smaller drop flows into the channel, replacing the channel contents.
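
Numerically, this is the Laplace pressure of a spherical drop, roughly 2γ/r above ambient, sketched below in Python with an assumed surface tension for water and example drop radii.

# Sketch only: the Laplace-pressure relation behind passive pumping.
# The smaller drop sits at higher pressure and empties into the channel.
# Surface tension and radii below are assumed example values.
GAMMA_WATER = 0.072  # N/m, approximate surface tension of water at room temperature

def laplace_pressure(radius_m, gamma=GAMMA_WATER):
    return 2 * gamma / radius_m  # Pa above ambient

small_drop = laplace_pressure(0.5e-3)  # 0.5 mm radius inlet drop
large_drop = laplace_pressure(2.0e-3)  # 2.0 mm radius outlet drop
print(f"small drop: {small_drop:.0f} Pa, large drop: {large_drop:.0f} Pa")
print(f"driving pressure difference: {small_drop - large_drop:.0f} Pa")
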
The importance of 3D extracellular matrix for cellular function and behavior is clear from studies in differentiation, tissue engineering and even molecular pharmacology. BellBrook Labs is leveraging its unique iuvo™ Microconduit Array technology to provide high content cellular assays in 3D extracellular matrix. Let BellBrook Labs bring its 3D expertise and experience to your research!

    * Screening/Profiling in 3D Biological Matrix - Replicates the in vivo microenvironment and signaling pathways.
    * Quantitative Compound Data - Quantitative potency data including IC50 and magnitude of effect. In situ molecular staining allows pathway and mechanism analysis. Final reports include raw image files, dose response curves, and IC50 values.
    * Multiparametric High Content Platform - Allows a variety of assays, including viability, cell cycle, apoptosis, and migration/invasion.
    * Rapid Turn-Around Time - Data delivered within three weeks.

Invasion of tumor cells through ECM (extracellular matrix) is one of the hallmarks of cancer, yet the process is poorly modeled in vitro, especially in an automated format. We are now offering compound profiling services for information-rich, 3D tumor cell invasion assays using our novel iuvo Microchannel 5250 array platform. Designed as an improvement over Boyden chamber assays, our fully automated invasion assay uses microscopic imaging of cell movement through horizontally-oriented, collagen-filled microchannels to provide quantitative data on cell motility. The microchannels facilitate post-assay staining, which dramatically increases the information content of the assays.