As technology evolves and becomes a more integral part of financial services, the amount of data available is growing significantly, and this has already begun to benefit the niche companies that have gotten on board. There are now many opportunities for using big data in a variety of areas, particularly risk management. Thus far, big data has already contributed a great deal to the development of risk management solutions, particularly in the world of finance. However, what has already been uncovered is only the tip of the iceberg; the many potential applications of big data for risk management are only now beginning to be fully realized.
Big Data Definition
Big data is defined as large amounts of data, as well as the use of this data in making decisions and managing processes. Financial institutions have long used internal data to accomplish this, but more and more external data is being used to manage financial risk. External data may be delivered to a financial institution already prepared and organized, or the institution may collect the data itself, for example, through social media.
The Potential of Big Data
Big data is capable of enhancing the quality of risk management models because of the increasing availability and diversity of statistics. Big data can now be used to simulate a variety of scenarios in order to reveal potential risks, leading to faster reactions to developments within the market. Big data also allows financial institutions to identify fraud more quickly and precisely by comparing internal and external data.
Beyond risk management, big data can be applied to other areas in the world of finance. For example, big data provides far more ability to customize products through the use of location data, click streams, and social media analysis. This will enable financial institutions to better retain their client base, as well as attract new clientele. It is also worth noting that using algorithms and big data to anticipate market movements has great potential.
Current Big Data Success Stories
Financial institutions are already in the process of implementing big data measures. One example of this can be seen in Singapore's UOB bank, which recently tested a risk management system that used big data to streamline the calculation of total-bank risk, reducing calculation time from 18 hours to mere minutes. In the future, stress tests could be carried out in real time, allowing quicker reactions to new risks. Another example of the successful implementation of big data at a financial institution is Morgan Stanley. The bank launched a big data program to improve its analysis of its portfolio's size and results. With technology that enables pattern recognition, this analysis is set to improve risk management significantly.
Kreditech, a German business that offers credit scores for individuals, has also implemented a successful big data project. They use location data, social media analysis, online purchasing behavior, and web analysis for risk management. Similarly, American company Kabbage uses big data to analyze the risks of providing loans to corporate customers, using external data from social media and online delivery services. Finally, Paymint, a U.S. company that specializes in compliance, combats credit card fraud using big data analysis to recognize fraud patterns. Their big data software analyzes millions of transactions each month to manage risk.
How to Successfully Use Big Data
To successfully use big data for risk management, it's prudent to implement a structured, evolutionary approach in order to accommodate the broad scope of big data. First, internal data should be collected and used. After that, you will have a clearer idea of which data sources will benefit you, so you can turn to external sources, provided the benefit outweighs the cost. More important than the amount of data is an integrated process of analysis. Taking small steps towards implementing big data programs allows you to identify any weaknesses or areas of risk.
The biggest obstacle to using big data is data protection, according to a recent study from the Fraunhofer Institute. Other obstacles include lack of knowledge, budgetary restrictions, lack of access to expertise, and the prioritization of other programs. There are also smaller technical problems, such as immature technology, but these are much easier to fix. The importance of big data in risk management will be fully realized once these hurdles are addressed by a skilled financial data analytics company.
In conclusion, the success of several financial service providers exemplifies the benefits big data can provide to risk management, but the full extent of how powerful this can be has not yet been realized. In order to get the full range of advantages big data analysis can provide you, it is wise to contract a financial data analytics company for assistance. At Meraglim™, we have both the expertise and risk assessment software necessary to provide comprehensive data analytics to manage risk. If you have questions about how Meraglim™ could help you, contact us today.
Despite the rapid evolution of technology over the last few decades, you may be surprised to learn that the computers we use today are not so different from the first computer built back in 1941. Your computer, while much smaller and faster than its 30-ton predecessors, performs fundamentally the same task: interpreting binary code to produce a computational result.1 Binary code is composed of "bits," which are the smallest units of computer data and are represented as either zero or one. Whatever task your computer performs, it does so by processing a series of zeros and ones through an algorithm, which produces a new set of zeros and ones. While traditional computing works for now, we are fast approaching a point where the transistors of computers will be as small as atoms.2 If computers are to continue to become smaller and more powerful as they have been, new methods of computing will need to be developed. That's where quantum computing comes in.
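As a toy illustration of that point (the values are hypothetical and not tied to any particular machine), here is how ordinary numbers live as bit patterns, with a computation simply mapping one set of bits to another:

```python
# Toy illustration: classical data is just bit patterns.
x = 13  # stored internally as the bits 1101
y = 6   # stored internally as the bits 0110

print(format(x, "04b"))  # -> 1101
print(format(y, "04b"))  # -> 0110

# Any computation ultimately maps one set of bits to another:
result = x + y
print(format(result, "05b"))  # -> 10011 (decimal 19)
```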
What is Quantum Computing?
It may sound like something out of science fiction, but quantum computing is a reality today. In essence, quantum computing is the application of quantum mechanics to information processing.3 Where a traditional computer uses bits, a quantum computer uses quantum bits, or qubits. Just like bits, qubits encode zeros and ones; however, using the power of quantum physics, qubits can be zero, one, or both at the same time.4 This might be difficult to conceptualize in a world dictated by classical physics, but the world of quantum physics, which deals specifically with things on the atomic scale, opens up more possibilities. Qubits are able to behave this way due to the phenomena of superposition and entanglement.
Superposition refers to a quantum system's ability to be in multiple states at the same time, meaning that it can be both "up" and "down," "here" and "there," simultaneously.5 Entanglement refers to a correlation between two quantum particles so strong that even when they are far apart, they remain perfectly in unison. Due to superposition and entanglement, quantum computers are able to process an incredible number of calculations at the same time. A traditional computer is limited to working with definite ones and zeros; because a quantum computer can also work with superpositions of ones and zeros, calculations once considered impossible can now be completed efficiently, all while expending much less energy. This is why quantum computing is so important for the future of technology. As computer processors get smaller and smaller while the amount of information they must compute gets bigger and bigger, they will eventually hit a physical limit. Because quantum computers can calculate at a much quicker rate, they show promise in addressing this problem.
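In the standard notation of quantum mechanics (a general illustration, not drawn from this article's sources), a single qubit's state is a weighted blend of both classical values, and a register of n qubits holds a superposition over all 2^n bit strings at once:

```latex
% A qubit is a superposition of the two classical bit values:
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
  \qquad |\alpha|^2 + |\beta|^2 = 1.
% An n-qubit register superposes every n-bit string simultaneously:
|\Psi\rangle = \sum_{x=0}^{2^n - 1} c_x\,|x\rangle,
  \qquad \sum_x |c_x|^2 = 1.
```

Measurement collapses the register to a single bit string, which is why quantum algorithms must be designed so that the desired answers interfere constructively before anything is read out.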
History of Quantum Computing
The idea of quantum computing was first introduced in 1982 by Richard Feynman, a physicist and Nobel Prize winner.6 During this time, physicists and computer scientists were exploring the concept of a computer based on quantum mechanics, but it was Feynman who first presented an abstract model demonstrating how quantum theory could be applied to a computation system. This meant that a physicist could carry out quantum physics experiments through a computer. In 1985, physicist David Deutsch built upon Feynman's model and published a paper introducing the concept of a quantum computer that could be used outside the world of physics; namely, as a replacement for traditional computers.7 After publication, there was a lot of buzz around this concept, and many possible applications for a quantum computer were brainstormed. However, none of the concepts really took off until 1994, when Peter Shor created a method of solving an infamous problem in number theory, factorization, using a quantum computer. This brought to light how one could factor large numbers at a much quicker pace than a traditional computer allows, and it sparked interest in quantum computers outside of the scientific community.
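Shor's key insight was that factoring reduces to finding the period of modular exponentiation; the quantum computer handles the period finding, and the rest is ordinary arithmetic. Here is a hedged sketch of that classical wrap-around, using the textbook example N = 15 (the period search is brute-forced below, which is exactly the step a quantum computer would accelerate):

```python
from math import gcd

# Classical post-processing in Shor's algorithm. The quantum speedup lies
# in the period finding itself; here we find the period by brute force.
def find_period(a, N):
    # Smallest r > 0 with a^r = 1 (mod N).
    r = 1
    while pow(a, r, N) != 1:
        r += 1
    return r

N = 15  # number to factor
a = 7   # randomly chosen base, coprime to N

r = find_period(a, N)  # r = 4 for this example
assert r % 2 == 0      # Shor's method needs an even period

# gcd(a^(r/2) ± 1, N) then yields the nontrivial factors.
p = gcd(pow(a, r // 2) - 1, N)
q = gcd(pow(a, r // 2) + 1, N)
print(p, q)  # -> 3 5
```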
In the two decades since the publication of this paper, significant advancements have been made in the field of quantum computing. Most significantly, the first functional quantum computer was built in 2007 by D-Wave Systems.8 Their initial model was 28 qubits. Since that time, they have doubled the number of qubits in their models every year, and in January of this year, they released the first commercially available quantum computer, composed of 2,000 qubits.9
Next Steps and Obstacles
While quantum computing shows promise to revolutionize computers as we know them, there are several obstacles in the way of widespread commercialization. First of all, in order to program a quantum computer, one must have extensive knowledge of quantum physics. D-Wave is working on this issue with the introduction of new software called Qbsolv, which is set to allow developers to program quantum computers without knowledge of quantum physics.10 Of course, the issue is that there are so few quantum computers in the world that there is little opportunity to develop the necessary skills to program them. While there are simulators you can download onto your computer to test out the D-Wave software, it is not quite the same as running it on a real quantum computer.
Additionally, quantum computing may be more susceptible to errors than traditional computing. Qubits can be affected by a variety of factors, such as heat, noise, and electromagnetism.11 While IBM is currently researching a promising approach to detecting errors, this is only one problem facing quantum computing. Another is the issue of coherence.12 Coherence is a measure of a qubit's quality: how long it maintains its quantum properties. Qubits must maintain these properties for an extended period of time for the quantum computer to function, so the next step for many researchers will be to enhance the coherence of qubits.
The worlds of defense and intelligence have long been interested in quantum computing. One of the most important things quantum computing has to offer the military is the speed and types of calculations quantum computers can perform.13 Given their ability to process data at a much quicker pace than traditional computers, processes that require sifting through large amounts of data could be streamlined using quantum computing. This would allow the military to be much more efficient, optimizing defense logistics such as route planning. Additionally, as software becomes more and more integral to operating many weapons, any technology that makes this process more efficient and effective is appealing.14
Additionally, this technology has major implications for decrypting communications.15 Currently, encryption depends on the inability of attackers to solve long encryption keys. With quantum computing, however, a process that was previously impossible could be completed within minutes. There is therefore a race between nations to develop the research necessary to gain a leg up on spies and hackers with quantum computing.16 Whoever establishes this technology first will have a large strategic advantage, able to develop encryption methods that render them "unhackable." Currently, the U.S. Army, Navy, and Air Force are working together to establish a quantum communication network.
As stated above, the first commercially available quantum computer was released earlier this year by D-Wave Systems. Other companies working on producing quantum computers include 1QBit, Optalysys, Quantum Biosystems, and MagiQ.17 Though only D-Wave has a commercially available quantum computer at the moment, it is safe to wager that with time, quantum computers will saturate the market. Beyond computers themselves, other commercial applications for quantum computing are on the horizon. Essentially any optimization problem, meaning any case where you are trying to find the best possible option within your parameters, can be aided by quantum computing.18 For consumers, this means that features that would be too complicated for a traditional computer to handle could be available within the next few decades. Most intriguing to many consumers will probably be the way this science will change their smartphones. On a smaller scale, quantum computing is set to optimize the apps on our phones that tell us the weather and the best route to take to work, as our devices will be better able to analyze data.19 Additionally, the encrypted communication methods the military develops will likely create the most secure ways to communicate and perform transactions over the internet, transforming the financial services industry. Quantum computing is also set to take machine learning to a whole new level, meaning that soon, our computers could identify what is in an image, learn more about our habits as individuals, and even develop intuition.20 On top of all this, quantum computing is set to transform research as we know it by enabling scientists to process data at lightning speed.21 If this proves true, the possibilities for new medical advancements and technological developments are endless.
For now, quantum computing remains accessible to only a small number of people on earth. Yet if the true potential of this science is harnessed, quantum computing could be behind every major technological advancement of the future. And given how quickly D-Wave managed to turn this theory into reality, that future could be sooner than we think.
Want to learn more about quantum computing? Check out the links below.
Quantum Computing – Stanford Encyclopedia of Philosophy
How Does a Quantum Computer Work? – Veritasium
How to Fight a War With a Quantum Computer – The National Interest
How Quantum Computing Will Change the World – Forbes
How Quantum Computing Could Help Mankind – Bloomberg TV
By now, you have likely been wowed by the incredible technology of 3D printing. Currently, 3D printing is one of the most popular areas for technology research, as the industrial applications are abundant. Not only that, 3D printers have saturated the market and are becoming increasingly affordable and available to the public; you may even know someone with a 3D printer in their home. With a 3D printer, one can turn a digital file into a three-dimensional object before their very eyes, which seems to offer endless possibilities.1 It may seem like 3D printing, also known as additive manufacturing, has only just been invented, but in actuality, it has been 30 years in the making.2 As 3D printing has turned a corner, a new technology that everyone is talking about has arrived: 4D printing.
What is 4D Printing?
4D Printing uses 3D-printing technology and takes it to the next level. You could think of 4D printing as adding a fourth dimension to 3D printing: time. Essentially, 4D printing creates a three-dimensional object that changes according to its environment.3 4D printing uses geometric code so that the printed object can transform by itself.4 These “smart objects” can assemble themselves or change shape according to their environment. This exciting new technology has caught the attention of a variety of industries due to its many potential uses.
4D printing is a new technology that has only been in development since 2013.5 However, 3D printing, the predecessor on which the technology depends, has been evolving over the last 30 years. It may seem as though 3D printing is newer than that because there have been so many recent innovations, but it all began in the 1980s with Charles Hull, co-founder of 3D Systems.6 In 1986, he patented stereolithography, a process that used digital data to create a three-dimensional model.7 In 1992, 3D Systems created the first machine to perform this technique, called a stereolithographic apparatus (SLA) machine. Meanwhile, the two other main 3D printing technologies were being invented. In 1988, Carl Deckard of the University of Texas patented SLS technology, which 3D prints by using a laser to fuse together powder grains.8 That same year, Scott Crump, co-founder of Stratasys, patented the Fused Deposition Modelling (FDM) method of 3D printing, the most commonly used today.9 In Europe, Hans Langer founded EOS GmbH, which created the first "Stereos" system and offered the first production applications for 3D printing.
In the 1990s, the world of 3D printing expanded, with new leaders emerging alongside new technologies. In 1992, Stratasys patented FDM, leading others to develop new ways to 3D print. Tools for 3D printing became more widely available, facilitated in part by Sanders Prototype (now Solidscape), one of the first companies to offer tools specifically designed for additive manufacturing.10 The '90s also saw incredible new applications for 3D printing in the medical field; the first lab-grown organ was engineered at the Wake Forest Institute for Regenerative Medicine, opening the door for a 3D printed prosthetic leg, mini-kidney, and blood vessels.11
In 2004, the first self-replicating 3D printer was created. This enabled the mass production of these machines, and now people could have them in their homes. In 2005, the first color 3D printer was released by ZCorp.12 In 2009, the FDM patent entered the public domain, which facilitated the invention of a slew of FDM 3D printers, lowered the price of 3D printers, and brought more visibility to the technology. Since then, the production of 3D printers has skyrocketed, and public awareness of 3D printing is higher than ever; in his 2013 State of the Union address, Barack Obama mentioned 3D printing as a major issue for the future of the country.13 In the last ten years, 3D technology has seen giant leaps within the medical and commercial industries. In 2013, Skylar Tibbits, head of MIT's Self-Assembly Lab, started research into 4D printing, which continues to develop today thanks to the teamwork of the Self-Assembly Lab, Stratasys, and Autodesk.14 Today, with the new evolution of 4D technology, the future promises even more incredible developments.
How it Works
A 4D printer is essentially a 3D printer that has been adapted to print "smart" materials.15 3D printers use a layering process to create shapes, whether by SLA or any of the other methods. Regardless of which method is used, the basic premise of 3D printing is to successively build layers on top of one another to create a shape. 4D printing uses this same process, but applies it to create models that can change themselves. During the process, the smart material bonds with the plastic used to print the object and can absorb water. Once the object is printed, the water in the smart material expands, causing the shape to change. This enables the printed object to take on several different dimensions; it could go from a 1D strand to a 3D shape, from a 2D surface to a 3D object, or morph from one 3D shape to another.16 Each 4D model is specially designed to react and form a new shape when the water expands. While water is used in current prototypes, future smart materials could potentially respond to a variety of activation triggers, such as temperature, vibration, pressure, or light. Once these new activation methods have been fully developed, the possibilities are endless.
The military has shown interest in 4D printing, awarding an $855,000 grant to the 4D research efforts of a team of researchers from the University of Illinois, the University of Pittsburgh Swanson School of Engineering, and Harvard University's School of Engineering and Applied Sciences.17 While the research is still in its infancy, there are many potential military applications for this technology. For example, there is a vision of a military vehicle that adapts to the environment in order to protect itself from damage and corrosion.18 Additionally, there is talk of uniforms that transform based on the environment to better camouflage soldiers or to protect against poisonous gases or shrapnel, as well as self-assembling weaponry.19 This investment in 4D technology reflects the U.S. military's desire to maintain a firm technological advantage on the battlefield, and 4D printing may reveal itself to be a great boon in the future.
The concept of 4D technology already has several commercial industries excited, and it isn't hard to understand why. One such industry is sportswear.20 Research is currently being conducted on a "smart shoe," a running shoe that would turn waterproof when you meet a puddle or otherwise adapt to changes in the environment. While many experiments have been conducted into other commercial applications for 4D printing, one can imagine how every industry could ultimately benefit. For example, boxes printed on a 4D printer would be able to unfold and refold themselves.21 Businesses could ship their inventory in these boxes, which would fold themselves and could then be shipped back to the warehouse, saving millions of dollars on the cost of shipping materials. Just imagine buying a piece of furniture and having it assemble itself once it is out of the box!22 4D printing could revolutionize so many industries that we cannot even fathom everything it will bring. When thinking along these lines, the possibilities are endless.
Perhaps the most astounding developments are the potential medical applications of 4D printing. Currently, the ARC Centre of Excellence for Electromaterials Science (ACES) at Wollongong University is researching 4D printing applications for medicine.23 As we know, 3D printing has already revolutionized the medical field by making prosthetics and implants as well as fabricating tissues and organs.24 4D printing has the potential to be even more radical. One area of research currently being explored is the idea of 4D-printed medical implants.25 These implants could change shape according to changes in the body. For example, a 4D-printed cardiac tube could change shape in response to a sudden change in blood pressure. Additionally, this technology could be used to make drug capsules that release medication in response to illness; for example, if a capsule were to respond to body temperature, the drugs would be released the moment a fever begins.26
In the future, 4D printing will have completely changed our world. Houses will be delivered to you in boxes and will assemble themselves. Bridges will never collapse because they will have the ability to repair any damage they experience.27 Your pipes will never freeze because they will be able to expand, contract, and adjust temperature according to the weather. Our clothes will adapt according to temperature and climate. We will live longer because our wearable medical technology will let us know the moment there are any health concerns on the horizon. We will be able to build structures on other planets because we will be able to send materials to deep space without the need for human beings or robots.28 Energy will be completely revolutionized as 4D printers will create solar panels that respond to temperature, expanding and contracting according to their settings.29 Once more research has been conducted, 4D printing promises to change life as we know it.
Interested in learning more about 3D and 4D printing? View the links below.
The Emergence of 4D Printing – TED Talk by Skylar Tibbits
Programmable Matter: 4D Printing’s Promises and Risk – Georgetown Journal
3D Printing Raises Ethical Issues in Medicine – ABC Science
How 4D Printing Is Now Saving Lives – Computerworld
A Review on Recent Progresses in 4D Printing – Virtual and Physical Prototyping
There is a lot of buzz about driverless cars right now as automotive and technology companies have begun to roll out these autonomous vehicles. If you have been following the developments in self-driving cars, you are likely familiar with all the benefits that are touted by its advocates: fewer accidents, less congestion and pollution, improved land use, and more mobility for the disabled.1 Many people are quite excited for this technology, while others remain skeptical and fearful of it. Regardless of which side of the spectrum you fall on, the technology is fascinating and presents many possible applications that are important to consider. It is likely that in the near future, autonomous vehicles will be a part of our daily lives, so it is best to gain an understanding of how this new technological development could impact you.
History and Development
The first attempt at a driverless car was in 1925 by a man named Francis P. Houdina.2 Houdina toured the country that summer, demonstrating how his invention worked. The automobile, called the American Wonder, was controlled via radio by a driver who followed the driverless car in a separate vehicle. While Houdina continued to tour the car for several years, it had significant safety issues; at one point, it crashed into a car filled with cameramen. Therefore, it was never mass produced or sold commercially.
After this, research focused more on developing automatic highways than automatic cars. In 1956, GM introduced the concept of the Firebird II, a car that would be steered by an electric highway.3 This idea was taken a step further in 1959 by the Radio Corporation of America. A test model for this "highway of the future" was built in Princeton, NJ, featuring an electrical cable that warned the car of any obstructions ahead and triggered it to either brake or switch lanes. The engineer behind this project, Vladimir Zworykin, believed that the widespread use of this technology would address the growing problem of automobile accidents in America, and he anticipated that these electric highways would become the norm within a decade or two. However, the infrastructure costs, both in installation and maintenance, stopped the project before it really began. In 1977, Japan's Tsukuba Mechanical Engineering Lab released what was considered the first truly autonomous car.4 The vehicle worked using analog computer technology and two cameras for signal processing. It was able to drive up to 18.6 mph with the help of an elevated rail.
In the '80s, movies such as "Christine" and "Knight Rider" brought the idea of an autonomous vehicle into the public eye.5 Simultaneously, more research was being conducted into making driverless cars a reality. One research team at Bundeswehr University in Munich turned a van into a self-driving vehicle that they called the VaMoRs. At the same time, researchers at the Carnegie Mellon Robotics Institute started a line of robotic cars with the transformation of a panel van into an autonomous vehicle. These projects required vans in order to hold all of the necessary equipment.
In 2004, the Defense Advanced Research Projects Agency (DARPA) unveiled the Grand Challenge series, a competition that gives robotics departments at universities and robotics companies the opportunity to develop an autonomous vehicle.6 While DARPA is clearly interested in this technology for military purposes, the technology coming out of this project is directly shaping the commercial driverless cars now being developed. Google has hired the best researchers to come out of the Grand Challenge series to develop the Google car.
How They Work
Current models of self-driving cars build on several features already used in driver-controlled cars. For example, vehicle developments in the '80s and '90s focused on automation to enhance safety.7 An example of this is anti-lock brakes; while previously the driver needed to pump the brakes to prevent them from locking, anti-lock brakes do the pumping for you. About a decade after the development of anti-lock brakes, automobile manufacturers took this technology a step further, creating sensors that added traction and stability control. These steps in the safety development of standard cars represent the move towards driverless cars. Additionally, the evolution of other technologies has contributed to the development of autonomous vehicles.
Light Detection and Ranging (LiDAR) is a technology that autonomous vehicles use to create a three-dimensional map and detect hazards, using laser beams to determine the distance between the car and other objects.8 In the Google Car, a Velodyne 64-beam laser is mounted on the top of the car on a custom-built, rotating base so it has an unobstructed, 360-degree view.
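The ranging itself comes down to a time-of-flight calculation: the sensor times a laser pulse's round trip and halves it. Here is a minimal sketch of that principle, with illustrative numbers rather than Velodyne specifications:

```python
# Minimal time-of-flight ranging, the principle behind LiDAR distance
# measurement. Values are illustrative, not real sensor output.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_echo(round_trip_seconds: float) -> float:
    # The pulse travels out and back, so halve the round trip.
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A return pulse arriving 200 nanoseconds after emission:
print(distance_from_echo(200e-9))  # -> ~30.0 meters
```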
While LiDAR works to accurately map surroundings, it is unable to monitor the speed of surrounding vehicles. Therefore, autonomous vehicles also use bumper-mounted radar units for this purpose.9 Driverless cars are outfitted with two sensors on the front bumper and two on the back bumper, which send signals to the on-board processors to move out of the way or apply the brakes. Radar works with other technologies in the car, such as gyroscopes, a wheel encoder, and inertial measurement units, to help the car make informed decisions and avoid crashes.
The camera setups used in different models of driverless cars vary, but cameras are commonly used.10 In one example, cameras are mounted to the exterior, allowing the car to get an overlapping view of its surroundings. Just as the human eye uses different overlapping images for peripheral vision, depth, and dimensionality, the driverless car uses multiple cameras to create a complete image. Each camera has a 50-degree view up to 30 meters. Cameras are just one piece of the puzzle that is a driverless car; if they were to malfunction, the car could still work.
Some prototypes of autonomous vehicles also include sonar technology.11 Sonar technology has some disadvantages when compared to LiDAR and radar technologies; namely, it has a narrow field of view and a short effective range. What sonar technology truly adds is a redundancy that allows the car to cross-reference multiple sources of data to make its decisions.
In order to get anywhere, autonomous vehicles require advanced positioning systems to stay on course and to route to their destinations.12 The Google Car uses Google Maps, GPS satellites, a wheel encoder, and inertial measurement units to maintain the correct driving speed and route. Using real-time data from the cameras combined with GPS, the system can determine the speed of the cars surrounding it, while correcting for things like construction or traffic.
The software required to power driverless cars is quite impressive. Using all of the data acquired from the sensors, cameras, and GPS, this software runs algorithms to make its important decisions.13 The car is hardwired to respond in certain ways, such as stopping at a stop sign, but many responses are learned from experience. Every time a self-driving car is used, it learns more solutions that can be applied in different situations, just like a human driver. This software also processes the data of other cars in order to learn.
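As a loose sketch of that split between hardwired rules and learned behavior (the names and thresholds below are hypothetical, not any manufacturer's actual software):

```python
# Hypothetical sketch: hardwired safety rules take priority, and a
# learned policy handles everything else. Not any vendor's real code.
def choose_action(percepts, learned_policy):
    # Hardwired responses, e.g. always stop for a stop sign.
    if percepts.get("stop_sign_ahead"):
        return "brake"
    if percepts.get("obstacle_distance_m", float("inf")) < 5.0:
        return "brake"
    # Otherwise defer to behavior learned from accumulated driving data.
    return learned_policy(percepts)

# A trivial stand-in for the learned component:
policy = lambda p: "steer_left" if p.get("lane_drift") == "right" else "cruise"
print(choose_action({"lane_drift": "right"}, policy))  # -> steer_left
```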
Like many forms of technology, autonomous vehicles as we know them today developed out of military research. In 2016, the U.S. Army began testing self-driving vehicles as part of a long-term plan to implement driverless technology on the battlefield.14 First, they tested semi-autonomous trucks in Michigan, driving as a convoy over the course of seven miles.15 A three-phase pilot program is run by the Army Tank Automotive Research, Development and Engineering Center (TARDEC) out of Fort Bragg, NC.16 The vehicles are essentially autonomous golf carts used to pick up injured soldiers at the barracks and take them to the medical center, about a half-mile away. One goal of this project is to cut military costs; appointments at the medical center are quite expensive, so if a soldier fails to show up, it is a significant expense. The hope is that by providing this transportation, they will cut down on missed appointments.
A more important military goal for this technology is to improve safety for soldiers.17 Driverless technology could enable the military to send supplies into dangerous areas without putting soldiers at risk. By removing the need for soldiers to drive convoy vehicles, they are kept off roads that potentially contain IEDs. Additionally, the implementation of driverless vehicles could speed up the deployment process, allowing freight trucks to move at maximum efficiency in large platoons without the concern of human error. The military also uses autonomous helicopters to carry cargo to risky areas where a human could not deploy safely and efficiently.18 Between 2011 and 2014, these autonomous helicopters carried more than 4.5 million pounds of cargo over thousands of missions in Afghanistan.
There are currently several automobile and technology companies developing self-driving cars. Last October, Tesla announced that it would begin equipping its cars with technology that would eventually allow them to become completely autonomous.19 GM has announced that within a year, it will be testing self-driving electric taxis with Lyft.20 Uber currently has 100 driverless cars in use in Pittsburgh, which are free to ride as they are only part of a test at the moment.21 Additionally, a Singapore-based company called nuTonomy is testing a self-driving taxi service. Other automobile companies currently looking into driverless cars include Ford, Volvo, Honda, and Fiat.22
Everyone has heard of the Google Car, and currently, it looks to be well on its way to commercialization; Google recently created a new company named Waymo, indicating that the project is past the research phase and now ready to bring the car to market.23 Meanwhile, it seems Apple's efforts to create its own self-driving vehicle are being scaled back in light of a lack of progress and the success of other companies.24 Clearly, the market for driverless cars is developing quickly, with some saying that they could be commonly used on the road by 2025. The imminence of this change is demonstrated by the United States Department of Transportation, which released the Federal Automated Vehicles Policy as a guideline for automakers.25
Given the speed with which this technology is now developing, we can expect to see more and more autonomous vehicles come to the commercial market within the next 10 years.26 Beyond cars, we will see many other forms of autonomous transportation. Most intriguingly, Rolls-Royce is currently developing autonomous vessels under a project called Advanced Autonomous Waterborne Applications.27 This has fascinating implications for the maritime industry. Robotic ships could make cargo shipping safer, more efficient, and less expensive.28 Not only would they reduce accidents, they would also reduce the threat of piracy, because a ship could not be overtaken by intruders. Additionally, with no need to accommodate a crew, these ships would have a larger capacity for cargo and would be lighter, with lower operating costs. This would also address the problem of fewer and fewer people having maritime skills as seafaring becomes a less attractive line of work; instead, people with mechanical and electrical skills would be needed more than people who can steer ships. Given how advantageous autonomous vessels could be, it is likely that by 2025, self-steering ships will be in commercial use.
The technology behind autonomous vehicles will certainly have a dramatic impact in the coming years. In as little as a decade, we can expect the market to be saturated with self-driving cars. As with all new technology, there will be advantages and disadvantages, but one thing is for certain: autonomous vehicles will be revolutionary.
Interested in learning more about the technology and future of autonomous vehicles? Read the links below for more information.
How Lidar Works – Lidar-uk.com
When Cars Drive Themselves – New York Times
Auto Correct – The New Yorker
The Ethics of Autonomous Cars – The Atlantic
Autonomous Vehicle Technology: A Guide for Policymakers – Rand Corporation
How Will Self-Driving Cars Change Cities? – Slate
Forget Autonomous Cars – Autonomous Ships Are Almost Here – IEEE Spectrum
Every life experience, from our birth to our death, can be reduced to electrical stimulation of our brains by sensory organs providing us with information about the world around us. "Reality" is our interpretation of these electrical signals, which means that our brains essentially construct our own reality. Whatever you feel, hear, see, taste, or smell is an interpretation of the world around you that exists solely in your own brain. In general, even if we understand this concept, we work under the assumption that our interpretations are pretty close to the external world. Actually, this is not true at all. In certain crucial ways, the brain "sees" things that do not actually reflect the information being presented to our senses. We each live in our own reality bubble, constructed of both how we perceive using our senses and how our brains interpret these perceptions. This is exemplified by the concept of color. Color in itself is not a property of the world around us; rather, it is a category created by our perceptions. To experience the world with meaning, the brain must filter the world through our lenses. This is what makes virtual reality so intriguing for the future of communication in a variety of fields.
Currently, our method of communicating our perceptions is words, and words have proven ineffective for relaying our intentions and interpretations. With virtual reality, there is the potential for us to literally show each other the way we see. Virtual reality allows us to reveal a world without our filter, which could endow mankind with a new method of communication that is a sort of telepathy, bridging the gap that exists due to our own unique interpretations of the world. With virtual reality, there is no ambiguity about what we mean like there is when we speak our intentions. This results in a truly complete understanding, as all parties hold the exact same information. Understandably, excitement about these possibilities extends across a variety of fields. In this blog, we will look into the history of virtual reality, how it works, and its various applications.
Though the concept of virtual reality has been around since the 1950s, most people were not aware of it until the 1990s.1 However, the beginnings of this revolutionary concept started well before it was formally conceived. If you think of virtual reality as the idea of creating the illusion of being somewhere other than where we actually are, it can be traced back to the panoramic paintings of the early 19th century.2 These murals were designed to fill the viewer's entire field of vision to make the paintings come to life, creating the illusion of really being there. Clearly, the desire to see things differently than our reality has been present for centuries.
In 1838, Charles Wheatstone conducted scientific research integral to the development of virtual reality. This research showed that each eye sees a slightly different two-dimensional image, and the brain combines the two to make one three-dimensional image. Using this science, he invented the stereoscope, which gave the illusion of immersion in an image. This later inspired the invention of the View-Master, which was designed for "virtual tourism."
In the 1930s, Stanley G. Weinbaum predicted virtual reality in his science fiction short story "Pygmalion's Spectacles."3 The story centers on a virtual reality system that uses goggles to broadcast a holographic recording of different experiences involving all of the senses. In 1956, the first step towards virtual reality came into existence with the invention of the Sensorama.4 The Sensorama was invented by cinematographer Morton Heilig, who produced short films for the machine that immersed the viewer in the experience using a 3D display, vibrating seats, and smell generators. In the 1960s, Heilig followed the Sensorama with the invention of the Telesphere Mask, the first head-mounted display, featuring stereoscopic 3D imagery and stereo sound.
In 1961, Philco Corporation engineers created the Headsight, a head-mounted display as we know it today.5 This technology used a separate video screen for each eye as well as a magnetic motion tracking system linked to a closed-circuit camera. It was designed for viewing dangerous situations from a distance for military purposes. As the user moved their head, the camera would move so they could look around the environment naturally. This was the first step towards the head-mounted displays we know today, though it was not integrated with a computer. That would come later, in 1968, when Ivan Sutherland, with his student Bob Sproull, created the first computer-connected virtual reality head-mounted display, called the Sword of Damocles.6 This heavy device hung from the ceiling, as no user could comfortably support its weight, and users had to be strapped into it. In 1969, computer artist Myron Krueger developed a series of responsive "artificial reality" experiences.7 Projects GLOWFLOW, METAPLAY, and PSYCHIC SPACE ultimately led to VIDEOPLACE technology, which allowed people to communicate through this responsive virtual reality.
In the 1980s, despite the fact that much technology had been developed in the field of virtual reality, there still wasn't a term for it. In 1987, the term "virtual reality" was coined by Jaron Lanier, founder of the Visual Programming Lab (VPL).8 Through VPL research, Lanier developed a series of virtual reality gadgets, including virtual reality goggles and gloves. These represented a giant leap forward for haptics technology, meaning touch interaction.9
In 1991, virtual reality became publicly available through a series of arcade games, though it was still not available in homes. In these games, a player would wear VR goggles, which provided immersive stereoscopic 3D images. Some units even allowed for multi-player gaming. In 1992, the sci-fi movie "The Lawnmower Man" introduced the concept of virtual reality to the general public, with Pierce Brosnan playing a scientist who uses virtual reality to turn a man with an intellectual disability into a genius.10 Interest in virtual reality was piqued, and in 1993, Sega announced that it would be releasing a VR headset for the Sega Genesis console, though the technology failed to develop and it was never actually released. In 1995, Nintendo also attempted to release a 3D gaming console, though it flopped because it was so difficult to use, and it was discontinued shortly after release. In 1999, the concept of virtual reality became mainstream with the film "The Matrix," in which some characters live entirely in virtually created worlds; though previous films had touched on the concept, it was "The Matrix" that had a major impact.
In the 21st century, virtual reality technology has seen rapid development. As computer technology has evolved, prices have gone down, making virtual reality more accessible. With the rise of smartphones have come the HD displays and graphics capabilities necessary for lightweight, usable virtual reality devices. Today, technologies such as camera sensors, motion controllers, and facial recognition are part of daily technological tasks. Companies like Samsung and Google have started offering virtual reality through their smartphones, and video game companies like PlayStation offer VR headsets for their games. The rising prevalence of virtual reality headsets has made this technology widely known. Given the strides VR technology has made in the last decade, the future of virtual reality offers fascinating possibilities.
How it Works
For the sake of simplicity, we will explain how virtual reality works through head-mounted displays, as these are the most widely known virtual reality technology. In most headsets, video is sent from a computer to the headset via an HDMI cable.11 Headsets use either two feeds to one display or one LCD display per eye. Additionally, lenses are placed between the pixels and the eye, which can sometimes be adjusted to the specific distance between the user's eyes. These lenses focus the picture for each eye and create a stereoscopic 3D image using the science Wheatstone described nearly two centuries ago.
VR head-mounted displays also immerse the user in the experience by increasing the field of view, meaning the width of the image.12 A 360-degree display is unnecessary and too expensive, so most headsets use around a 100- or 110-degree field of view. For the picture to be effective, the frame rate must be a minimum of 60 frames per second, though most advanced headsets go beyond this, upwards of 100 frames per second.
Another crucial aspect of VR technology is head tracking.13 Head tracking means that the picture in front of you moves as you move your head. The system used for head tracking is called 6DoF (six degrees of freedom); it plots your head on X, Y, and Z axes to measure all head movements. Depending on the specific headset, this may also involve a gyroscope, magnetometer, and accelerometer.
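One common way to fuse those sensors is a complementary filter, which trusts the gyroscope over short intervals and the accelerometer's gravity reading over long ones. The sketch below is a generic illustration with assumed readings, not any particular headset's tracking code:

```python
# Hypothetical sketch of head-tracking sensor fusion: a complementary
# filter blends fast-but-drifting gyroscope data with noisy-but-stable
# accelerometer data to estimate head pitch. Real headsets use more
# sophisticated filters; this only illustrates the idea.
import math

ALPHA = 0.98  # trust the gyro short-term, the accelerometer long-term

def update_pitch(pitch_deg, gyro_rate_dps, accel_x, accel_z, dt):
    # Integrate the gyroscope's angular rate (degrees per second).
    gyro_pitch = pitch_deg + gyro_rate_dps * dt
    # Derive an absolute pitch from gravity's direction in the accelerometer.
    accel_pitch = math.degrees(math.atan2(accel_x, accel_z))
    # Blend the two estimates.
    return ALPHA * gyro_pitch + (1 - ALPHA) * accel_pitch

# One 10 ms update: gyro reports 5 deg/s, accelerometer sees mostly gravity.
pitch = update_pitch(0.0, gyro_rate_dps=5.0, accel_x=0.02, accel_z=0.99, dt=0.01)
print(round(pitch, 3))
```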
Headphones are also used in VR headsets to increase immersion. In general, either binaural or 3D audio is used to give the user a sense of sound depth, meaning a sound can seem to come from the side, from behind, or from a distance.
Currently, motion tracking technology is still being perfected in these VR headsets. Some systems use motion sensors to track body movements; the Oculus Touch, for example, provides wireless controllers that allow you to use your hands to perform actions in a game.
Finally, eye tracking is the latest component to be added to certain VR headsets. In these, an infrared sensor monitors the user's eye movements so that the program knows where you are looking in your virtual reality. This allows in-game characters to react to where your eyes are, and it also makes the depth of field more realistic. Further development of this technology is also expected to reduce motion sickness, as it will make the experience feel more natural to your brain.
With a greater understanding of this revolutionary technology, you can see how it can be useful in an infinite number of ways to a variety of different realms.
Virtual reality has already provided a lot of value to the military, which was one of the earliest drivers of this technology, and more possibilities are on the horizon. Currently, virtual reality is being used to train soldiers for war.14 It is not hard to understand why the military leapt on this technology, as it allows a user to experience a dangerous environment without any actual danger. This makes military training not only safer but more cost-effective in the long run, as real or physically simulated situations are quite expensive and can damage costly equipment.15 Combat simulators are a common application of VR for the military, using headsets to give soldiers the illusion of being at war.16 This not only prepares them for the experience of war, it gives them a space in which they can practice using military technology, with the ability to try again if they make a mistake. It also allows them to practice with each other within a virtual world, enhancing the communication of a unit.17 These virtual reality headsets also allow soldiers to prepare to make important decisions in stressful situations.18 Given the demographics of army recruits in training (young adult men), this method of training is highly effective, as this group has grown up playing video games and finds this learning method appealing.19 Not only does virtual reality have applications for training soldiers, it may also be a helpful tool for helping them heal after combat; specifically, it may help treat PTSD.20 The idea is that virtual reality may allow soldiers to be exposed to potential triggers in a safe environment, letting them process their symptoms and learn to cope with new situations.
In the future, the military will likely take advantage of further developments in VR technology by enhancing the realism of its simulators. More humanitarian and peacekeeping training will likely be done through the use of VR, and facial recognition technology may be incorporated to assess a person's emotional state, which could further enhance communication both between soldiers and with people in foreign countries. Regardless of how this new technology is applied, it is certain that the military will be at the cutting edge of the latest VR technology.
Presently, the entertainment industry is next in line after the military to benefit from further development of virtual reality technology. Most obviously, the world of gaming has seen impressive (and not so impressive) advancements with VR headsets. Just a couple of years ago, virtual technology in video games seemed unlikely to actually come to fruition. Today, the three most prominent VR game systems are the Oculus Rift, PlayStation VR, and the HTC Vive.21 Each features games that allow the user to immerse themselves in an environment, whether it is a boxing ring, a truck, or Gotham. The future of VR in gaming will likely center on the development of better eye tracking and motion detection within virtual reality. With these developments, video games will be more immersive than ever.
Today, mobile phone companies are competing to create the most compelling VR device. Google recently released the Daydream View, a VR headset that is designed to be more comfortable and technologically advanced than its predecessor, Google Cardboard.22 Samsung has also recently released a comparable device called the Gear VR.23 Both of these devices allow the user to virtually visit anywhere in the world, use a series of apps, and also, as can be expected, play immersive games. As virtual reality technology becomes more prevalent, affordable, and usable, it is certain that more of these devices will saturate the market.
Finally, virtual reality has shown promise in the field of psychology. As mentioned above, VR has shown potential in the treatment of PTSD. Beyond that, there is evidence to suggest that virtual reality could be applied to the clinical treatment of other anxiety disorders, such as phobias.24 Additionally, research is currently being conducted into how virtual reality could help people with schizophrenia deal with their delusions and paranoia, allowing them to face their fears.25 Finally, virtual reality has the power to change how psychological research is performed entirely. With the use of VR, psychological researchers could gain far deeper insight into the minds of certain people, giving them a greater understanding of how to treat certain conditions.26
The future of virtual reality is beyond anyone’s wildest imagination at the moment, but suffice it to say, it is safe to assume that the technology will only get more realistic from here. The potential applications for this technology are enormous in the military, the private sector, and the world of psychology, but other areas are set to benefit as well in ways we cannot anticipate. With time, virtual reality may be commonly available in everyone’s living room. Regardless of its specific future applications, virtual reality is set to change the world.
If you want to learn more about the fascinating technology behind VR or its applications, see the links below for further reading.
The Future of Virtual Reality – TheNanoAge.com
Virtual Reality in the Military: Present and Future – René ter Haar
Everything You Need to Know Before Buying a VR Headset – Wired
A Virtual Out-of-Body Experience Could Reduce Your Fear of Death – Seeker
The Use of Virtual Reality in Psychology – Computational and Mathematical Methods in Medicine