1. Sci Fi No More: The Reality of Quantum Computing

    Despite the rapid evolution of technology over the last few decades, you may be surprised to learn that the computers we use today are not so different from the first computer built back in 1941. Your computer, while much smaller and faster than its 30-ton predecessors, performs fundamentally the same task: processing and interpreting binary code to produce a computational result.1 Binary code is composed of “bits,” the smallest units of computer data, each represented as either a zero or a one. Whatever task your computer performs, it does so by running a series of zeros and ones through an algorithm, which produces a new series of zeros and ones. While traditional computing works for now, we are fast approaching the point at which the transistors inside computer chips will be as small as atoms.2 If computers are to keep getting smaller and more powerful as they have been, new methods of computing will need to be developed. That’s where quantum computing comes in.
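
    As a tiny illustration of the bit-level picture described above, the short Python snippet below prints the binary form of a number and of a character; it is purely illustrative and not tied to any particular machine.

        # Toy illustration: everything a classical computer stores is ultimately bits.
        number = 42
        letter = "Q"
        # Integers are stored as binary numbers.
        print(f"{number} in binary: {number:08b}")                        # 00101010
        # Text is stored as numbers too (here, a Unicode code point), hence also as bits.
        print(f"'{letter}' is code point {ord(letter)}, or {ord(letter):08b} in binary")
        # A bitwise operation: flipping the lowest bit of 42 gives 43.
        print(f"{number} with its lowest bit flipped: {number ^ 1}")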

    What is Quantum Computing?

    It may sound like something out of science fiction, but quantum computing is a reality today. In essence, quantum computing is the application of quantum mechanics to information processing.3 Where a traditional computer uses bits, a quantum computer uses quantum bits, or qubits. Just like bits, qubits encode zeros and ones; however, using the power of quantum physics, qubits can be zero, one, or both at the same time.4 This might be difficult to conceptualize in a world dictated by classical physics, but the world of quantum physics, which deals specifically with things on the atomic scale, opens up more possibilities. Qubits are able to behave this way due to the phenomena of superposition and entanglement.

    Superposition refers to a quantum system’s ability to be in multiple states at the same time, meaning that it can be both “up” and “down,” “here” and “there,” simultaneously.5 Entanglement refers to a correlation between two quantum particles so strong that even when they are far apart, they remain perfectly in unison. Thanks to superposition and entanglement, quantum computers are able to process an enormous number of calculations at the same time. A traditional computer is limited to working with definite ones and zeros; because a quantum computer can also work with superpositions of ones and zeros, calculations that were once considered impossible can now be completed efficiently, all while expending much less energy. This is why quantum computing is so important for the future of technology. As computer processors get smaller and smaller while the amount of information they must process gets bigger and bigger, they will eventually hit a wall. Because quantum computers can perform certain calculations at a much faster rate, they show promise in addressing this problem.
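
    To make superposition slightly more concrete, here is a minimal Python sketch that models a single qubit as a pair of amplitudes and simulates measurement. It is a toy illustration only, not how a real quantum computer (or a serious simulator) works, and the amplitudes chosen are just an example.

        import math
        import random
        # A qubit state is a pair of amplitudes (a, b) for |0> and |1>,
        # normalized so that |a|^2 + |b|^2 = 1.
        a, b = 1 / math.sqrt(2), 1 / math.sqrt(2)   # an equal superposition of 0 and 1
        p_zero = abs(a) ** 2        # probability of reading 0 when measured
        p_one = abs(b) ** 2         # probability of reading 1 when measured
        print(f"P(0) = {p_zero:.2f}, P(1) = {p_one:.2f}")
        # Measurement collapses the superposition to a definite classical bit.
        def measure():
            return 0 if random.random() < p_zero else 1
        print("Ten measurements:", [measure() for _ in range(10)])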

    History of Quantum Computing

    The idea of quantum computing was first introduced in 1982 by Richard Feynman, a physicist and Nobel Prize winner.6 At the time, physicists and computer scientists were exploring the concept of a computer based on quantum mechanics, but it was Feynman who first presented an abstract model demonstrating how quantum theory could be applied to a computational system. This meant that a physicist could carry out quantum physics experiments through a computer. In 1985, physicist David Deutsch built upon Feynman’s model and published a paper introducing the concept of a quantum computer that could be used outside the world of physics, namely as a replacement for traditional computers.7 After publication, there was a lot of buzz around the concept, and many possible applications for a quantum computer were brainstormed. However, none of them really took off until 1994, when Peter Shor devised a method for solving an infamous problem in number theory, the factorization of large numbers, on a quantum computer. His algorithm showed how a quantum computer could factor large numbers at a much quicker pace than a traditional computer, and it sparked interest in quantum computing well beyond the scientific community.

    In the more than two decades since the publication of that paper, significant advancements have been made in the field of quantum computing. Most significantly, the first functional quantum computer was built in 2007 by D-Wave Systems.8 Their initial model had 28 qubits. Since then, they have roughly doubled the number of qubits in their models every year, and in January of this year they released the first commercially available quantum computer, with 2,000 qubits.9

    Next Steps and Obstacles

    While quantum computing shows promise to revolutionize computers as we know them, there are several obstacles in the way of widespread commercialization. First of all, in order to program a quantum computer, one must have extensive knowledge of quantum physics. D-Wave is working on this issue with new software called Qbsolv, which is intended to allow developers to program quantum computers without knowledge of quantum physics.10 The catch is that there are so few quantum computers in the world that there is little opportunity to develop the skills needed to program them. While there are simulators you can download onto your computer to test out the D-Wave software, it is not quite the same as running it on a real quantum computer.

    Additionally, quantum computing may be more susceptible to errors than traditional computing. Qubits can be affected by a variety of factors, such as heat, noise, and electromagnetic interference.11 While IBM is researching a promising approach to detecting errors, this is only one problem facing quantum computing. Another is the issue of coherence.12 Coherence is a measure of a qubit’s quality: how long it maintains its quantum properties. Qubits must maintain these properties long enough for the quantum computer to complete its computation, so a next step for many researchers will be to extend the coherence of qubits.

    Military Applications

    The worlds of defense and intelligence have long been interested in quantum computing. One of the most important things quantum computing has to offer the military is the speed and the types of calculations quantum computers can perform.13 Given their ability to process data at a much quicker pace than traditional computers, processes that require sifting through large amounts of data could be streamlined with quantum computing. This would make the military far more efficient, optimizing defense logistics such as choosing the best routes for moving people and equipment. Additionally, as software becomes more and more integral to operating many weapons, any technology that makes this process more efficient and effective is appealing.14

    This technology also has major implications for decrypting communications.15 Currently, encryption depends on the practical impossibility of breaking long encryption keys. With quantum computing, a task that was previously infeasible could be completed within minutes. As a result, there is a race between nations to develop the research necessary to gain a leg up on spies and hackers through quantum computing.16 Whoever establishes this technology first will have a large strategic advantage, able to develop encryption methods that render their communications effectively “unhackable.” Currently, the U.S. Army, Navy, and Air Force are working together to establish a quantum communication network.

    Commercial Applications

    As stated above, the first commercially available quantum computer was released earlier this year by D-Wave Systems. Other companies working on quantum computers include 1QBit, Optalysys, Quantum Biosystems, and MagiQ.17 Though only D-Wave has a commercially available quantum computer at the moment, it is safe to wager that, with time, quantum computers will saturate the market. Beyond the computers themselves, other commercial applications for quantum computing are on the horizon. Essentially any optimization problem, that is, any problem where you are trying to find the best possible option within a set of constraints, can be aided by quantum computing.18 For consumers, this means that features too complicated for a traditional computer to handle could become available within the next few decades. Most intriguing to many consumers will probably be the way this science changes their smartphones. On a smaller scale, quantum computing is set to optimize the apps on our phones that tell us the weather and the best route to take to work, as devices become better able to analyze data.19 The encrypted communication methods the military develops will likely yield the most secure ways to communicate and perform transactions over the internet, transforming the financial services industry. Quantum computing is also set to take machine learning to a whole new level, meaning that soon our computers could identify what is in an image, learn more about our habits as individuals, and even develop something like intuition.20 On top of all this, quantum computing is set to transform research as we know it by enabling scientists to process data at lightning speed.21 If this proves true, the possibilities for new medical advancements and technological developments are endless.
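
    To give a flavor of the optimization problems mentioned above, here is a tiny brute-force solver for a QUBO (quadratic unconstrained binary optimization) problem, the general form that annealing-style machines such as D-Wave’s are built around. The matrix values are invented for illustration, and a real annealer searches the solution space very differently from this exhaustive loop.

        from itertools import product
        # A QUBO instance: minimize x^T Q x over binary vectors x.
        # The Q matrix below is an arbitrary illustrative example.
        Q = [[-1,  2,  0],
             [ 0, -1,  2],
             [ 0,  0, -1]]
        def energy(x):
            n = len(x)
            return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        # Brute force over all 2^n assignments -- fine for 3 variables,
        # hopeless for the thousands of variables an annealer targets.
        best = min(product((0, 1), repeat=len(Q)), key=energy)
        print("Best assignment:", best, "with energy", energy(best))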

    For now, quantum computing remains accessible to only a small number of people on earth. Yet if the true potential of this science is harnessed, quantum computing could one day be behind every major technological advancement. And given how quickly D-Wave managed to turn this theory into reality, that future could arrive sooner than we think.

    Further Information

    Want to learn more about quantum computing? Check out the links below.

    Quantum Computing – Stanford Encyclopedia of Philosophy
    How Does a Quantum Computer Work? – Veritasium
    How to Fight a War With a Quantum Computer – The National Interest
    How Quantum Computing Will Change the World – Forbes
    How Quantum Computing Could Help Mankind – Bloomberg TV


    1. https://www.cs.rice.edu/~taha/teaching/05F/210/news/2005_09_16.htm
    2. http://www.explainthatstuff.com/quantum-computing.html
    3. https://uwaterloo.ca/institute-for-quantum-computing/quantum-computing-101
    4. https://plus.maths.org/content/how-does-quantum-commuting-work
    5. https://plato.stanford.edu/entries/qt-quantcomp/#BriHisFie
    6. https://www.doc.ic.ac.uk/~nd/surprise_97/journal/vol4/spb3/
    7. http://ffden-2.phys.uaf.edu/211.web.stuff/Almeida/history.html
    8. https://en.wikipedia.org/wiki/Timeline_of_quantum_computing
    9. https://www.dwavesys.com/press-releases/d-wave%C2%A0announces%C2%A0d-wave-2000q-quantum-computer-and-first-system-order
    10. https://www.wired.com/2017/01/d-wave-turns-open-source-democratize-quantum-computing/
    11. http://www.nature.com/articles/ncomms7979
    12. https://www.fastcompany.com/3045708/big-tiny-problems-for-quantum-computing
    13. http://www.bbc.com/future/story/20130516-big-bets-on-quantum-computers
    14. http://nationalinterest.org/blog/the-buzz/how-fight-war-quantum-computer-14708
    15. https://hacked.com/us-intelligence-launches-practical-quantum-computing-research-program/
    16. https://defensesystems.com/articles/2016/12/09/quantum.aspx
    17. http://www.nanalyze.com/2016/09/10-quantum-computing-companies/
    18. https://www.dwavesys.com/quantum-computing/applications
    19. http://www.businessinsider.com/quantum-computers-will-change-the-world-2015-4
    20. http://www.explainthatstuff.com/quantum-computing.html
    21. http://www.nature.com/news/quantum-computers-ready-to-leap-out-of-the-lab-in-2017-1.21239
  2. The Past, Present, and Jaw-Dropping Potential Future of Virtual Reality

    Every life experience, from our birth to our death, can be reduced to electrical stimulation of our brains by sensory organs providing us with information about the world around us. “Reality” is our interpretation of these electrical signals, which means that our brains essentially construct our own reality. Whatever you feel, hear, see, taste, or smell is an interpretation of the world around you that exists solely in your own brain. In general, even if we understand this concept, we work under the assumption that our interpretations are pretty close to the external world. In fact, this is not true at all. In certain crucial ways, the brain “sees” things that do not actually reflect the information being presented to our senses. We each live in our own reality bubble, constructed both of how we perceive using our senses and of how our brains interpret these perceptions. This is exemplified by the concept of color. Color in itself is not a property of the world around us; rather, it is a category created by our perceptions. To experience the world with meaning, the brain must filter the world through our own lenses. This is what makes virtual reality so intriguing for the future of communication in a variety of fields.

    Today, our primary method of communicating our perceptions is words, and words have proven imperfect for relaying our intentions and interpretations. With virtual reality, there is the potential for us to literally show each other the way we see. Virtual reality allows us to reveal a world without our filter, which could endow mankind with a new method of communication, a sort of telepathy, bridging the gap that exists due to our own unique interpretations of the world. With virtual reality, there is far less ambiguity about what we mean than there is when we speak our intentions, allowing a much closer shared understanding, as all parties hold the same information. Understandably, excitement about these possibilities spans a variety of fields. In this blog, we will look into the history of virtual reality, how it works, and its various applications.

    History

    Though the concept of virtual reality has been around since the 1950s, most people were not aware of it until the 1990s.1 However, the impulse behind this revolutionary technology started well before the technology itself. If you think of virtual reality as beginning with the idea of creating the illusion of being somewhere other than where we actually are, it can be traced back to the panoramic paintings of the early 19th century.2 These murals were designed to fill the viewer’s entire field of vision to make the paintings come to life, creating the illusion of really being there. Clearly, the desire to see something other than our immediate reality has been present for centuries.

    In 1838, Charles Wheatstone conducted scientific research that proved integral to the development of virtual reality. His work showed that the brain combines the slightly different two-dimensional image seen by each eye into a single three-dimensional image. Building on this insight, he invented the stereoscope, which created the illusion of immersion in an image by presenting a separate picture to each eye. This later inspired the invention of the View-Master, which was designed for “virtual tourism.”
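
    The geometry behind Wheatstone’s insight is easy to sketch: each eye views a nearby object from a slightly different position, and the brain reads the resulting offset, or disparity, as depth. The small Python example below computes that angular disparity for a point at a given distance; the eye-separation and distance values are just illustrative numbers.

        import math
        # Binocular disparity: the angular offset between what the two eyes see.
        # All numbers below are illustrative only.
        eye_separation_m = 0.064      # a typical interpupillary distance, about 64 mm
        near_distance_m = 2.0         # a point two meters in front of the viewer
        far_distance_m = 20.0         # the same point much farther away
        def disparity_degrees(distance_m):
            # Each eye sits half the separation from the midline, so the angle
            # between the two lines of sight is twice the half-angle below.
            return math.degrees(2 * math.atan((eye_separation_m / 2) / distance_m))
        print(f"Disparity at 2 m:  {disparity_degrees(near_distance_m):.2f} degrees")
        print(f"Disparity at 20 m: {disparity_degrees(far_distance_m):.2f} degrees")
        # Farther objects produce smaller disparity -- exactly the depth cue
        # a stereoscope (or a VR headset) recreates with two offset images.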

    In the 1930s, Stanley G. Weinbaum predicted virtual reality in his science fiction short story “Pygmalion’s Spectacles.”3 The story centers on a virtual reality system that uses goggles to play back holographic recordings of experiences involving all of the senses. In 1956, a major step toward virtual reality came into existence with the invention of the Sensorama.4 The Sensorama was invented by cinematographer Morton Heilig, who produced short films for the machine that immersed the viewer in the experience using a 3D display, vibrating seats, and smell generators. In the 1960s, Heilig followed the Sensorama with the Telesphere Mask, the first head-mounted display, featuring stereoscopic 3D imagery and stereo sound.

    In 1961, Philco Corporation engineers created the Headsight, a head-mounted display in the form we know today.5 It used a separate video screen for each eye along with a magnetic motion-tracking system linked to a closed-circuit camera. It was designed for military use, to view dangerous situations from a distance. As the user moved their head, the camera moved with it so they could look around the environment naturally. This was the first step toward the head-mounted displays we know today, though it was not yet integrated with a computer. That came in 1968, when Ivan Sutherland and his student Bob Sproull created the first computer-connected virtual reality head-mounted display, known as the Sword of Damocles.6 The device was so heavy that it hung from the ceiling, as no user could comfortably support its weight, and the user had to be strapped into it. In 1969, computer artist Myron Krueger developed a series of responsive “artificial reality” experiences.7 His projects GLOWFLOW, METAPLAY, and PSYCHIC SPACE ultimately led to VIDEOPLACE technology, which allowed people to communicate through this responsive virtual reality.

    In the 1980s, despite the fact that much technology had been developed in the field of virtual reality, there wasn’t actually a term for it. In 1987, the term “virtual reality” was coined by Jaron Lanier, who founded the Visual Programming Lab (VPL).8 Through VPL research, Lanier developed a series of virtual reality gadgets, including virtual reality goggles and gloves. These represented a giant leap forward for haptics technology, meaning touch interaction.9

    In 1991, virtual reality became publicly available through a series of arcade games, though it was still not available in homes. In these games, a player would wear VR goggles that provided immersive stereoscopic 3D images, and some units even allowed for multi-player gaming. In 1992, the sci-fi movie “The Lawnmower Man” introduced the concept of virtual reality to the general public, with Pierce Brosnan playing a scientist who uses virtual reality to turn a man with an intellectual disability into a genius.10 Interest in virtual reality was piqued, and in 1993 Sega announced a VR headset for the Sega Genesis console, though the technology failed to develop and it was never actually released. In 1995, Nintendo released a 3D gaming console of its own, but it flopped because it was so difficult to use and was discontinued shortly after release. In 1999, the concept of virtual reality went mainstream with the film “The Matrix,” in which some characters live entirely in virtually created worlds; though previous films had touched on the concept, it was “The Matrix” that had a major impact.

    In the 21st century, virtual reality technology has seen rapid development. As computer technology has evolved, prices have come down, making virtual reality more accessible. With the rise of smartphones have come the HD displays and graphics capabilities necessary for lightweight, usable virtual reality devices. Technology such as camera sensors, motion controllers, and facial recognition is now part of daily technological life. Companies like Samsung and Google have started offering virtual reality through their smartphones, and video game companies like PlayStation offer VR headsets for their consoles. The rising prevalence of virtual reality headsets has made this technology widely known. Given the strides VR technology has made in the last decade, the future of virtual reality offers fascinating possibilities.

    How it Works

    For the sake of simplicity, we will explain how virtual reality works through head-mounted displays, as these are the most widely known virtual reality technology. In most headsets, video is sent from a computer to the headset over an HDMI cable.11 Headsets use either two feeds sent to one display or one LCD display per eye. Lenses are placed between the screen and the eyes, and can sometimes be adjusted to match the distance between the user’s eyes. These lenses focus the picture for each eye and create a stereoscopic 3D image using the principle Wheatstone demonstrated nearly two centuries ago.

    VR head-mounted displays also immerse the user by widening the field of view, meaning the width of the visible image.12 A full 360-degree display is unnecessary and too expensive, so most headsets use a field of view of around 100 to 110 degrees. For the picture to be effective, the frame rate must be a minimum of 60 frames per second, though most advanced headsets go beyond this, upwards of 100 frames per second.
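
    One practical consequence of those frame-rate targets is how little time the rendering software has per frame. The quick calculation below is illustrative only and simply converts a refresh rate into a per-frame time budget.

        # Per-frame rendering budget at a few refresh rates (illustrative).
        for fps in (60, 90, 120):
            budget_ms = 1000.0 / fps
            print(f"{fps} frames per second leaves about {budget_ms:.1f} ms per frame")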

    Another crucial aspect of VR technology is head tracking.13 Head tracking means that the picture in front of you moves as you move your head. The system used for head tracking is called 6DoF (six degrees of freedom); it plots your head’s position and orientation along the X, Y, and Z axes to measure all head movements. Depending on the specific headset, components such as a gyroscope, magnetometer, and accelerometer may also be used.
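
    To give a feel for the rotational part of head tracking, the sketch below applies a simple yaw rotation (turning the head left or right) to a forward-looking view vector. This is a bare-bones illustration only: a full 6DoF system also tracks pitch, roll, and movement along all three axes, and real headsets fuse gyroscope, accelerometer, and magnetometer readings rather than being handed a clean angle.

        import math
        def yaw_view(forward, yaw_degrees):
            """Rotate a 3D view vector (x, y, z) around the vertical axis."""
            x, y, z = forward
            a = math.radians(yaw_degrees)
            return (x * math.cos(a) + z * math.sin(a),
                    y,
                    -x * math.sin(a) + z * math.cos(a))
        # Looking straight ahead down the -Z axis, then turning the head 90 degrees.
        forward = (0.0, 0.0, -1.0)
        turned = yaw_view(forward, 90.0)
        print(tuple(round(c, 3) for c in turned))   # roughly (-1.0, 0.0, 0.0)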

    Headphones are also used in VR headsets to increase immersion. In general, either binaural or 3D audio is used to give the user a sense of depth, so that sound can seem to come from the side, from behind, or from a distance.

    Motion tracking in VR headsets is still being perfected. Some systems use motion sensors to track body movements, such as the Oculus Touch, which provides wireless controllers that let you use your hands to perform actions in a game.

    Finally, eye tracking is the latest component to be added to certain VR headsets. An infrared sensor monitors the user’s eye movements so that the program knows where you are looking within the virtual scene. This allows in-game characters to react to where your eyes are and makes the depth of field more realistic. Further development of this technology is also expected to reduce motion sickness, as the experience will feel more natural to your brain.

    With a greater understanding of this revolutionary technology, you can see how it can be useful in an infinite number of ways to a variety of different realms.

    Military/Defense Applications

    The military was one of the earliest drivers of virtual reality, and the technology has already provided it a great deal of value, with more possibilities on the horizon. Currently, virtual reality is being used to train soldiers for war.14 It is not hard to understand why the military leapt on this technology, as it allows a user to experience a dangerous environment without any actual danger. This makes military training not only safer but more cost-effective in the long run, as real or physically simulated exercises are expensive and can damage costly equipment.15 Combat simulators are a common military application of VR, using headsets to give soldiers the illusion of being at war.16 This not only prepares them for the experience of war, it gives them a space in which to practice using military technology, with the ability to try again if they make a mistake. It also allows them to train with each other within a virtual world, enhancing the communication of a unit.17 These virtual reality headsets also let soldiers practice making important decisions in stressful situations.18 Given the demographics of army recruits in training (young adult men), this method of training is highly effective, as this group has grown up playing video games and finds the learning method appealing.19 Not only does virtual reality have applications for training soldiers, it may also be a tool for helping them heal after combat; specifically, it may help treat PTSD.20 The idea is that virtual reality may allow soldiers to be exposed to potential triggers in a safe environment, helping them process their symptoms and cope with new situations.

    In the future, the military will likely take advantage of further developments in VR technology by enhancing the realism of its simulators. More humanitarian and peacekeeping training is likely to be done through VR, and facial recognition technology may be incorporated to assess a person’s emotional state, which could further enhance communication both among soldiers and when interacting with people in foreign countries. Regardless of how the new technology is applied, it is certain that the military will remain at the cutting edge of VR.

    Commercial Applications

    Presently, the entertainment industry is next in line after the military to benefit most from further development of virtual reality technology. Most obviously, the world of gaming has seen impressive (and not so impressive) advancements with VR headsets. Just a couple of years ago, virtual reality gaming seemed unlikely to come to fruition. Today, the three most prominent VR game systems are the Oculus Rift, PlayStation VR, and the HTC Vive.21 Each features games that allow the user to immerse themselves in an environment, whether it is a boxing ring, a truck, or Gotham. The future of VR in gaming will likely center on better eye tracking and motion detection within virtual reality. With these developments, video games will be more immersive than ever.

    Today, mobile phone companies are competing to create the most compelling VR device. Google recently released the Daydream View, a VR headset that is designed to be more comfortable and technologically advanced than its predecessor, Google Cardboard.22 Samsung has also recently released a comparable device called the Gear VR.23 Both of these devices allow the user to virtually visit anywhere in the world, use a series of apps, and also, as can be expected, play immersive games. As virtual reality technology becomes more prevalent, affordable, and usable, it is certain that more of these devices will saturate the market.

    Psychological Applications

    Finally, virtual reality has shown promise in the field of psychology. As mentioned above, VR has shown potential for the treatment of PTSD. Beyond that, there is evidence to suggest that virtual reality could be applied to the clinical treatment of other anxiety disorders, such as phobias.24 Research is also being conducted into how virtual reality could help people with schizophrenia deal with their delusions and paranoia by allowing them to face their fears in a controlled setting.25 Finally, virtual reality has the power to change how psychological research is performed. With VR, researchers could gain far deeper insight into how people perceive and react, giving them a better understanding of how to treat certain conditions.26

    The future of virtual reality is beyond anyone’s wildest imagination at the moment, but suffice it to say, it is safe to assume that the technology will only get more realistic from here. The potential applications for this technology are enormous in the military, the private sector, and the world of psychology, but other areas are set to benefit as well in ways we cannot anticipate. With time, virtual reality may be commonly available in everyone’s living room. Regardless of its specific future applications, virtual reality is set to change the world.

    Further Reading

    If you want to learn more about the fascinating technology behind VR or its applications, see the links below for further reading.

    The Future of Virtual Reality – TheNanoAge.com
    Virtual Reality in the Military: Present and Future – René ter Haar
    Everything You Need to Know Before Buying a VR Headset – Wired
    A Virtual Out-of-Body Experience Could Reduce Your Fear of Death – Seeker
    The Use of Virtual Reality in Psychology – Computational and Mathematical Methods in Science


    1. http://electronics.howstuffworks.com/gadgets/other-gadgets/virtual-reality8.htm
    2. https://www.vrs.org.uk/virtual-reality/history.html
    3. http://www.gutenberg.org/ebooks/22893
    4. https://www.wareable.com/wearable-tech/origins-of-virtual-reality-2535
    5. http://www.redorbit.com/reference/the-history-of-virtual-reality/
    6. https://www.freeflyvr.com/time-travel-through-virtual-reality/
    7. http://thedigitalage.pbworks.com/w/page/22039083/Myron%20Krueger
    8. http://www.jaronlanier.com/general.html
    9. https://www.vrs.org.uk/virtual-reality-gear/haptic/
    10. http://www.imdb.com/title/tt0104692/
    11. https://www.wareable.com/vr/how-does-vr-work-explained
    12. http://electronics.howstuffworks.com/gadgets/other-gadgets/virtual-reality.htm
    13. http://ftp.hitl.washington.edu/projects/knowledge_base/virtual-worlds/EVE/II.G.Military.html
    14. http://science.howstuffworks.com/virtual-military.htm
    15. https://www.geospatialworld.net/article/virtual-reality-trains-soldiers-for-the-real-war/
    16. http://fortune.com/2015/12/16/army-training-with-vr/
    17. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.76.3048&rep=rep1&type=pdf
    18. https://www.wareable.com/vr/how-vr-is-training-the-perfect-soldier-1757
    19. http://abcnews.go.com/Technology/treating-ptsd-virtual-reality-therapy-heal-trauma/story?id=38742665
    20. http://www.techradar.com/news/gaming/15-best-vr-games-best-virtual-reality-games-for-pc-and-mobile-1300576
    21. http://www.trustedreviews.com/google-daydream-view-review
    22. http://www.trustedreviews.com/news/samsung-takes-the-fight-to-daydream-vr-with-new-gear-vr-controller
    23. http://ieeexplore.ieee.org/document/1106906/?reload=true
    24. https://www.psychologytoday.com/blog/know-your-mind/201605/how-virtual-reality-could-transform-mental-health-treatment
    25. https://www.hindawi.com/journals/cmmm/2015/151702/
  3. What We Can Learn from Market Bubble Bursts

    Just like the bubbles children blow on playgrounds, economic bubbles all eventually burst. However, people often have short memories when it comes to these bubbles, because it is difficult to see a bubble for what it is in real time. When we look back on price charts later, it is easy to identify bubbles, but when a bubble starts to inflate, we only see one piece of the puzzle. Markets are a direct reflection of people’s opinions. When people feel the market is healthy, it functions well because people put more money into it. As people’s good opinions of the market grow, the bubble grows with them, and this is where the trouble comes in: opinions become skewed. Investors make more impulsive decisions, while the more level-headed investors are drowned out by their overly enthusiastic counterparts.

    People Become Irrational

    A common definition of a bubble is a period when prices rise above any logical valuation of the asset. Naturally, this is subjective; while one person might think an asset is fairly priced, another might think it completely overpriced. A bubble involves more than just “overpriced” valuations; it also involves people abandoning the logic they held before the bubble. In a stock market bubble, this may present itself in the form of new valuation metrics justified with claims that an industry is “different” from others. These arguments are sometimes valid, but often they are not, which is a good reminder to be diligent before investing. People will claim that a new asset is so radically different from anything before it that the old rules do not apply. However, this thought process does not incorporate the full picture.

    While a new industry may bring some change, certain things never change, like human nature. Many aspects of human nature come together to make bubbles possible. For example, human beings are not great at judging scale, so while we may be able to judge that a narrative justifies a rising price, we are not great at judging just how high that price should go. Bubbles are also fed by confirmation bias: every piece of news that supports the new valuation metrics reinforces the group mentality that this time is radically different, while negative stories are brushed aside because they do not confirm what people wish to be true. A telling sign of an impending bubble is when you see story after story emerge faster than they can be challenged; this is a clear indication that the market is being driven by emotion rather than logic.

    Participation Falls

    As prices rise, the risk of betting against them gets higher and higher, and doubters retreat to the sidelines. This leaves the people who are bidding up prices as the only participants in the market. Eventually, this exponential rise must reverse as people begin to sell and the price falls. Then the doubters come back, and the cycle continues. Bubbles occur when the market skews toward investors who are too positive about an asset. This is not just a theory; it can be seen in three bubbles that have formed around three assets in the 21st century: gold, oil, and internet stocks.

    Gold in 2011

    In 2000, gold traded at about $300 an ounce. By the end of 2010, the price had soared to $1,300, and it kept climbing throughout 2011, reaching a peak of $1,921 an ounce on September 6th. People continued to buy and buy as the price rose out of an emotional response; they needed to invest in gold because it was clearly becoming more and more valuable, right? Soon afterward, however, prices began to fall. By the end of the year, gold was back down to $1,600, and it continued to bounce around until April 2013, when it crashed to $1,200. The bubble had burst, and gold investors took a hit.

    This is not to dissuade investors from gold. Gold can be volatile, but with the right approach, it is an essential investment. Gold is a long-term investment and an important asset to have in the event of a financial crisis. Gold is tricky to value as its price moves up and down based on the current political and economic climate. However, when the market comes crashing down around us all, based on history, we can expect gold to skyrocket. Gold is the best fallback for a financial crisis, but you cannot expect to instantly make money on it.

    Oil in 2008

    By mid-2008, it was clear that stocks were on the way down, and many investors feared the worst (rightly so!). Some responded by throwing all of their efforts into investing in oil. The price of oil started to rise quickly, just as gold would three years later. The difference is that oil’s demise came much faster and the price fell far more dramatically. After peaking at $147 a barrel on July 11, 2008, it started to drop; by December, it was at just about $30. This obviously had devastating consequences for oil investors.

    Of course, much of this was due to the United States’ recession, which dragged down most of the developed world with it. However, the demise of oil could have been predicted had investors not bought into a one-sided narrative. People saw that demand for oil was high and supply was low, which seemed to be a clear signal to buy, but there were also clear indications that a bubble was forming. One indication of a bubble is that the market listens to only one narrative, ignoring the negative side of the story. Investors saw the high demand for oil but refused to see the impending recession.

    When it comes to investing in oil, it’s important to keep both sides of the equation in mind. In 2008, people saw the incredible demand for oil as a clear indication that they could make a lot of money off of it; however, that was a one-sided way of looking at it. Once the recession hit, demand slumped, and oil investors got hit hard. The lesson here is that these variables can change drastically, quickly, and against you. If you are going to invest in oil, it’s wise to be selective about which companies you buy stock in. Oil prices will swing and every oil company will be affected by them, but a company with a good business and smart management will weather the unpredictability better than a company without those essentials.

    Again, oil serves as a reminder that it is smartest to invest with the long term in mind. This makes you less likely to jump on a trending asset just because it looks like you will make an immediate profit. It also means you can weather the variability of the market better and wait out any undervalued asset, knowing that it will inevitably rise again.

    At the moment, oil looks set to remain volatile for a while. This is due to shale. Starting in 2011, the United States began producing more and more crude oil through fracking, the process by which oil and gas deposits are extracted from shale rock. This not only increased the supply of oil but took away Saudi Arabia’s ability to set the oil price for the world. With shale, the production of oil could respond to rising prices. This development is a significant change in the oil industry, as previously it was believed that bringing new supplies of oil online would take years and cost billions of dollars. So far, shale oil suppliers have not stopped production as prices have fallen; in fact, in Saudi Arabia, production has increased.

    The Dot Com Crash of 2000

    At the beginning of 1997, the Nasdaq Composite, a stock market index weighted toward tech stocks, stood at 1,291. By 1999, it had tripled, and it peaked at 5,132 in March 2000. For the rest of 2000, however, it slowly fell. By 2001, it was trading under 2,500; by October of the following year, it was at 1,108, below its pre-bubble level. How could this happen? Again, people fell victim to listening to only one side of the story. The internet was set to change everything; we were no longer restricted by geography, and technology was set to serve millions of consumers. Investors anticipated gigantic profits.

    This story wasn’t completely wrong; the internet has delivered many of the benefits promised to us. However, investors took these promises and invested ten-fold. While some of the internet companies, such as Amazon, did deliver the anticipated returns for the people who stayed with them, there was a lot of variability during this course. People who bought Amazon stocks for 100 dollars a share then have seen it trade at four times that; however, these people had to stick through 2001, when it dropped to seven dollars a share. For every Amazon, there are a dozen or more companies that never got off the ground.

    So how do you avoid getting burned by tech stocks? The only sure way is not to buy them at all. That is not to say we advise against buying them; rather, risk is inherent to investing, and trying to avoid it entirely is a waste of your time. Instead, keep in mind how much you are willing to risk, because it is possible that nothing will come of your tech investment. Consider whether the buzz around a company has pushed the price of its shares above their true value. Also consider how much of the company’s growth you will actually be able to benefit from: does the company have a competitive advantage in the market? How much of the money it earns from growth will have to be spent just to keep up with the competition? Finally, consider how long you are willing to wait to turn a profit. Take the example of Amazon: could you wait the decade it took to become a profitable investment?

    With years of experience in industries such as defense, finance, intelligence, law, and the private sector, our team at Meraglim™ has keen insight into market moves and can provide your team with accurate predictions of future bubbles and their impending bursts. When you need financial data analytics, take advantage of our team of experts paired with innovative risk assessment software. Contact Meraglim™ today to learn more about how we can help you.

  4. 2017 U.S. Economic Trends

    Though the financial crisis of 2008 is now nearly a decade behind us, we still see its impact in both the global and U.S. economies today. As we begin 2017, we reflect on the economic trends of the recovery period of the last seven years: slow and underwhelming growth, low labor market participation, and a high cost of urban living. Now that the “recovery” is over and things have returned mostly to normal, further economic growth will be significantly more difficult to achieve. As we face the new year, we can anticipate certain trends based on the past few years.

    Back to “Normal”

    By the end of 2016, economic growth had returned to its normal level. Unemployment has not been over 5 percent since September 2015, reaching a low of 4.6 percent in November 2016. GDP has also been growing steadily, but not impressively; adjusted for inflation and population growth, GDP grew 0.8 percent from the third quarter of 2015 to the third quarter of 2016. Private domestic investment grew to 17 percent of GDP during the recovery period but has plateaued since then. At 16 to 17 percent of GDP, investment is just barely treading water. Increasing domestic investment will be a high priority for policymakers.

    Additionally, labor market participation has not recovered from the recession. This can be partially accounted for by baby boomer retirement: those who were born in 1951 turned 65 last year. However, younger people are also less likely to be working or looking for work than workers were before the financial crisis. This low participation in the labor force will be one of the main challenges for policymakers.

    The one exception to the relative normalcy the economy has returned to is monetary policy. For years, inflation has remained below the Fed’s two percent target. As a result, the Fed has left its policies in recovery mode, where they will likely remain until inflation reaches two percent.

    Output

    For a recovered economy, GDP growth has been reasonable; however, it is far below what was expected before the recession. In previous recessions and depressions, GDP bounced back to its earlier trend. For example, after the Great Depression, incomes returned to their pre-crash path, and the economy continued to thrive for the next two decades. In the current environment, however, growth has been lower and slower than that trend. The recovery from this recession has been disappointing because it has been slower than anticipated and did not return the economy to normal in line with historical precedent.

    At the beginning of 2009, the CBO predicted that the economy would grow at more than 3.5 percent a year for four years starting in 2011 and would produce $20 trillion of output in 2016. In 2016, output actually came in below $19 trillion. This equates to roughly a 7.5 percent income loss for investors and workers in the U.S.

    As a result, the CBO has downgraded its expectations for the future of the U.S. economy. For 2017, it anticipates production of $19.4 trillion, as opposed to the $21 trillion it predicted in 2009. In 2009, it predicted the business sector would grow 24 percent by 2017; now, it estimates 14 percent growth.

    This lack of growth can be attributed to a combination of three factors: low productivity, low capital, and low labor. Weak productivity growth explains part of the output loss directly. It also matters indirectly: lower productivity means lower returns on investment, which further reduces output as investors shy away.

    Wage Growth

    Though economic growth has been underwhelming, wages have picked up. Still, wage growth remains lower than anticipated, and the pickup from 2015 to 2016 can be partially attributed to dropping energy prices.

    Average compensation is mostly determined by labor productivity; the link between compensation growth and labor productivity has been established for decades. Labor productivity, in turn, is determined by the amount of capital per worker and by total factor productivity, which reflects a combination of technology, regulatory waste, market flexibility, and management practices.

    The CBO’s estimates of potential labor productivity suggest that two factors have stifled wage growth: low productivity growth and low investment. In 2009, potential labor productivity was predicted to grow by 18 percent by now. That estimate has since been cut to 10 percent.
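
    As a rough illustration of that decomposition, the sketch below applies a standard growth-accounting approximation: labor productivity growth is roughly the capital share times the growth in capital per worker, plus total factor productivity growth. The capital share and growth figures used here are invented for illustration and are not CBO numbers.

        # Growth-accounting sketch; every figure here is invented for illustration.
        capital_share = 0.33                 # assumed share of output paid to capital
        capital_per_worker_growth = 0.010    # assumed 1.0% growth in capital per worker
        tfp_growth = 0.005                   # assumed 0.5% total factor productivity growth
        labor_productivity_growth = capital_share * capital_per_worker_growth + tfp_growth
        print(f"Labor productivity growth: {labor_productivity_growth:.2%}")
        # Over long horizons average compensation tends to track productivity,
        # so a first-pass wage growth estimate is simply the same number.
        print(f"First-pass wage growth estimate: {labor_productivity_growth:.2%}")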

    In 2017, policymakers must focus on raising wage growth by finding ways to increase labor productivity. This can be done in two ways. The first is to promote investment, both by cutting the marginal tax rate on new investments and by reforming regulations to reduce anti-competitive policies that discourage investors. Alternatively, policymakers can boost wage growth by increasing total factor productivity growth; however, this is easier said than done. Getting productivity growth even back to its pre-crisis rate would be a feat, and there is no single action that can be taken to increase productivity.

    Productivity is largely determined outside of policy. Still, the most promising action policymakers can take is regulatory reform. The issue is that few individual regulations are significant enough that changing them alone would improve the economy; reform must span multiple sectors and all levels of government.

    The Labor Market

    There are fewer Americans working today than there were before the recession. This is true even when excluding retired people or students. However, labor force participation did grow after the growth of real wages in 2015. In 2017, the optimistic view is that wage growth will continue and more non-working people will get off the bench and start contributing to the economy.

    Much of what has changed in the economy is due to two factors: changes in traditional family dynamics and the implementation of certain public assistance programs. In the traditional American nuclear family, up until as recently as 1980, men were cast as the breadwinners. Both the higher incomes of male-dominated professions and the weight of social expectation contributed to the strength of men’s attachment to the labor force. Men stayed at their jobs, even if wages fell, because their families required it.

    Now, women work more than they ever have, and many households are headed by women. There are more single mothers, and more women who are the sole breadwinners of their families, than ever before. There are also substantially more two-income households with lifestyles that require both spouses to work. As a result, fewer men are firmly committed to the labor force; if wages are not high enough, they have other options, such as living off a spouse or partner.

    An increased number of people are also living on disability insurance, permanently detached from the productive economy. Therefore, in 2017, welfare reform must be a high priority for policymakers, particularly with regard to disability insurance. As they stand, welfare programs penalize working; easing these restrictions would mean more people could start to work, supplement their income with welfare, and slowly transition off of it. Additionally, housing assistance discourages marriage; eliminating marriage penalties may encourage people to combine incomes, enabling them to get off these social programs.

    Increased Cost of Living

    The value of workers’ wages depends on the prices of the things they buy. With increased regulation and restrictions on trade, U.S. prices have risen, and these policies could stifle economic growth in 2017.

    At the federal level, a proposed increase in tariffs on energy would raise consumer costs, again blunting the benefits of economic growth for Americans. At the state and local levels, there are also regulations in place that will continue to raise prices. For example, in 2016, New York City placed limits on short-term home rentals. Regulations like these lower incomes, increase costs, and decrease flexibility in the market.

    On both coasts, limits on residential construction are increasing the cost of living. In Silicon Valley, the ever-growing technology sector is not having the economic impact it should because of regulations on construction and urban growth in the area. Historically, a thriving industry in a city has translated into shared prosperity. Now, local laws that limit development and population density prevent these cities from growing economically. Working-class renters are priced out of the housing market and forced to move. Yes, these regulations can have benefits. However, those benefits are often narrowly focused, aiding only the constituencies of those who hold power and oppose reform.

    Living standards are heavily impacted by these regulations. In 2015, the Heritage Foundation published a report that found the average household was set back $4,440 per year by just 12 regulations. Though there has been little improvement since then, Congress did repeal the crude oil export ban, which brought that list of regulations down to 11. To foster economic growth in 2017, policymakers at all levels need to prioritize deregulating consumer markets, most importantly housing, transportation, and energy.

    Conclusion

    In conclusion, while the U.S. economy did recover from the recession, the results have been underwhelming. There are fewer jobs, fewer workers, lower incomes, and higher prices than historical precedent would have predicted. Today, people are more content not to participate in the labor force and not to invest, which drags on economic growth.

    Since the 2008 financial crisis, the fixes put into place by policy makers have failed, including stimulus spending, bailouts, large deficits, and financial market regulation. These attempts have not increased investment or economic participation in general, and the cost of living question has not been addressed for the majority of the country.

    Going forward in 2017, it is imperative for policy makers to increase investment incentives, reengage non-workers into the economy, and lower the cost of living. With the right policies and the united efforts of investors, workers, and inventors, income growth can be restored to pre-recession levels, and the average household income will rise. Additionally, regulatory reform aimed at lowering the cost of living may also increase income for the average American.

    At Meraglim™, our unique combination of risk assessment software and a panel of experts allows you to predict financial market moves before anyone else. If you want to learn more about how our team can help yours, contact us today.

  5. SDRs and Impending Inflation and Panic

    Globally, we are on the precipice of a financial crisis. One event that is sure to spark inflation is the inevitable mass issuance of Special Drawing Rights (SDRs) by the International Monetary Fund (IMF). SDRs are widely misunderstood, so when this does occur, it is likely to pass unnoticed by the majority. At minimum, it will cause massive inflation; at worst, it will cause a loss of confidence in paper money. That would lead to a surge in gold buying, which would in turn send the price of gold skyrocketing. It would be wise for global leaders to reach an agreement comparable to Bretton Woods now to head off this panic. However, that is unlikely to happen until after a financial collapse, when they will have to reform the global monetary system in a state of panic. To fully understand what will happen when SDRs are issued on a large scale, you must first understand what exactly they are. In this blog, we will go over everything you need to know about SDRs.

    The role of the SDR

    In 1969, under the Bretton Woods fixed exchange rate system, the IMF created the SDR to serve as a supplementary international reserve asset. Under Bretton Woods, a participating country needed official reserves (government holdings of gold or foreign currencies) that could be used to purchase its domestic currency in global exchange markets in order to maintain its exchange rate. However, the international supply of the two main reserve assets, the US dollar and gold, could not support the expansion of trade taking place at the time. The SDR was born out of the need for a reserve asset controlled by the IMF.

    Shortly after the SDR was created, the Bretton Woods system collapsed, and the major currencies moved to floating exchange rates. At the same time, international capital markets grew, making it easier for governments to borrow, and countries accumulated international reserves, which decreased their dependence on SDRs. During the more recent financial crisis that began in the United States in 2008, however, SDR allocations totaling 182.6 billion were made, providing liquidity to the global economy and supplementing the reserves of several countries.

    SDRs are not a currency. In actuality, they act as a potential claim on the freely usable currencies of IMF members. SDR holders can exchange their SDRs for currency in one of two ways: first, through voluntary exchange arrangements between IMF members; and second, through the IMF designating members with strong external positions to buy SDRs from members with weaker ones. The SDR is also the unit of account of the IMF, as well as of several other international organizations.

    The value of the SDR

    At first, the SDR’s value was set as equivalent to 0.888671 grams of fine gold, which was also the value of one US dollar at the time. In 1973, when Bretton Woods collapsed, the IMF redefined the SDR as a basket of currencies. Currently, the basket includes the US dollar, the euro, the Chinese renminbi, the Japanese yen, and the pound sterling.

    The value of the SDR in relation to the dollar changes daily and is posted on the IMF website. Its value is the sum of each basket currency as valued in dollars, based on exchange rates quoted at noon each day in the London market.

    Every five years, the IMF’s Executive Board reviews the composition of the basket, unless an event occurs that leads the IMF to believe an earlier review is required. This ensures that the SDR reflects the relative importance of currencies within the global economy. The most recent review took place in November 2015, when the Board determined that, starting in October 2016, the Chinese renminbi would be included in the basket. During this review, the Board also adopted a new weighting formula. The formula assigns equal shares to the currency issuer’s exports and to a composite financial indicator. The financial indicator is itself made up of equal shares of official reserves denominated in the country’s currency held by authorities that do not issue that currency, foreign exchange turnover in the currency, and the sum of outstanding international liabilities and debt securities denominated in the currency.

    The weights of the currencies in the basket are as follows:

    • US dollar: 41.73 percent
    • Euro: 30.93 percent
    • Chinese renminbi: 10.92 percent
    • Japanese yen: 8.33 percent
    • Pound sterling: 8.09 percent

    The weight of each currency determined the amount of that currency included in the valuation basket that took effect in October 2016. These amounts are fixed for five years, until the next SDR valuation review. Because the currency amounts are fixed, the relative weights can change during the valuation period: a currency’s weight rises when it appreciates relative to the other currencies and falls when it depreciates. The next review will occur before October 2021.
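
    Mechanically, the daily SDR valuation described above is just arithmetic: multiply each fixed currency amount by that day’s dollar exchange rate and add the results. The Python sketch below illustrates the calculation with hypothetical currency amounts and exchange rates; the actual figures are published daily by the IMF.

        # Hypothetical figures for illustration only -- the real fixed currency
        # amounts and daily exchange rates are published by the IMF.
        currency_amounts = {        # units of each currency per 1 SDR (illustrative)
            "USD": 0.58, "EUR": 0.39, "CNY": 1.02, "JPY": 11.90, "GBP": 0.086,
        }
        usd_rates = {               # US dollars per unit of each currency (illustrative)
            "USD": 1.00, "EUR": 1.07, "CNY": 0.145, "JPY": 0.0089, "GBP": 1.24,
        }
        sdr_in_usd = sum(amount * usd_rates[ccy] for ccy, amount in currency_amounts.items())
        print(f"1 SDR is worth about {sdr_in_usd:.4f} US dollars")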

    The SDR interest rate

    The SDR interest rate is the basis for calculating the interest charged to borrowing members, as well as the interest paid to members for providing resources for IMF loans. It also determines the interest paid to members on their SDR holdings. The SDR interest rate is set weekly, based on a weighted average of interest rates on short-term debt instruments in the money markets of the SDR basket currencies.

    Who receives SDRs

    The IMF can allocate SDRs to member countries in proportion to their IMF quotas, providing each member with an unconditional reserve asset. The SDR system is self-financing: charges are levied on allocations and are then used to pay interest on SDR holdings. If a member does not use any of its SDR holdings, the charges it pays equal the interest it receives. If a member’s holdings rise above its allocation, it earns interest on the excess; conversely, if it holds fewer SDRs than it was allocated, it pays interest on the shortfall. SDR cancellations are also permitted but have never been used.
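
    The interest mechanics described above boil down to a simple net position: a member earns the SDR rate on its holdings and pays charges at the same rate on its cumulative allocation, so only the difference between holdings and allocation matters. A small sketch with invented numbers:

        # Illustrative only: the net interest position on a member's SDRs.
        sdr_interest_rate = 0.005          # hypothetical 0.5% annual SDR rate
        allocation = 1_000_000_000         # SDRs allocated to the member (hypothetical)
        holdings = 1_200_000_000           # SDRs the member actually holds (hypothetical)
        interest_earned = sdr_interest_rate * holdings     # interest received on holdings
        charges_paid = sdr_interest_rate * allocation      # charges paid on the allocation
        net = interest_earned - charges_paid
        print(f"Net annual interest: {net:,.0f} SDR")      # positive: holdings exceed allocation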

    The IMF can also prescribe certain non-members as SDR holders, including organizations such as the Bank for International Settlements (BIS) and the European Central Bank (ECB). These prescribed holders can hold and use SDRs in transactions with members or with other prescribed holders. The IMF cannot allocate SDRs to itself.

    A general SDR allocation must be based on a global need to supplement reserve assets. General allocations are considered for basic periods of up to five years, but they have only been made three times: in 1970–72, in 1979–81, and in 2009.

    On August 10, 2009, a special one-time allocation of SDR 21.5 billion was made. This allowed all IMF members to participate in the SDR system on an equitable basis and addressed the fact that countries that joined the IMF after 1981 had not received an SDR allocation until 2009. Together, the allocations total SDR 204.1 billion.

    Buying/selling SDRs

    Sometimes IMF members need SDRs to meet obligations to the IMF, or they may wish to sell SDRs to adjust the composition of their reserves. In these cases, the IMF acts as an intermediary between members to ensure that SDRs can be exchanged for freely usable currency. For the last two decades, the SDR market has functioned solely through voluntary trading arrangements. After the 2009 allocations, these voluntary arrangements were expanded to maintain liquidity in the market; there are now 32 voluntary SDR trading arrangements, 19 of them added since 2009. Voluntary arrangements have been the backbone of SDR liquidity since 1987. However, if there is ever insufficient capacity under these arrangements, a designation mechanism is in place: the IMF can designate members with sufficiently strong external positions to buy SDRs, up to certain amounts, from members with weaker positions. This ensures the liquidity and reserve-asset character of the SDR.

    At Meraglim™, our comprehensive understanding of the intricacies of the global economy can serve you and your team. When you need financial data analytics, you can rely on our panel of experts, paired with our innovative risk assessment software, to provide the information you need. Interested in how we can help your team? Contact us today.

  6. Currency Wars and the IMPACT Method

    In 2011, Meraglim™ founder Jim Rickards published his book “Currency Wars: The Making of the Next Global Crisis,” which revealed the potentially devastating impact of the current currency war on the US. The current currency war may seem unpredictable; however, with the assistance of Meraglim’s innovative financial models and risk assessment software, you can gain insight into global market movements before anyone else. In this blog, we will go over a brief history of currency wars, dive into the current one, and explain how Meraglim™ can put you in the best position to take advantage of it.

    A History of Currency Wars

    Before 1930, global trade was relatively limited, so while governments often debased their currencies, exchange rates were not a major concern and currency wars did not exist. Currency debasement was used to increase the domestic money supply, particularly to pay debts or finance wars. When nations competed economically, they practiced mercantilism, which attempted to limit imports while increasing exports but did not do so through devaluation. In the late 18th century, mercantilism began to fall out of favor as the free trade model gained ground. During this period, the gold standard was widely adopted, creating the conditions for currency wars as money held an intrinsic value; yet the opportunity did not arise. It presented itself after World War I, when several countries faced recessions and only a few returned to the gold standard. Even then, a currency war did not take place, because the United Kingdom wanted to restore its currency to its pre-war value and therefore cooperated with other countries. By 1925, many countries had rejoined the gold standard.

    Then the Great Depression happened. During this time, the gold standard was largely abandoned. Unemployment was widespread, making devaluation quite common; however, it was not truly competitive, as devaluation was so prevalent that nations rarely gained a lasting advantage. Exactly when the 1930s currency war started is debated, but it involved the US, France, and Britain. In the 1920s, these countries had parallel interests and worked collectively to strengthen the pound sterling. However, after the Wall Street crash of 1929, France lost faith in sterling’s value and began to sell it. The US and France began sterilizing their inflows of money, hoarding gold instead of using it to expand their money supplies. This contributed to the sterling crisis of 1931, which forced Britain to take its currency off the gold standard. For years afterwards, competitive devaluation and retaliatory tariffs disrupted international trade. The currency war finally ended in 1936 with the Tripartite Agreement.

    The period from the end of World War II until 1971 is considered the Bretton Woods era, when competitive devaluation could not occur because of the semi-fixed exchange rates of the Bretton Woods system. Global growth was also high during this period, so even if a currency war had been possible, there was little motivation to wage one. From 1973 to 2000, the conditions for currency wars existed, but not enough states wanted to devalue simultaneously for one to break out. In the 1980s, the US wanted to devalue, but it did so cooperatively under the Plaza Accord. In the 1990s, a renewed movement toward free markets made the prevailing attitude one of not intervening in economies, even to correct current account deficits.

    However, free-market orthodoxy was undermined during the 1997 Asian crisis, when economies in Asia, running low on foreign reserves, were forced to accept low prices for their assets. Afterwards they intervened regularly, adopting a strategy of pursuing export opportunities while building up foreign reserves. This did not result in a currency war because other economies largely accepted it: their citizens benefited from the cheap imports Asia supplied. And while the US current account deficit grew, there was not much concern among economists.

    By 2009, currency war conditions had returned, as the economic downturn hit global trade. Economies were very concerned with their deficits, and export-led growth became the idealized strategy given Asia’s success with it. During this time, the US and China were the major players in this currency war, pushing up the value of many other economies’ currencies. The US put increasing pressure on China to allow its currency to appreciate. After much pressure, China allowed a roughly two percent appreciation, but this did little to ease Western concerns. US pressure finally came to a head in September 2010, when the yuan appreciated sharply. In 2012, there was a movement for major economies to work together more cooperatively.

    For a while, panic about a currency war quieted, but it was re-sparked when the Bank of Japan announced a bond-buying program that would likely weaken the yen. This caused alarm, with many analysts claiming that Japan was intentionally trying to start a currency war. Ultimately, however, the move was recognized as a strategy for boosting the economy rather than competitive devaluation. While commentators’ concerns about a continuing currency war eased, the possibility still loomed, this time not between Japan and the US, but between the US and Germany. In October 2013, the US criticized Germany for its large current account surplus, which it argued was slowing the global economy.

    In 2014, a currency war still loomed, this time as nations began devaluing their currencies to address concerns about deflation. In January 2015, the European Central Bank began a quantitative-easing program that many saw as an escalation of the currency war, though it was not intended to devalue the euro. In August 2015, China devalued the yuan in response to poor export numbers in July. This came at the expense of other major exporting economies whose currencies had already depreciated sharply, and it prompted further devaluations across Asia, including in Vietnam and Kazakhstan.

    Despite what certain analysts say, the currency war continues today. Just as in a real war, there are periods of quiet in a currency war. Fortunately, with Meraglim™, you can have an inside look at when the next event may occur. While the currency war rages on, you and your team can benefit from it with Meraglim’s help.

    IMPACT Method

    Jim Rickards, one of our founders, created the IMPACT method to forecast these turning points and allow investors to take advantage of them. IMPACT stands for International Monetary Policy Analysis and Currency Trading. This model is one of the many that Meraglim™ implements to help our clients navigate global capital markets. How do we know that this new, powerful model is effective? After a year of research, we have found many examples of trading opportunities over the last 20 years that could have delivered substantial gains for our clients.

    With the aid of the IMPACT model, we identify emergent properties, giving you the opportunity to benefit from our predictions. This revolutionary model is only the tip of the iceberg in terms of what Meraglim™ can offer to your team. If you are a global leader or institutional investor, you can benefit from Meraglim’s expertise in a variety of fields and innovative risk assessment software. Contact us today to learn more about how Meraglim™ can help your team in the global money market.