meta minds

thumbnail

Enigma machine: the device that changed WWII

Cristian Gal - CSO
The Enigma machine was the creation of Dr. Arthur Scherbius. The device was capable of enciphering messages for secure communications. In 1923 he set up his Chiffriermaschinen Aktiengesellschaft (Cipher Machines Corporation) in Berlin to manufacture this product. [separator] The German military, however, was producing its own versions. The German navy introduced its version in 1926, followed by the army in 1928 and the air force in 1933. The military Enigma allowed an operator to type in a message, then scramble it by means of three to five notched wheels, or rotors, which mapped the typed letters to different letters of the alphabet. The receiver needed to know the exact settings of these rotors in order to reconstitute the coded text. The Poles managed to crack the commercial Enigma versions by reproducing the internal parts of the machine, but that was not enough for decoding the military versions. During World War II, the military versions of Enigma were heavily used by the Germans, who were convinced that their messages could not be decoded. The Allies established a special division at Bletchley Park, Buckinghamshire, whose task was to decode German communications. The best mathematicians were recruited there and, with the intelligence from the Poles, they built early computing machines to work through the vast number of permutations in the Enigma settings. In the meantime, the Germans were upgrading their machine by improving the hardware used for setting the code in each machine. The use of daily-changing codes for the machine also made the Allies’ job a lot harder. One of the brilliant mathematicians involved in decoding Enigma was Alan Turing. Born in 1912, in London, he studied at Cambridge and Princeton universities. Turing played a key role in inventing, along with fellow code-breaker Gordon Welchman, a machine known as the „Bombe”. This device helped to significantly reduce the work of the code-breakers. From mid-1940, German Air Force signals were being read at Bletchley and the intelligence gained from this was quite helpful. From 1941, messages sent using the army's Enigma were also being read. The one used by the German navy, on the other hand, was not that easy to crack. Capturing Enigma machines and codebooks from different German units helped decipher communications, but with a considerable delay. To compensate for this, the Allies started hunting for ships and planes that carried Enigma codes in order to decode communications faster. In July 1942, Turing developed a complex code-breaking technique he named „Turingery”. This method helped the team at Bletchley understand another device that enciphered German strategic messages of high importance - the „Lorenz” cipher machine. The Bletchley division’s ability to read these messages contributed greatly to the Allied war effort. Alan Turing’s legacy came to light long after his death. His impact on computer science was widely acknowledged: the annual „Turing Award” has been the highest accolade in that industry since 1966. But the work done at Bletchley Park – and Turing’s role there in cracking the Enigma code – was kept secret until the 1970s. In fact, the full story was not known until the 1990s. It has been estimated that the efforts of Turing and his fellow code-breakers shortened the war by several years. What is certain is that they saved countless lives and helped determine the course and outcome of the conflict.
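The rotor principle described above can be made concrete with a short sketch. The following is a minimal, simplified rotor machine in Python: it keeps the odometer-style rotor stepping and the requirement that sender and receiver share the same starting positions, but omits the reflector, ring settings and plugboard of the real Enigma. The wirings are the historical rotor I-III permutations; the starting positions and the message are illustrative only.

```python
import string

ALPHABET = string.ascii_uppercase

# Historical wirings of Enigma rotors I-III; each string is a permutation of the alphabet.
ROTORS = [
    "EKMFLGDQVZNTOWYHXUSPAIBRCJ",
    "AJDKSIRUXBLHWTMCQGZNPYFVOE",
    "BDFHJLCPRTXVZNYEIWGAKMUSQO",
]

def _step(positions):
    """Advance the rotors like an odometer: the fast rotor moves on every key press."""
    for i in range(len(positions)):
        positions[i] = (positions[i] + 1) % 26
        if positions[i] != 0:          # carry to the next rotor only on wrap-around
            break

def _through(c, wiring, pos, inverse=False):
    """Pass a letter index through one rotor, forward or backward, at offset pos."""
    if not inverse:
        return (ALPHABET.index(wiring[(c + pos) % 26]) - pos) % 26
    return (wiring.index(ALPHABET[(c + pos) % 26]) - pos) % 26

def crypt(text, start_positions, decrypt=False):
    positions = list(start_positions)
    out = []
    for ch in text.upper():
        if ch not in ALPHABET:
            out.append(ch)             # pass spaces and punctuation through unchanged
            continue
        _step(positions)
        c = ALPHABET.index(ch)
        rotor_order = reversed(range(len(ROTORS))) if decrypt else range(len(ROTORS))
        for i in rotor_order:
            c = _through(c, ROTORS[i], positions[i], inverse=decrypt)
        out.append(ALPHABET[c])
    return "".join(out)

secret_setting = (4, 17, 22)                          # the key both sides must share
cipher = crypt("ATTACK AT DAWN", secret_setting)
plain = crypt(cipher, secret_setting, decrypt=True)   # only readable with the same setting
```

With a different starting setting, the decryption step produces gibberish, which is exactly why recovering the daily settings was the code-breakers' central problem.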
Read more >

    thumbnail

    How Hackers Benefit from the Coronavirus Crisis

    Sergiu Popa - Director of cybersecurity
    Computer hacking – a fascinating subject populated with tales from the scholars of trivia, who have often heard about hacking on TV, seen it in a movie or acquired a couple of certifications which they believe entitle them to call themselves hackers. [separator] We give you hacking insights based on experience, not hypothetical scenarios created in labs. How can a hacker exploit the coronavirus? In the times of the COVID-19 crisis, forecasts estimated that cyber-crime would increase by 400%. Those estimates turned out to be low: cyber-crime actually increased far more than that.   Let's delve into the subject. Social engineering is probably the most potent way of delivering attack payloads to corporate environments whose users’ only training consists of less than mentally challenging security mantras (change your password, don’t click on these links, click on these other links, etc.). Furthermore, the psychological nature of a crisis such as the one we are facing now tends to excite, at the very least, a basic human trait: curiosity. Throw in curiosity and a cunning manner of delivering a message and the result is called “victims”. Let’s analyze the following examples, which we introduce in a somewhat random fashion, but which will make sense in the end.   The crisis pushed companies to adopt working from home as the way to move forward. It is obvious how this move by itself can be exploited by the clever hacker. Hackers identify the first element that creates an exploit: confusion. A study indicates that oral communication, when passed along a chain of more than 5 people, is diluted to 20% or less of its original content. It is quite easy to imagine an IT department training: “Guys, do not click on phishing links. No spamming links. We may update our VPN to incorporate multi-factor authentication.” Most people are unable to identify phishing links. It can be quite hard sometimes, as some of these links are actually legitimate, but their purpose is to lead to spear-phishing. Please consult https://www.phishtank.com/ and test your phishing “street smarts”. Then, people are told that they may update their VPN. Well, that right there can make all hell break loose. If users receive an email from their IT department asking them to download a new VPN client, 95% of them will attempt to do it, while only 30% of that 95% will succeed in installing the malicious package (for lack of computer literacy when it comes to installing programs).   Imagine the next scenario: a hacker wants to break into a bank, but its security is quite strong and he may not want to create mathematical models of deception for its network analysis software. What can he do? Quite simple. All the employees’ profiles are listed on LinkedIn. Great. What’s next? Gathering social media information on these people, he can somehow obtain a score of who is prone to a degree of hypochondria. Then he emails them, posing as the hospital, and tells them that, according to their records, there is a high probability that they may be infected with COVID-19 and that they may want to register for a free COVID test at its website, https://ExampleHospital.com, where they will be asked to fill in their address, DOB, phone number and email, and eventually fax or upload a copy of their NI document. The skilled operator (hacker) will now go and brute-force the Wi-Fi password of their house. 
    Or they might get more creative and eventually offer some chat or support software which enables the victim to talk to others in the same situation or consult with a live doctor. Of course, the “get-you-well” software is nothing more than a trojan, a RAT (remote administration tool).   This is just a casual example of what a hacker might do. But let’s consider the following scenario: the employees of company X receive an email from the IT department stating that their picture has to be uploaded to the new SharePoint directory for a work-from-home directory creation and the distribution of COVID-19 testing toolkits. The attachment accompanying that request might be ransomware, adware or some other malware. Usually, the common criminal will send ransomware. The average criminal will send some malware/adware, and the smart criminal will send an APT whose purpose is to lie dormant and quietly redirect terabytes of Google traffic to link-shortening services for their benefit, a situation that can go on for years.   As we can see, the COVID-19 crisis, if played on the right soft psychological side of people, can have devastating effects on a company’s security systems. As always, knowledge is power. At Metaminds, we pay close attention to every requirement our clients express and make sure we address their concerns with flawless, custom-designed solutions to ensure the safety of their operations.
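Since the article points out how hard phishing links are to spot, here is a toy Python sketch of the kind of quick checks a user or a mail filter can run on a URL. The keyword list, scoring weights and example URLs are illustrative assumptions; real detection relies on reputation feeds such as PhishTank and on machine learning, not on a handful of rules.

```python
import ipaddress
from urllib.parse import urlparse

# Hypothetical keywords that often show up in lure URLs; tune to your own context.
SUSPICIOUS_KEYWORDS = {"login", "verify", "update", "covid", "test-kit"}

def phishing_score(url: str) -> int:
    """Count simple red flags in a URL; higher means more suspicious."""
    parts = urlparse(url if "://" in url else "http://" + url)
    host = parts.hostname or ""
    score = 0
    try:
        ipaddress.ip_address(host)
        score += 2                       # raw IP address instead of a domain name
    except ValueError:
        pass
    score += host.count("-") >= 2        # hyphen-stuffed lookalike domains
    score += host.count(".") >= 3        # long subdomain chains
    score += "@" in url                  # userinfo trick: the real host hides after '@'
    score += any(k in url.lower() for k in SUSPICIOUS_KEYWORDS)
    return score

print(phishing_score("http://city-hospital.example.org.covid19-test-kit.example.net/login"))  # 3
print(phishing_score("https://hospital.example.org/"))                                        # 0
```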
    Read more >

    thumbnail

    Telstar Satellite: the Launch of the Modern World

    Marius Marinescu - CTO
    Trans-Atlantic television and other communications became a reality when the Telstar communications satellite was launched in 1962. A product of AT&T Bell Laboratories, the satellite was the first orbiting international communications satellite that sent information to tracking stations on both sides of the Atlantic Ocean. Initial enthusiasm for making phone calls via the satellite waned after users realized there was a half-second delay as a result of the 25,000-mile transmission path. [separator] Even if nowadays a phone call seems like an ordinary thing, IT professionals dealt with many difficulties in the past to make fixed phone calls a reality. Nowadays we are concerned with making our conversations safer by solving the different security breaches we are confronted with, but back then people had other issues. Quick recap for the millennials: long before everyone had a smartphone or two, the implementation of a telephone was quite different from today. Most telephones had real, physical buttons. Even more bizarrely, these phones were connected to other phones through physical wires. Weird, right? These were called “landlines”, a technology that is still employed in many households around the world. It gets even more bizarre. Some phones were wireless (almost like your smartphone) but, for some reason, they couldn’t get a signal more than a few hundred feet away from your house. These were “cordless telephones”. Hackers have been working on deconstructing the security behind these cordless phones for a few years now and have found that they aren’t secure at all. While nothing is 100% secure, many people thought that DECT and 5.8 GHz phones were safe, at least more so than the cordless phones from the 80s and 90s. While DECT has been broken for a long time, 5.8 GHz phones were considered to be safer than 900 MHz phones, as scanners are harder to come by in the microwave bands, because very few people have a duplex microwave transceiver sitting around. But everything is bound to happen eventually. With the advent of cheap SDR (software-defined radio), hackers demonstrated that listening to and intercepting any phone call they want is actually possible. Using a duplex microwave transceiver (very cheap at ~$300 for the intended purpose) they freely explored the radio system inside these cordless phones. After taking a duplex microwave transceiver to a cordless phone, hackers found that the phone didn’t operate exclusively in the 5.8 GHz band. Control signals, such as pairing a handset to a base station, happened at 900 MHz. Here, a simple replay attack was enough to get the handset to ring. It gets worse: simply by looking at the 5.8 GHz band with a transceiver, they found an FM-modulated voice channel when the handset was on. That’s right: the phone transmits the voice signal without any encryption whatsoever. This isn’t the first time hackers found a complete lack of security in cordless phones. A while ago, they explored the DECT 6.0 standard, a European cordless phone standard for PBX and VOIP. There was no security there, either. It would be chilling if landlines were as widespread today as they were some 20 years ago, because the tools to perform a landline hack are freely available and thoroughly documented.
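To illustrate why an unencrypted FM voice channel offers no protection once the raw samples are captured, here is a minimal numpy sketch of quadrature FM demodulation, the textbook method an SDR receiver applies to complex baseband samples. The sample rate, tone and deviation below are made-up test values, not parameters of any particular phone.

```python
import numpy as np

def fm_demodulate(iq: np.ndarray, fs: float) -> np.ndarray:
    """Quadrature FM demodulation: the phase step between consecutive complex
    samples is proportional to the instantaneous frequency, i.e. the audio."""
    phase_step = np.angle(iq[1:] * np.conj(iq[:-1]))
    return phase_step * fs / (2 * np.pi)

# Synthesize one second of a 1 kHz tone frequency-modulated onto a complex
# baseband carrier, the kind of signal an SDR hands back after tuning.
fs = 48_000
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 1_000 * t)
deviation = 5_000                                  # peak frequency deviation in Hz
phase = 2 * np.pi * deviation * np.cumsum(audio) / fs
iq = np.exp(1j * phase)

recovered = fm_demodulate(iq, fs)                  # ~ deviation * audio: the voice is back
```

Nothing in this chain involves a key or a secret; anyone who can tune to the channel can recover the audio.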
    Read more >

    thumbnail

    ‘Tron: Legacy’ – a Data Project Turned Blockbuster

    Cristian Gal - CSO
    Few people realize that making movies like Tron: Legacy is also a huge data project. Doing a movie with that much computer-generated content creates an enormous amount of data, an amount that is now measured in petabytes. Also, because the computer-generated content is integrated into the filmed content, the CGI companies involved usually work at some point with a more or less finished version of the movie. That makes them a prime target for hacking attempts. [separator] The HBO hack of 2017, when Game of Thrones scripts and episodes of Curb Your Enthusiasm and Ballers were released online before their air dates, caused chaos for the premium cable network. The hackers were motivated by greed. The organization that went by the name Mr. Smith was seeking a ransom in the range of $6 million to prevent the release of this highly sensitive information. And this data breach is far from the first example the entertainment industry has faced.   The Sony hack of 2014, in which thousands of confidential company documents and emails were released, had a long-lasting impact on the company. It led to the ouster of Amy Pascal, head of Sony Pictures Entertainment, turned „The Interview” into a box-office bomb, triggered a slew of lawsuits and, in general, caused a lot of pain and embarrassment to a lot of people.   And then there’s the release of Quentin Tarantino’s „The Hateful Eight” script. The Oscar-winning director closely guards his material. When it turned out that someone had leaked an early draft of the Western whodunit, Tarantino actually considered shelving the project altogether. Even though Tarantino went on to make the movie after all, the episode underscores an issue that many in Hollywood face, whether working in production or at a studio. That issue is: “how to ensure the security of information and intellectual property?”.   A movie or TV production can employ hundreds of people. And with each production there are countless documents and files – scripts, budgets, payroll documents and video – that could be very detrimental to the production and its staff if leaked. Knowing hackers are looking for high-value targets, having a strong data security system in place is of the utmost importance. Unfortunately, most in the entertainment industry – be they productions or studios – aren’t using the enterprise-grade protection they need to keep their information safe. Especially when it comes to productions, they’re simply using the most rudimentary of storage and security services.   To secure such a great amount of movie data against hacking and premature leaking, Hollywood had to embrace digital security. As many other industries before it, Hollywood turned to a new class of technology companies that for the last few years have been offering ways to manage the data slipping into employees’ personal smartphones and Internet storage services. They wrap individual files with encryption, passwords and monitoring systems that can track who is doing what with sensitive files.   The most sensitive Hollywood scripts were — and, in many cases, still are — etched with watermarks, or printed on colored and even mirrored paper to thwart photocopying. Letter spacing and minor character names were switched from script to script to pinpoint leakers. Plot endings were left out entirely. The most-coveted scripts are still locked in briefcases and accompanied by bodyguards whose sole job is to ensure they don’t end up in the wrong hands.   But over the last decade, such measures have begun to feel quaint. 
    Watermarks can be lifted. Color copiers don’t care what color a script is. Even scripts with bodyguards linger on a computer server somewhere. And once crew members started using their personal smartphones on set, people started leaving with everything they had created for the movie production.   So the movie studios had to employ security solutions that give file creators the ability to manage who can view, edit, share, scan and print a file, and for how long. If hackers steal the file off someone’s computer, all they will see is a bunch of encrypted characters. Also, some Hollywood studios are keeping their movie-editing systems off the Internet, a practice known as “air-gapping”, so that if hackers breach their internal network, they can’t use that access to steal the data.   One of the quirkier features that some studios use is a digital spotlight view that mimics holding a bright flashlight over a document in the dark. Everything beyond the moving circular spotlight is unreadable. The feature makes it difficult for anyone peering over your shoulder — or a hacker pulling screenshots of your web browser — to read the whole document.
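The file-wrapping idea above can be sketched in a few lines. This is not the studios' actual tooling, just a minimal Python example of password-based file encryption using the cryptography library's Fernet recipe; the file name, password and iteration count are illustrative choices.

```python
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(password: bytes, salt: bytes) -> bytes:
    """Turn a password into a Fernet key via PBKDF2, so only password holders can decrypt."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(password))

def encrypt_file(path: str, password: bytes) -> str:
    salt = os.urandom(16)                       # random salt stored next to the ciphertext
    with open(path, "rb") as f:
        token = Fernet(derive_key(password, salt)).encrypt(f.read())
    out_path = path + ".enc"
    with open(out_path, "wb") as out:
        out.write(salt + token)
    return out_path

def decrypt_file(enc_path: str, password: bytes) -> bytes:
    with open(enc_path, "rb") as f:
        blob = f.read()
    salt, token = blob[:16], blob[16:]
    return Fernet(derive_key(password, salt)).decrypt(token)

# encrypt_file("script_draft.pdf", b"correct horse battery staple")  # hypothetical file
```

Anyone who copies the .enc file without the password sees only ciphertext, which is the scenario described above; the commercial products add key management, per-user rights and audit trails on top.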
    Read more >

    thumbnail

    Brief History Of Artificial Intelligence, Part I – “Early Contributors”

    Stefan Iliescu - CDS
    In this first article of our series dedicated to the brief history of AI, we will focus on essential achievements in this field in the pre-computer age. The dominant research method at the time was to look to nature for ideas for solving difficult problems. In the absence of an understanding of how natural systems function, the research could only be experimental. So the most daring of the researchers approached the creation of mobile automatons (pre-robots) as the first attempt to create artificial intelligence.   [separator]
    Grey Walter’s “Tortoise”
    Born in the United States but educated in England, Walter failed to obtain a research fellowship at Cambridge and carried out neurophysiological research in various places around the world. Heavily influenced by the work of the Russian physiologist Ivan Pavlov and of Hans Berger (the inventor of the electroencephalograph for measuring electrical activity in the brain), Walter made several discoveries in the field of brain topography using his own version of the EEG machine. The most notable was the introduction of triangulation as a method of locating the strongest alpha waves within the occipital lobe, thus facilitating the detection of brain tumors or lesions responsible for epilepsy. He pioneered EEG-based brain topography using a multitude of spiral-scan CRTs coupled to high-gain amplifiers.   Walter remained famous as an early contributor to the AI field mainly for making some of the first mobile automatons in the late ’40s, named tortoises (after the tortoise in “Alice in Wonderland”) because of their slow speed and shape. These battery-powered automatons were prototypes to test his theory that a small number of cells can induce complex behavior and choice. As a very simple model of the nervous system, they implemented a two-neuron architecture by incorporating only two motors, two relays, two valves, two condensers, and one sensor (ELSIE had a light sensor and ELMER a touch sensor). ELSIE scanned the surroundings continuously with its rotating photoelectric cell until a light source was detected. If the light was too bright, it moved away; otherwise, ELSIE moved toward the light source. ELMER explored the surroundings as long as it didn’t encounter any obstacles; otherwise, ELMER retreated after the touch sensor had registered a contact. Both versions of the tortoise moved toward an electric charging station when the battery level was low.   Walter noted that the automatons “explore their environment actively, persistently, systematically, as most animals do”. This is what happened most of the time, except when a light source was attached to ELSIE’s nose. The automaton started “flickering, twittering and jigging like a clumsy Narcissus” and Walter concluded that this was a sign of self-awareness. Even though many scientists today believe that robots will not achieve self-awareness, Walter’s experiment succeeded in proving that complex behaviours can be generated using only a few components and that biological principles can be applied to robots.   Subsequent developments, some remaining only at a theoretical stage, promised substantial improvements in the direction of intelligent behaviour, with Walter trying to add “learning” skills – even if only in a primary form, such as Pavlovian conditioning. 
    For example, after the incorporation of an auditory sensor, blowing a whistle immediately before contact between ELMER and an obstacle would cause ELMER to subsequently perform an obstacle-avoidance maneuver before contact occurred – if it “heard” the whistle. Although it seems that Walter carried out this experiment, the echo was apparently not noticeable in the scientific world at that time.
    Johns Hopkins’ “Beast”
    Another well-known realisation of a mobile automaton is the “Beast”, a project from the ’60s by a team of engineers from the Johns Hopkins University Applied Physics Laboratory that included Ron McConnell (Electrical Engineering) and Edwin B. Dean, Jr. (Physics). With a height of half a meter, a diameter of over 200 cm, and a weight of almost 50 kilograms, “Beast” was built to perform two tasks only: explore the surroundings and survive on its own. Initially equipped with physical switches, “Beast” moved “freely”, following the white walls of the laboratory and avoiding potential obstacles encountered. When the battery level was low, “Beast” “looked for” a black wall socket and plugged itself in for power. Lacking a central processing unit, its control circuitry consisted of multiple transistor modules that controlled analogue voltages; three types of transistors allowed three classes of tasks:   – make a decision when a sensor was activated, by emulating Boolean logic; – specify a period to do something, by creating timing gates; – control the pressure for the automaton’s arm and the charging mechanism, by using power transistors.   A second version also received a photoelectric cell in addition to an improved sonar system. With the help of two ultrasonic transducers, “Beast” could now determine the distance, its location within the perimeter, and obstructions along the path – thus exhibiting significantly more complex “behaviour” than Walter’s tortoises. Performances such as stopping, slowing down or bypassing an obstruction, or recognising doors, stairs, installation pipes, hanging cables or people and taking the appropriate actions, are perhaps the most significant technical achievements of the pre-computer age.   In his response to Bill Gates, who predicted in 2008 that the “next” hot field would be robotics, McConnell humorously stated about their work from the ’60s: “The robot group built two functioning prototypes that roamed and “lived” in the hallways of the lab, avoiding hazards such as open stairwells and doors, hanging cables and people while searching for food in the form of AC power on the walls to recharge their batteries. They used the senses of touch, hearing, feel and vision. Programming consisted of patch cables on patch boards connecting hand-built logic circuits to set up behaviour for avoidance, escape, searching and feeding. No integrated circuits, no computers, no programming language. With a 3-hour battery life, the second prototype survived over 40 hours on one test before a simple mechanical failure disabled it.”
    Ashby’s “Mobile Homeostat”
    Indeed, the most intriguing prototype that saw the light of day before the computer age was The Homeostat¹, created in 1948 by W. Ross Ashby, Research Director at the Barnwood House Hospital in Gloucester, and presented at the Ninth Macy Conference on Cybernetics in 1952. The Homeostat contained four identical control switch-gear kits that came from WW2 bombs (with inputs, feedback, and magnetically driven, water-filled potentiometers), each transformed into an electro-mechanical artificial neuron. 
    The purpose of this prototype was extremely ambitious for that time, namely to be a model for all types of behaviour – by addressing all living functions.   During the presentation, The Homeostat was able to perform tasks that indicated some cognitive abilities, i.e., the ability to learn and adapt to the environment. But the approach was unusual, to say the least: while other automatons of the time exhibited a dynamic character by exploring the environment, the goal of the Homeostat was to reach the perfect state of balance (i.e. homeostasis). This approach was intended to support the author’s principle of ultra-stability and his law of requisite variety. Based on the concept of “negative feedback,” the Homeostat moved incrementally along the path between the current state and the final state of equilibrium, the steps representing the automaton’s concrete responses to changes in the environment (which affected the state of equilibrium). In detail, the “Law of Requisite Variety” (as the author called it) stated that, in order to counter the variety of disturbances from the external environment, a system needs a “goal-seeking” strategy and a wide variety of possible responses to them. For the animal world, a final goal like “no goal” was equivalent to achieving immortality. The part of “cognitive intelligence” embedded in the activity of the automaton was precisely this “goal-seeking” approach, and, from a technical standpoint, “its principle is that it uses multiple coils in a milliammeter & uses the needle movement to dip in a trough carrying a current, so getting a potential which goes to the grid of a valve, the anode of which provides an output current”. But the audience was not very convinced by this principle, and, on the whole, its activity could be classified as that of a “goal-less goal-seeking machine.” It was Grey Walter who called The Homeostat a “Machina sopor,” describing it as a “fireside cat or dog which only stirs when disturbed, and then methodically finds a comfortable position and goes to sleep again,” in contrast with his own creation, “The Tortoise,” called “Machina speculatrix,” which embodies the idea that “a typical animal propensity is to explore the environment rather than to wait passively for something to happen.” It was later learned that Alan Turing advised Ashby to implement a simulation on the ACE² computer instead of building a special machine.   However, The Homeostat made a significant comeback in the 1980s, when a team of cognitive researchers from the University of Sussex led by Margaret Boden created several practical robots incorporating Ashby’s ultrastability mechanism. Boden was fascinated by the idea of modeling an autonomous goal-oriented creature, arguing that the future of cognitive science is one based on The Homeostat.
    Conclusions
    The cybernetics of the ’60s is long gone, and the current possibilities of computer simulation are infinitely more capable than anything that could be imagined or created by the geniuses of those times, and within reach of any school student. Suffice it to say that the level of tropism of the Tortoises is equivalent to that of a simple bacterium, and The Beast matches the coordination ability of a large nucleated cell like Paramecium, a bacterial hunter; or that what was then presented as a continuous adaptation of responses to external stimuli is far from what we understand and have today in terms of learning – supervised or unsupervised. 
    But this evolution has not been merely the result of the appearance of computer technology and its fantastic development. As I mentioned in the introduction, the history of AI overlaps the history of cognitive science. Achievements in multiple fields have contributed to today’s level of AI, including linguistics, psychology, philosophy, neuroscience, anthropology, and, of course, mathematics. Simply put, even though in most cases they were considered successes, we can say that these mobile automatons of the pre-computer era were nothing more than experiments carried out before theoretical research, not during it. The rudimentary means of construction, the lack of a common language in the field and the mismatch between the models and the implementation mechanisms often made the researchers of the time doubt each other’s achievements³; unimaginable today, when everyone understands that a self-driving car can anticipate complex accidents better than all the drivers involved, or that a software robot crushes the world chess champion without ever training against anyone other than itself.   [separator]   Footnotes:
    1. In biology, homeostasis is the state of steady internal, physical, and chemical conditions maintained by living systems.
    2. The Automatic Computing Engine (ACE) was a British early electronic serial stored-program computer designed by Alan Turing.
    3. With regard to Ashby’s Homeostat, the cyberneticist Julian Bigelow famously asked “whether this particular model has any relation to the nervous system? It may be a beautiful replica of something, but heaven only knows what.”
      References:
    1. Steve Battle – “Ashby’s Mobile Homeostat”
    2. Margaret A. Boden – “Mind as Machine, A History of Cognitive Science”
    3. Margaret A. Boden – “Creativity & Art, Three Roads to Surprise”
    4. Stefano Franchi, Francesco Bianchini – “The Search for a Theory of Cognition: Early Mechanisms and New Ideas”
    5. http://cyberneticzoo.com/cyberneticanimals/1962-5-hopkins-beast-autonomous-robot-mod-ii-sonarvision-jhu-apl-american/
    6. http://www.rutherfordjournal.org/article020101.html
    Read more >

    thumbnail

    The Hawking Radiation: Passport to Escape From a Black Hole

    Stefan Iliescu - CDS
    “My goal is simple. It is a complete understanding of the universe, why it is as it is, and why it exists at all”, said Stephen Hawking, the famous theoretical physicist and cosmologist of the 20th century. The quote emphasizes that he was not one to settle for an easy challenge, a trait that we hope lies at the core of every individual in our team. The task he set for himself was too large for an individual to complete in a lifetime but, even so, the renowned British physicist accomplished substantial parts of it, leading the world to understand bits of the universe.   Stephen Hawking devoted all his resources to the study of black holes, individually and in collaboration with other acclaimed researchers. His debut took place in 1970 when, together with Sir Roger Penrose, he established the theoretical basis (the Penrose–Hawking singularity theorems) for the formation of black holes. Their prediction was confirmed by recent observational experiments (2015-2019) at the Laser Interferometer Gravitational-Wave Observatory (LIGO), which detected gravitational waves emitted by colliding (or merging) black holes.   The same theoretical basis predicted the expansion of a black hole (which translates into an increase in the area of its event horizon) as it absorbs matter and energy from its vicinity. According to the second law of thermodynamics, the entropy of the black hole can only increase and, as entropy is an energy-dependent function associated with a temperature, scientists wanted to know how high the temperature of a black hole can go. Here comes perhaps the most significant contribution so far in the field, namely Hawking radiation, which may be responsible for keeping the temperature below a „certain limit”. He uncovered that black holes, once thought to be static, unchanging, and defined only by their mass, charge, and spin, are actually ever-evolving engines that emit radiation and evaporate over time. Although this contribution has not yet been proven by any experiment, which is why Hawking did not win the Nobel Prize in his lifetime, it is widely recognized by physicists in the field as support for a unifying theory of quantum mechanics and gravity.   The next question for the scientific world was, logically, whether the radiation emitted by the black hole preserves, even in a scrambled form, the information that came with the ingestion of matter. For many years Hawking did not believe so, and in 1997 he proposed, characteristically for him, a bet (the Thorne–Hawking–Preskill bet). In 2004 Hawking updated his own theory, stating that the black hole's event horizon is not really a "firewall" but rather an "apparent horizon" that enables energy and information to escape (from the quantum theory standpoint), thus declaring himself the loser of the bet. Moreover, he considered that he had thus corrected the biggest mistake of his scientific life. Neither Kip Thorne, who sided with him in the bet against John Preskill, nor half of the scientific world seems convinced by this update today, two years after Hawking's death. In the absence of solid experimental evidence (which, among other things, would support a quantum theory of gravity), the question of whether and how information leaks from a black hole (through Hawking radiation) remains open.
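For reference, the two quantities discussed above have compact closed forms. The Bekenstein–Hawking entropy grows with the event-horizon area A, while the Hawking temperature falls as the mass M grows, which is why astrophysical black holes are extremely cold and their evaporation is far too faint to have been observed so far.

```latex
S_{\mathrm{BH}} = \frac{k_B c^3 A}{4 \hbar G}
\qquad\qquad
T_H = \frac{\hbar c^3}{8 \pi G M k_B}
```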
    Read more >

    thumbnail

    Web Browser Security: From Netscape Navigator to Microsoft Edge

    Marius Marinescu - CTO
    The Internet has become an intrinsic part of our everyday life, whether you are interested in the threats it poses from a cybersecurity point of view or are simply enjoying the many advantages it offers. Not so long ago, though, you had to be a visionary to imagine the power it was going to hold in the future. Microsoft wanted to get into the browser game as soon as possible after Netscape Communications Corporation became the web browser industry leader, a little after the release of its flagship browser, Netscape Navigator, in October 1994. [separator] Soon after, Microsoft licensed from Spyglass Inc. the Mosaic software that would later be used as the basis for the first version of Internet Explorer. Spyglass was an Internet software company, founded by students at the Illinois Supercomputing Center, that managed to develop one of the earliest browsers for navigating the web. They waited an entire year to go public after they began distributing their software and making up to $7 million out of it, which happened exactly on this day, 25 years ago.   Microsoft developed the functionality of the Internet Explorer browser and embedded it in the core Windows operating system for the better part of the last 25 years. To this day they are still providing the old Windows Internet Explorer 11 (the latest supported version) with security patches, but they are replacing it on newer operating systems with their own Microsoft Edge browser, which in turn they are replacing this year with a brand new Microsoft Edge browser. Confusing, right? The main difference between the old Edge browser and the new Edge browser is that the latter is based on Google’s Chromium web engine and has nothing to do with Microsoft’s old code-base.   But until the new Edge browser becomes the default choice on Microsoft operating systems, let’s take a look at the current Edge browser and its relationship with the old Internet Explorer. The already „old” Microsoft Edge has more in common with Internet Explorer than you might think, especially when it comes to security flaws.   Given that the number of vulnerabilities found in Edge is far below that of Internet Explorer, it's reasonable to say Edge looks like a more secure browser. But is Edge really more secure than Internet Explorer? According to a Microsoft blog post from 2015, the software giant's Edge browser, an exclusive for Windows 10, is said to have been designed to "defend users from increasingly sophisticated and prevalent attacks."   In doing that, Edge scrapped older, insecure, or flawed plugins or frameworks, like ActiveX or Browser Helper Objects. That already helped cut a number of possible drive-by attacks traditionally used by hackers. EdgeHTML, which powers Edge's rendering engine, is a fork of Trident, which still powers Internet Explorer.   However, it's not clear how much of Edge's code is still based on old Internet Explorer code. When asked, Microsoft did not give much away. They said that "Edge shares a universal code base across all form factors without the legacy add-on architecture of Internet Explorer. Designed from scratch, Microsoft does selectively share some code between Edge and Internet Explorer, where it makes sense to do so."   
    Many security researchers are saying that overlapping libraries are where you get vulnerabilities that aren't specific to either browser, because when you're working on a project as large as a major web browser, it's highly unlikely that you would throw out all the project-specific code and the underlying APIs that support it. There are a lot of APIs that the web browser uses that will still be common between the browsers. If you load Microsoft Edge and Internet Explorer on a system, you will notice that both of them load a number of overlapping DLLs.   The big question is how much of that Internet Explorer code remains in Edge and, crucially, whether any of that code has any connection to the overlap of flaws found in both browsers that poses a risk to Edge users. The bottom line is that it's hard, if not impossible, to say whether one browser is more or less secure than another.   A "critical" patch, which fixes the most severe of vulnerabilities, sits on a moving scale and has to take into account the details of the flaw, as well as whether it's being exploited by attackers. With an unpredictable number of flaws found each month, coupled with their severity ratings, a browser's security worth can vary month by month.   As history has shown us, in the last 5 years the Edge browser had no fewer than 615 security vulnerabilities, and Internet Explorer almost double that – 1,030.   Microsoft's decision to adopt the Chromium open-source code to power its new Edge browser could mean a sooner-than-expected end of support for Internet Explorer and the end of support for the code-base shared with the „old” Edge browser. And that’s a good thing for the security of users who only use the browser provided by the operating system itself (7.76% Microsoft Edge, 5.45% Internet Explorer, as of April 2020).
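The overlapping-DLL observation above is easy to check for yourself. Below is a small sketch using the psutil Python package that lists the module paths two processes have mapped and prints the intersection; the process names are assumptions for a Windows 10 machine running both browsers and may need adjusting.

```python
import psutil

def loaded_dlls(process_name: str) -> set[str]:
    """Collect file paths of all DLLs mapped into processes with the given name."""
    paths: set[str] = set()
    for proc in psutil.process_iter(["name"]):
        if (proc.info["name"] or "").lower() != process_name.lower():
            continue
        try:
            paths.update(
                m.path.lower()
                for m in proc.memory_maps()
                if m.path.lower().endswith(".dll")
            )
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            pass                  # some processes cannot be inspected without admin rights
    return paths

# Assumed process names: the Chromium-based Edge and Internet Explorer.
shared = loaded_dlls("msedge.exe") & loaded_dlls("iexplore.exe")
for dll in sorted(shared):
    print(dll)
```

Many of the paths such a script prints are common Windows system libraries, the shared lower layer the researchers quoted above are talking about.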
    Read more >

    thumbnail

    Siri Shortcuts: Hey, Siri! Watch Out For Scareware!

    Cristian Gal - CSO
    Some of us can’t imagine life without Siri or another virtual assistant to help, guide and save time throughout the day. Even though it has so many advantages, the fact that, in order to work properly, it must always be listening raises serious privacy concerns. [separator] The first step that led to the creation of today’s speaking devices was an educational toy named the Speak & Spell, announced back in 1978 by Texas Instruments. It offered a number of word games, similar to hangman, and a spelling test. What was revolutionary about it was its use of a voice synthesis system that electronically simulated the human voice.  

    The system was created as an offshoot of the pioneering research into speech synthesis developed by a team that included Paul Breedlove as the lead engineer. Breedlove was the one who came up with the idea of a learning aid for spelling. Breedlove’s plan was to build upon bubble memory, another TI research effort, and as such it involved an impressive technical challenge: the device should be able to speak the spelling word out loud.

    The team analyzed several options regarding how to use the new technology, and the winner was this $50 toy idea.

    With Apple’s introduction of iOS 12 for all their supported mobile devices came a powerful new utility for the automation of common tasks called Siri Shortcuts. This new feature can be enabled via third-party developers in their apps, or custom built by users downloading the Shortcuts app from the App Store. Once downloaded and installed, it grants users the power of scripting to perform complex tasks on their personal devices.   Siri Shortcuts can be a useful tool for both users and app developers who wish to enhance the level of interaction users have with their apps. But this access can potentially also be abused by malicious third parties. According to X-Force IRIS research, there are security concerns that should be taken into consideration when using Siri Shortcuts.   For instance, Siri Shortcuts can be abused for scareware, a pseudo-ransom campaign that tries to trick potential victims into paying a criminal by convincing them their data is in the hands of a remote attacker. Using native shortcut functionality, a script could be created to transmit ransom demands to the device’s owner using Siri’s voice. To lend more credibility to the scheme, attackers can automate data collection from the device and have it send back the user’s current physical address, IP address, contents of the clipboard, stored pictures and videos, contact information and more. This data can be displayed to the user to convince them that an attacker can make use of it unless they pay a ransom.   To move the user to the ransom payment stage, the shortcut could automatically access the Internet, browsing to a URL that contains payment information via cryptocurrency wallets, and demand that the user pay up or see their data deleted or exposed on the Internet.   Apple prefers quick access over device security for Siri, which is why the iOS default settings allow Siri to bypass the passcode lock. However, allowing Siri to bypass the passcode lock could allow a thief or hacker to make phone calls, send texts, send e-mails, and access other personal information without having to enter the security code first.   There is always a balance that must be struck between security and usability. Users and software developers must choose how much security-related inconvenience they are willing to endure in order to keep their devices safe versus how quickly and easily they want to be able to use them.   Whether you prefer instant access to Siri without having to enter a passcode is completely up to you. In some cases, while you're in the car, for example, driving safely is more important than data security. So, if you use your iPhone in hands-free mode, keep the default option, allowing the Siri passcode bypass.   As the Siri feature becomes more advanced and the number of data sources it taps into increases, the data security risk of the screen lock bypass may also increase. For example, if developers tie Siri into their apps in the future, Siri could provide a hacker with financial information if a Siri-enabled banking app is running and logged in with cached credentials and the hacker asks Siri the right questions.
    Read more >

    thumbnail

    SSL/TLS Vulnerabilities Leave Room for Security Breaches

    Marius Marinescu - CTO
    Working with cybersecurity and complex architectures in the IT field, we cannot appreciate enough the unprecedented security work of Netscape Communications Corporation. Besides developing Navigator, the browser that would change the way the Internet was used by the masses, it also pioneered the Secure Sockets Layer (SSL) protocol that enabled privacy and consumer protection. [separator] The underlying technology used for their browsers at that time, Navigator and Communicator, still powers today’s security standard, Transport Layer Security (TLS).   Back in 1996, the Washington Post published an article in which they speculated that Netscape might one day turn into a challenge for Microsoft, due to the fact that the software startup was growing very fast. It seems they were right since, years later, the source code used for Netscape Navigator 4.0 would lead to the creation of Mozilla and its Firefox browser. This is one of the best alternatives to Google Chrome which, in 2016, managed to dethrone Internet Explorer, the browser created by Microsoft. Although all modern browsers use the SSL and TLS protocols pioneered by Netscape, these protocols have had their fair share of vulnerabilities over the years. So, remember that using the latest browser, without any other security solution, doesn’t mean that you are protected against the latest attacks. Here are some of the most prominent attacks involving breaches of the SSL/TLS protocols that have surfaced in recent years:
    POODLE
    The Padding Oracle On Downgraded Legacy Encryption (POODLE) attack was published in October 2014 and exploits two aspects: the fact that some servers/clients still support SSL 3.0 for interoperability and compatibility with legacy systems, and a vulnerability within SSL 3.0 that is related to block padding. The client initiates the handshake and sends a list of supported SSL/TLS versions. An attacker intercepts the traffic, performing a man-in-the-middle (MITM) attack, and impersonates the server until the client agrees to downgrade the connection to SSL 3.0. The SSL 3.0 vulnerability is in the Cipher Block Chaining (CBC) mode. Block ciphers require blocks of fixed length. If data in the last block is not a multiple of the block size, extra space is filled by padding. The server ignores the content of the padding. It only checks whether the padding length is correct and verifies the Message Authentication Code (MAC) of the plaintext. That means that the server cannot verify whether anyone modified the padding content. An attacker can decipher an encrypted block by modifying padding bytes and watching the server response. It takes on average 256 SSL 3.0 requests to decrypt a single byte, because roughly once every 256 requests the server will accept the modified value. The attacker does not need to know the encryption method or key. Using automated tools, an attacker can retrieve the plaintext character by character. This could easily be a password, a cookie, a session token or other sensitive data.
    BEAST
    The Browser Exploit Against SSL/TLS (BEAST) attack was disclosed in September 2011. It applies to SSL 3.0 and TLS 1.0, so it affects browsers that support TLS 1.0 or earlier protocols. An attacker can decrypt data exchanged between two parties by taking advantage of a vulnerability in the implementation of the Cipher Block Chaining (CBC) mode in TLS 1.0. This is a client-side attack that uses the man-in-the-middle technique. The attacker uses MITM to inject packets into the TLS stream. 
    This allows them to guess the Initialization Vector (IV) used with the injected message and then simply compare the results to those of the block that they want to decrypt.
    CRIME
    The Compression Ratio Info-leak Made Easy (CRIME) vulnerability affects TLS compression. The compression method is included in the Client Hello message and it is optional: you can establish a connection without compression. Compression was introduced to SSL/TLS to reduce bandwidth, and DEFLATE is the most common compression algorithm used. One of the main techniques used by compression algorithms is to replace repeated byte sequences with a pointer to the first instance of that sequence. The bigger the repeated sequences, the higher the compression ratio. All the attacker has to do is inject different characters and then monitor the size of the response. If the response is shorter than the initial one, the injected character is contained in the cookie value and so it was compressed. If the character is not in the cookie value, the response will be longer. Using this method an attacker can reconstruct the cookie value using the feedback that they get from the server.
    BREACH
    The Browser Reconnaissance and Exfiltration via Adaptive Compression of Hypertext (BREACH) vulnerability is very similar to CRIME, but BREACH targets HTTP compression, not TLS compression. This attack is possible even if TLS compression is turned off. An attacker forces the victim’s browser to connect to a TLS-enabled third-party website and monitors the traffic between the victim and the server using a man-in-the-middle attack.
    Heartbleed
    Heartbleed was a critical vulnerability found in the heartbeat extension of the popular OpenSSL library. This extension is used to keep a connection alive as long as both parties are still there. The client sends a heartbeat message to the server with a payload that contains data and the size of the data (plus padding). The server must respond with the same heartbeat request, containing the data and the size of the data that the client sent. The Heartbleed vulnerability was based on the fact that, if the client sent a false data length, the server would respond with the data received from the client plus random data from its memory, to meet the length requirement specified by the sender. Leaking unencrypted data from server memory can be disastrous. There have been proof-of-concept exploits of this vulnerability in which the attacker would get the private key of the server. This means that an attacker would be able to decrypt all the traffic to the server. Server memory may contain anything: credentials, sensitive documents, credit card numbers, emails, etc.
    Bleichenbacher
    This relatively new cryptographic attack can break encrypted TLS traffic, allowing attackers to intercept and steal data previously considered safe and secure. This downgrade attack works even against the latest version of the TLS protocol, TLS 1.3, released in 2018 and considered to be secure. The attack is a variation of the original Bleichenbacher oracle attack and represents yet another way to break RSA PKCS#1 v1.5, the most common RSA configuration used to encrypt TLS connections nowadays. Besides TLS, this new Bleichenbacher attack also works against Google's new QUIC encryption protocol. The attack leverages a side-channel leak via cache access timings of these implementations in order to break the RSA key exchanges of TLS implementations. 
    Even the newer TLS 1.3 protocol, where RSA usage has been kept to a minimum, can be downgraded in some scenarios to TLS 1.2, where the new Bleichenbacher attack variation works.   In most cases, the best way to protect yourself against SSL/TLS-related attacks is to disable older protocol versions. This is even a standard requirement for some industries. For example, June 30, 2018, was the deadline for disabling support for SSL and early versions of TLS (up to and including TLS 1.0) according to the PCI Data Security Standard. The Internet Engineering Task Force (IETF) has released advisories concerning the security of SSL, and formal deprecation of TLS 1.0 and 1.1 by the IETF is expected soon.
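The compression side channel behind CRIME and BREACH described above is easy to reproduce locally. The hypothetical page body, cookie value and guesses below are made up for illustration; the point is only that a guess which repeats part of the secret compresses into fewer bytes than one that does not.

```python
import zlib

# Hypothetical response fragment containing a secret the attacker wants to recover.
SECRET = "Set-Cookie: session=d41d8cd98f00b204e9800998ecf8427e"

def compressed_size(injected: str) -> int:
    # Attacker-controlled text and the secret share one compression context,
    # just as they do with TLS-level (CRIME) or HTTP-level (BREACH) compression.
    page = f"<p>search results for {injected}</p>\n{SECRET}\n"
    return len(zlib.compress(page.encode()))

correct_guess = "session=d41d8cd98f00b204"   # matches a long prefix of the secret
wrong_guess   = "session=0f1e2d3c4b5a6978"   # matches only "session="
print(compressed_size(correct_guess))        # a few bytes smaller...
print(compressed_size(wrong_guess))          # ...than this one: that gap is the oracle
```

A real attack repeats this measurement one character at a time over many requests, which is why turning compression off removes the signal entirely.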
    Read more >

    thumbnail

    Anonymous’ Hacking Tactics – Revealed In The Attack On Vatican

    Marius Marinescu - CTO
    The Los Angeles Times reported that Father Leonard Boyle was working to put the Vatican’s Library on the World Wide Web through a site funded in part by IBM, “bringing the computer to the Middle Ages and the Vatican library to the world.” Boyle computerized the library’s catalog and placed manuscripts and paintings on the website. Today, thousands of manuscripts and incunabula have been digitized and are publicly available on the Vatican Library website. A number of other offerings are available, including images and descriptions of the Vatican’s extensive numismatic collection that dates back to Roman times. [separator] The Vatican’s digital presence soon caught the hackers’ attention, and in August 2011 the elusive hacker movement known as Anonymous launched a cyber-attack against it. Although the Vatican has seen its fair share of digital attacks over the years, what makes this particular one special is the fact that this was the first Anonymous attack to be identified and tracked from start to finish by security researchers, providing a rare glimpse into the recruiting, reconnaissance and warfare tactics used by the shadowy hacking collective.   The campaign against the Vatican, which did not receive wide attention at the time, involved hundreds of people, some with hacking skills and some without. A core group of participants openly drummed up support for the attack using YouTube, Twitter and Facebook. Others searched for vulnerabilities on a Vatican Web site and, when that failed, enlisted amateur recruits to flood the site with traffic, hoping it would crash.   Anonymous, which first gained widespread notice with an attack on the Church of Scientology in 2008, has since carried out hundreds of increasingly bold strikes, taking aim at perceived enemies including law enforcement agencies, Internet security companies and opponents of the whistle-blower site WikiLeaks.   The group’s attack on the Vatican was confirmed by the hackers, and it may be the first end-to-end record of a full Anonymous attack. The attack was called “Operation Pharisee”, in a reference to the sect that Jesus called hypocrites. It was initially organized by hackers in South America and Mexico before spreading to other countries, and it was timed to coincide with Pope Benedict XVI’s visit to Madrid in August 2011 for World Youth Day, an annual international event that regularly attracts more than a million young Catholics.   Hackers initially tried to take down a website set up by the church to promote the event, handle registrations and sell merchandise. Their goal – according to YouTube messages delivered by an Anonymous figure in a Guy Fawkes mask – was to disrupt the event and draw attention.   The hackers spent weeks spreading their message through their own website and social media channels like Twitter and Flickr. Their Facebook page encouraged volunteers to download free attack software so that they might join the attack. It took the hackers 18 days to recruit enough people. Then the reconnaissance began. A core group of roughly a dozen skilled hackers spent three days poking around the church’s World Youth Day site looking for common security holes that could let them inside. Probing for such loopholes used to be tedious and slow, but the advent of automated tools made it possible for hackers to do this around the clock.   In this case, the scanning software failed to turn up any gaps. So, the hackers turned to a brute-force approach – a DDoS attack. 
Even unskilled supporters could take part in this from their computers or smartphones. Over the course of the campaign’s final two days, Anonymous enlisted as many as a thousand people to download attack software, or directed them to custom-built websites that let them participate using their cellphones. Visiting a particular web address caused the phones to instantly start flooding the target website with hundreds of data requests each second, with no special software required.   On the first day, the denial-of-service attack resulted in 28 times the normal traffic to the church site, rising to 34 times the next day. Hackers involved in the attack, who did not identify themselves, said, through a Twitter account associated with the campaign, that the two-day effort succeeded in slowing the site’s performance and making the page unavailable “in several countries”. Anonymous moved on to other targets, including an unofficial site about the pope, which the hackers were briefly able to deface.   In the end, the Vatican’s defenses held up because, unlike other hacker targets, it invested in the infrastructure needed to repel both break-ins and full-scale assaults, using some of the best cybersecurity technology available at the time. Researchers who have followed Anonymous say that despite its lack of success in this and other campaigns, their attacks show the movement is still evolving and, if anything, emboldened.
    Read more >

    thumbnail

    Fortran

    Stefan Iliescu - CDS
    ”Modern Fortran is a powerful and flexible programming language that constitutes the foundation of high-performance computing for research and science. Its powerful parallelization capabilities and its low-level machine learning and deep learning libraries make it perfectly suited for the large-scale simulation of physical systems, to the detriment of the C language. But history also gives us another perspective on the competition between Fortran and C: the code was passed on to students, who found Fortran much easier to learn than C. Given the long history of Fortran, it is no surprise that a large amount of legacy code in physics is written in Fortran.”  Ștefan Iliescu - Chief Data Scientist at Metaminds.    
    Read more >

    thumbnail

    WorldWideWeb

    Cristian Gal - CSO
    31 years ago, Tim Berners-Lee wrote a proposal for "a large hypertext database with typed links". That proposal turned into what we know today as the World Wide Web. "His work paved the way for a brave new hyper-connected world where, be it against disinformation or to ward off malware, protecting is caring. Cyber security is a matter of responsibility.” — Petru Cristian Gal, Security Solutions Team Leader at Metaminds.
    Read more >

    thumbnail

    Belady Anomaly

    Stefan Iliescu - CDS
    ”Usually, if you increase the number of frames allocated to a process in virtual memory, the chances of getting fewer page faults increase. Sometimes the opposite happens, and the phenomenon is called Belady's Anomaly. This phenomenon is experienced, to a greater or lesser extent, by page replacement algorithms such as First In First Out (FIFO), the Second Chance algorithm and Random Page Replacement. Although algorithms that do not suffer from this anomaly are also in use, such as LRU or Optimal Page Replacement, which follow the stack algorithm property, the anomaly is still a topic of interest for research.” Ștefan Iliescu - Chief Data Scientist at Metaminds.
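The anomaly is easy to reproduce with a few lines of Python. The sketch below simulates FIFO page replacement on the classic reference string used to illustrate Belady's anomaly: with 3 frames it incurs 9 page faults, and with 4 frames it incurs 10, so adding memory makes things worse.

```python
from collections import deque

def fifo_page_faults(reference_string, frame_count):
    """Simulate FIFO page replacement and count the page faults."""
    frames = deque()               # oldest resident page sits at the left end
    faults = 0
    for page in reference_string:
        if page in frames:
            continue               # page hit: nothing to do under FIFO
        faults += 1
        if len(frames) == frame_count:
            frames.popleft()       # evict the page that has been resident longest
        frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]   # classic Belady reference string
print(fifo_page_faults(refs, 3))              # 9 faults with 3 frames
print(fifo_page_faults(refs, 4))              # 10 faults with 4 frames
```

Running the same reference string through LRU would show the fault count never increasing as frames are added, which is the stack property mentioned in the quote.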
    Read more >

    thumbnail

    HP-41C Pocket Calculator

    Marius Marinescu - CTO
    39 years ago, NASA demonstrated the power of being prepared for any type of situation: by installing specific software on the HP-41C pocket calculator, the astronauts on the first Space Shuttle flights were able to calculate the exact angle at which they needed to re-enter the Earth's atmosphere. ”Nowadays a mobile phone has roughly 5.5 million times more processing power than that pocket calculator, and so does the malware. Not that you typically need to re-enter the Earth's atmosphere on a Monday morning using your shiny new mobile phone, but better to keep it safe than sorry.” Marius Marinescu, Chief Technology Officer at Metaminds.
    Read more >