meta minds


Brief History Of Artificial Intelligence

Stefan Iliescu - CDS
July 2


Intro

AI has a surprisingly long history, although many of us confuse this history with the popularity AI has enjoyed in recent years. The main reason for its popularity is that any smartphone nowadays is far more powerful than a supercomputer of the '70s. Furthermore, we might wrongly believe that AI is a direct derivative of computer programs. In fact, the term AI describes the ability of a machine to emulate the cognitive functions associated with the human mind, such as learning and problem-solving.

The history of AI began in antiquity, with man imagining artificial beings endowed with intelligence; continued with classical philosophy, which characterized the process of human thought as the mechanical manipulation of symbols; and saw its first concrete achievements in early mobile automatons - the ancestors of robots. Simply put, we can state that the history of AI overlaps the history of cognitive science.

Along with man's preoccupation with understanding his own way of thinking, there was also an ardent curiosity to transfer this "modus operandi" to machines. The invention of the computer in the '40s gave a unique boost to AI's development through the opportunity man glimpsed in it: to build an electronic brain. Our brief incursion into the history of AI comprises three short articles:

- Early contributors
- The legacy of Alan Turing and John von Neumann
- Top achievements.

As the purpose of a short story is to arouse readers' curiosity to learn more, it is worth mentioning that some of the early theories are still the subject of debate today. Likewise, many technical achievements, such as mobile automatons and robots, are enjoying a second youth through extended versions or the creation of more complex scenarios.

BRIEF HISTORY OF ARTIFICIAL INTELLIGENCE, PART ONE - "EARLY CONTRIBUTORS"

In this first article, we will focus on essential achievements in the field of AI in the pre-computer age.
The dominant research method of the time was to look to nature for ideas for solving hard problems. In the absence of an understanding of how natural systems function, the research could only be experimental. So the most daring researchers approached the creation of mobile automatons (pre-robots) as a first attempt to create artificial intelligence.

Grey Walter's "Tortoise"

Born in the United States but educated in England, Walter failed to obtain a research fellowship in Cambridge and pursued neurophysiological research in various places around the world. Heavily influenced by the work of the Russian physiologist Ivan Pavlov and of Hans Berger (the inventor of the electroencephalograph for measuring electrical activity in the brain), Walter made several discoveries in the field of brain topography using his own version of the EEG machine. The most notable was the introduction of triangulation as a method of locating the strongest alpha waves within the occipital lobe, thus facilitating the detection of brain tumors or of lesions responsible for epilepsy. He pioneered EEG-based brain topography with a multitude of spiral-scan CRTs coupled to high-gain amplifiers.

Walter remains famous as an early contributor to the AI field mainly for building some of the first mobile automatons in the late '40s, named tortoises (after the tortoise in Alice in Wonderland) because of their slow speed and their shape. These battery-powered automatons were prototypes built to test his theory that a small number of cells can induce complex behavior and choice. As a very simple model of the nervous system, they implemented a two-neuron architecture by incorporating only two motors, two relays, two valves, two condensers, and one sensor (ELSIE had a light sensor and ELMER a touch sensor).

ELSIE scanned the surroundings continuously with its rotating photoelectric cell until a light source was detected. If the light was too bright, it moved away.
Otherwise, ELSIE moved toward the light source. ELMER explored the surroundings as long as it didn't encounter any obstacles; otherwise, ELMER retreated after the touch sensor registered a contact. Both versions of the tortoise moved toward an electric charging station when the battery level was low.

Walter noted that the automatons "explore their environment actively, persistently, systematically, as most animals do." This is what happened most of the time, except when a light source was attached to ELSIE's nose. The automaton started "flickering, twittering and jigging like a clumsy narcissus," and Walter concluded that this was a sign of self-awareness. Even though many scientists today believe that robots will not achieve self-awareness, Walter's experiment succeeded in proving that complex behaviors can be generated using only a few components and that biological principles can be applied to robots.

Subsequent developments, some of which remained only in a theoretical phase, promised substantial improvements in the direction of intelligent behavior, with Walter trying to add "learning" skills, even if only in a primitive form such as Pavlovian conditioning. For example, with an auditory sensor incorporated, blowing a whistle immediately before contact between ELMER and an obstacle would cause ELMER to subsequently perform the obstacle-avoidance maneuver before contact occurred - whenever it "heard" the whistle. Although Walter appears to have realized this attempt, the echo in the scientific world of the time seems to have gone unnoticed.

Johns Hopkins' "Beast"

Another well-known mobile automaton is the "Beast", a '60s project of a team of engineers from the Johns Hopkins University Applied Physics Laboratory, including Ron McConnell (Electrical Engineering) and Edwin B. Dean, Jr. (Physics).
With a height of half a meter, a diameter of over 200 cm, and a weight of almost 50 kilograms, the "Beast" was built to perform only two tasks: to explore its surroundings and to survive on its own. Initially equipped with physical switches, the "Beast" moved "freely", following the white walls of the laboratory and avoiding the obstacles it encountered. When the battery level was low, the "Beast" "looked for" a black wall socket and plugged itself in for power. Without a central processing unit, its control circuitry consisted of multiple transistor modules that controlled analog voltages; three types of transistors allowed three classes of tasks:

- Making a decision when a sensor was activated, by emulating Boolean logic
- Specifying a period in which to do something, by creating timing gates
- Controlling the pressure for the automaton's arm and the charging mechanism, by using power transistors.

A second version received a photoelectric cell in addition to an improved sonar system. With the help of two ultrasonic transducers, the "Beast" could now determine distance, location within the perimeter, and obstructions along its path - thus exhibiting significantly more complex "behavior" than Walter's tortoises. Feats such as stopping, slowing down, or bypassing an obstruction, or recognizing doors, stairs, installation pipes, hanging cables, or people and taking the appropriate actions are perhaps the most significant technical achievement of the pre-computer age.

In his response to Bill Gates, who predicted in 2008 that the "next" hot field would be robotics, McConnell humorously said of their work from the '60s: "The robot group built two functioning prototypes that roamed and "lived" in the hallways of the lab, avoiding hazards such as open stairwells and doors, hanging cables and people while searching for food in the form of AC power on the walls to recharge their batteries. They used the senses of touch, hearing, feel and vision.
Programming consisted of patch cables on patch boards connecting hand-built logic circuits to set up behavior for avoidance, escape, searching and feeding. No integrated circuits, no computers, no programming language. With a 3-hour battery life, the second prototype survived over 40 hours on one test before a simple mechanical failure disabled it."

Ashby's "Mobile Homeostat"

Perhaps the most intriguing prototype to see the light of day before the computer age was the Homeostat[1], created in 1948 by W. Ross Ashby, Research Director at the Barnwood House Hospital in Gloucester, and presented at the Ninth Macy Conference on Cybernetics in 1952. The Homeostat contained four identical control switch-gear kits taken from WW2 bombs (with inputs, feedback, and magnetically driven, water-filled potentiometers), each transformed into an electro-mechanical artificial neuron. The purpose of this prototype was extremely ambitious for the time, namely to serve as a model for all types of behavior - by addressing all living functions.

During the presentation, the Homeostat was able to perform tasks that indicated some cognitive abilities, i.e., the ability to learn and adapt to the environment. But the approach was strange, to say the least: while other automata of the time exhibited a dynamic character by exploring the environment, the goal of the Homeostat was to reach a perfect state of balance. This approach was intended to support the author's principle of ultrastability and his Law of Requisite Variety. Based on the concept of "negative feedback," the Homeostat incrementally traversed the path between its current state and the final state of equilibrium, the steps representing the automaton's concrete responses to changes in the environment (which affected the state of equilibrium).
In detail, the Law of Requisite Variety (as the author called it) states that in order to counter the variety of disturbances coming from the external environment, a system needs a "goal-seeking" strategy and a comparably wide variety of possible responses. For the animal world, a final goal like "no goal" was equivalent to achieving immortality. The part of "cognitive intelligence" embedded in the automaton's activity was precisely this "goal-seeking" approach; from a technical standpoint, "its principle is that it uses multiple coils in a milliammeter & uses the needle movement to dip in a trough carrying a current, so getting a potential which goes to the grid of a valve, the anode of which provides an output current". But the audience was not very convinced by this principle, and, on the whole, the machine's activity could be classified as that of a "goal-less goal-seeking machine." It was Grey Walter who called the Homeostat a "Machina sopor" - a "fireside cat or dog which only stirs when disturbed, and then methodically finds a comfortable position and goes to sleep again" - in contrast to his own creation, the Tortoise, called "Machina speculatrix," which embodied the idea that "a typical animal propensity is to explore the environment rather than to wait passively for something to happen." It was later learned that Alan Turing had advised Ashby to implement a simulation on the ACE[2] computer instead of building a special machine.

However, the Homeostat enjoyed a significant comeback in the 1980s, when a team of cognitive researchers from the University of Sussex led by Margaret Boden created several practical robots incorporating Ashby's ultrastability mechanism. Boden was fascinated by the idea of modeling an autonomous goal-oriented creature, arguing that the future of cognitive science is one based on the Homeostat.
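Ashby's ultrastability mechanism lends itself to a compact simulation. The sketch below is a toy model of ours, not Ashby's circuit: a negative-feedback unit keeps an "essential variable" inside a viable band, and whenever a disturbance pushes it out of bounds, the unit makes a random step-change to its parameter and tries again until equilibrium is restored. All names, bounds, and dynamics here are illustrative assumptions.

```python
import random

# Toy model of Ashby's ultrastability (our own sketch, not his circuit):
# an "essential variable" must stay within a viable band; whenever it
# leaves the band, the unit randomly re-parameterizes itself and retries.

LOW, HIGH = -1.0, 1.0          # viable band for the essential variable

def step(state, gain, disturbance):
    """One negative-feedback update: the output opposes the current state."""
    return state + disturbance - gain * state

def ultrastable_run(disturbance, steps=200, seed=0):
    rng = random.Random(seed)
    state, gain = 0.0, rng.uniform(0.1, 2.0)
    resets = 0
    for _ in range(steps):
        state = step(state, gain, disturbance)
        if not (LOW <= state <= HIGH):    # essential variable out of bounds:
            gain = rng.uniform(0.1, 2.0)  # random step-change of the parameter,
            state = 0.0                   # then try again from rest
            resets += 1
    return state, resets

final, resets = ultrastable_run(disturbance=0.5)
print(f"settled at {final:.3f} after {resets} random re-parameterizations")
```

With a fixed disturbance, runs that draw an unsuitable gain simply re-wire until they land on one that brings the variable back into the band - the "goal-less goal-seeking" the audience found so strange.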
Conclusions

The cybernetics of the '60s is long gone, and the current possibilities of computer simulation are infinitely more capable than anything the geniuses of those times could have imagined or created - and within reach of any school student. Suffice it to say that the level of tropism of the Tortoises is equivalent to that of a simple bacterium, and the Beast equals the coordination ability of a large nucleated cell such as Paramecium, a bacterial hunter; or that what was then presented as continuous adaptation of responses to external stimuli is far from what we understand and have today as learning, supervised or unsupervised. But this evolution has not been merely the result of the appearance of computer technology and its fantastic development. As I mentioned in the introduction, the history of AI overlaps the history of cognitive science, so today's level of AI owes its achievements to multiple fields, including linguistics, psychology, philosophy, neuroscience, anthropology, and, of course, mathematics.

Simply put, even though in most cases they were judged successes, we can say that these mobile automatons of the pre-computer era were experiments conducted before theoretical research rather than alongside it. The rudimentary means of construction, the lack of a common language in the field, and the mismatch between the models and the implementation mechanisms often made the researchers of the time doubt each other's achievements[3] - unimaginable today, when everyone understands that an autonomous car can anticipate complex accidents better than all the drivers involved, or that a software robot crushes the world chess champion without training against anyone but itself.

Footnotes:

1. In biology, homeostasis is the state of steady internal, physical, and chemical conditions maintained by living systems.
2. The Automatic Computing Engine (ACE) was an early British electronic serial stored-program computer designed by Alan Turing.
3. With regard to Ashby's Homeostat, the cyberneticist Julian Bigelow famously asked "whether this particular model has any relation to the nervous system? It may be a beautiful replica of something, but heaven only knows what."

References:

1. Steve Battle - "Ashby's Mobile Homeostat"
2. Margaret A. Boden - "Mind as Machine: A History of Cognitive Science"
3. Margaret A. Boden - "Creativity & Art: Three Roads to Surprise"
4. Stefano Franchi, Francesco Bianchini - "The Search for a Theory of Cognition: Early Mechanisms and New Ideas"
5. http://cyberneticzoo.com/cyberneticanimals/1962-5-hopkins-beast-autonomous-robot-mod-ii-sonarvision-jhu-apl-american/
6. http://www.rutherfordjournal.org/article020101.html

The Hawking Radiation: Passport to Escape From a Black Hole

Stefan Iliescu - CDS
July 2


"My goal is simple. It is a complete understanding of the universe, why it is as it is, and why it exists at all," said Stephen Hawking, the famous theoretical physicist and cosmologist of the 20th century. The quote emphasizes that he was not one to settle for an easy challenge, a trait we hope lies at the basis of every individual on our team. The task he set himself was too large for one individual to complete in a lifetime, but, even so, the renowned British physicist accomplished substantial parts of it, leading the world toward understanding bits of the universe.

Stephen Hawking devoted all his resources to the study of black holes, individually and in collaboration with other acclaimed researchers. His debut took place in 1970 when, together with Sir Roger Penrose, he established the theoretical basis (the Penrose-Hawking singularity theorems) for the formation of black holes. Their prediction was confirmed by recent observational experiments (2015-2019) at the Laser Interferometer Gravitational-Wave Observatory (LIGO), which detected gravitational waves emitted by colliding (or merging) black holes.

The same theoretical basis predicted the expansion of a black hole (which translates into an increase in the area of the black hole's event horizon) as it absorbs matter and energy from its vicinity. According to the second law of thermodynamics, the entropy of the black hole can only increase, and, as entropy is an energy-dependent function associated with a temperature, scientists wanted to know how high the temperature of a black hole can go. Here comes perhaps his most significant contribution to the field so far, namely Hawking radiation, which may be responsible for keeping the temperature below a "certain limit". He uncovered that black holes, once thought to be static, unchanging, and defined only by their mass, charge, and spin, are actually ever-evolving engines that emit radiation and evaporate over time.
Although this contribution has not yet been proven by any experiment - which is why Hawking did not win the Nobel Prize in his lifetime - it is seen by physicists in the field as the only widely recognized result supporting a unifying theory of quantum mechanics and gravity.

The next question for the scientific world was, logically, whether the radiation emitted by the black hole preserves the information that came in with the ingested matter, even in a scrambled form. For many years Hawking did not believe so, and in 1997 he proposed, characteristically for him, a bet (the Thorne-Hawking-Preskill bet). In 2004 Hawking updated his own theory, stating that the black hole's event horizon is not really a "firewall" but rather an "apparent horizon" that allows energy and information to escape (from the standpoint of quantum theory), thus declaring himself the loser of the bet. Moreover, he considered that he had thereby corrected the biggest mistake of his career in the field. Neither Kip Thorne, who was with him in the bet against John Preskill, nor half of the scientific world seems convinced by this update, today, two years after Hawking's death. In the absence of solid experimental evidence (which, among other things, would have to support a quantum theory of gravity), the question of whether and how information leaks out of a black hole (through Hawking radiation) remains open.
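The "certain limit" mentioned above has a concrete form. The standard textbook expression for the temperature Hawking associated with a black hole of mass M (a well-known result, not quoted in the original article) is:

```latex
T_H = \frac{\hbar c^{3}}{8 \pi G M k_B}
```

The temperature is inversely proportional to the mass, so a stellar-mass black hole is far colder than the cosmic microwave background, while an evaporating black hole radiates ever more fiercely as it shrinks.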

Web Browser Security: From Netscape Navigator to Microsoft Edge

Marius Marinescu - CTO
June 30


The Internet has become an intrinsic part of our everyday life, whether you are interested in the threats it poses from a cybersecurity point of view or simply enjoy the many advantages it offers. Not so long ago, though, you had to be a visionary to imagine the power it was going to hold. Microsoft wanted to get into the browser game as soon as possible after Netscape Communications Corporation became the web browser industry leader, shortly after the release of its flagship browser, Netscape Navigator, in October 1994.

Soon after, Microsoft licensed from Spyglass, Inc. the Mosaic software that would later be used as the basis for the first version of Internet Explorer. Spyglass was an Internet software company founded by students at the Illinois Supercomputing Center that managed to develop one of the earliest browsers for navigating the web. They waited an entire year after they began distributing their software - making up to $7 million from it - before going public, which happened exactly on this day, 25 years ago.

Microsoft developed the functionality of the Internet Explorer browser and embedded it in the core Windows operating system for the better part of the last 25 years. To this day they still provide the old Internet Explorer 11 (the latest supported version) with security patches, but on newer operating systems they are replacing it with their own Microsoft Edge browser, which in turn is being replaced this year with a brand-new Microsoft Edge browser. Confusing, right? The main difference between the old Edge browser and the new Edge browser is that the latter is based on Google's Chromium web engine and has nothing to do with Microsoft's old code base.

But until the new Edge browser becomes the default choice on Microsoft operating systems, let's take a look at the current Edge browser and its relationship with the old Internet Explorer.
The already "old" Microsoft Edge has more in common with Internet Explorer than you might think, especially when it comes to security flaws.

Given that the number of vulnerabilities found in Edge is far below that of Internet Explorer, it's reasonable to say Edge looks like a more secure browser. But is Edge really more secure than Internet Explorer? According to a Microsoft blog post from 2015, the software giant's Edge browser, an exclusive for Windows 10, is said to have been designed to "defend users from increasingly sophisticated and prevalent attacks."

To do that, Edge scrapped older, insecure, or flawed plugins and frameworks, like ActiveX or Browser Helper Objects. That alone cut off a number of possible drive-by attacks traditionally used by hackers. EdgeHTML, which powers Edge's rendering engine, is a fork of Trident, which still powers Internet Explorer.

However, it's not clear how much of Edge's code is still based on old Internet Explorer code. When asked, Microsoft did not give much away. They said that "Edge shares a universal code base across all form factors without the legacy add-on architecture of Internet Explorer. Designed from scratch, Microsoft does selectively share some code between Edge and Internet Explorer, where it makes sense to do so."

Many security researchers say that overlapping libraries are where you get vulnerabilities that aren't specific to either browser: when you're working on a project as large as a major web browser, it's highly unlikely that you would throw out all the project-specific code and the underlying APIs that support it. There are a lot of APIs the web browser uses that will still be common between the browsers. If you load Microsoft Edge and Internet Explorer on a system, you will notice that both of them load a number of overlapping DLLs.
The big question is how much of that Internet Explorer code remains in Edge, and crucially, whether any of that code is connected to the overlap of flaws found in both browsers and poses a risk to Edge users. The bottom line is that it's hard, if not impossible, to say whether one browser is more or less secure than another.

A "critical" patch, which fixes the most severe vulnerabilities, is a moving scale that has to consider the details of the flaw as well as whether it's being exploited by attackers. With an unpredictable number of flaws found each month, each with its own severity rating, a browser's security standing can vary month by month.

As history has shown us, in the last 5 years the Edge browser had no fewer than 615 security vulnerabilities, and Internet Explorer almost double that - 1,030.

Microsoft's decision to adopt the Chromium open-source code to power its new Edge browser could mean a sooner-than-expected end of support for Internet Explorer and an end of support for the code base shared with the "old" Edge browser. And that's a good thing for the security of users who only use the browser provided by the operating system itself (7.76% Microsoft Edge, 5.45% Internet Explorer, as of April 2020).

Siri Shortcuts: Hey, Siri! Watch Out For Scareware!

Cristian Gal - CSO
June 12


Some of us can't imagine life without Siri or another virtual assistant to help, guide, and save time throughout the day. Even though it has many advantages, the fact that, in order to work properly, it must always be listening raises serious privacy concerns.

The first step toward today's speaking devices was an educational toy named the Speak & Spell, announced back in 1978 by Texas Instruments. It offered a number of word games, similar to hangman, and a spelling test. What was revolutionary about it was its use of a voice synthesis system that electronically simulated the human voice.

The system was created as an offshoot of pioneering research into speech synthesis developed by a team that included Paul Breedlove as the lead engineer. Breedlove was the one who came up with the idea of a learning aid for spelling. His plan was to build upon bubble memory, another TI research effort, and as such it posed an impressive technical challenge: the device should be able to speak each spelling word out loud.

The team analyzed several options for using the new technology, and the winner was this $50 toy idea.

With Apple's introduction of iOS 12 for all supported mobile devices came a powerful new utility for automating common tasks, called Siri Shortcuts. This new feature can be enabled by third-party developers in their apps, or custom-built by users who download the Shortcuts app from the App Store. Once downloaded and installed, it grants the power of scripting to perform complex tasks on users' personal devices.

Siri Shortcuts can be a useful tool for both users and app developers who wish to enhance the level of interaction users have with their apps. But this access can potentially also be abused by malicious third parties. According to X-Force IRIS research, there are security concerns that should be taken into consideration when using Siri Shortcuts.

For instance, Siri Shortcuts can be abused for scareware, a pseudo-ransom campaign that tries to trick potential victims into paying a criminal by convincing them their data is in the hands of a remote attacker. Using native shortcut functionality, a script could be created to deliver ransom demands to the device's owner using Siri's voice. To lend more credibility to the scheme, attackers can automate data collection from the device and have it send back the user's current physical address, IP address, contents of the clipboard, stored pictures/videos, contact information, and more. This data can be displayed to the user to convince them that an attacker can make use of it unless they pay a ransom.

To move the user to the ransom payment stage, the shortcut could automatically access the Internet, browsing to a URL that contains payment information via cryptocurrency wallets, and demand that the user pay up or see their data deleted or exposed on the Internet.

Apple prefers quick access over device security for Siri, which is why the iOS default settings allow Siri to bypass the passcode lock.
However, allowing Siri to bypass the passcode lock could allow a thief or hacker to make phone calls, send texts, send e-mails, and access other personal information without having to enter the security code first.

There is always a balance to be struck between security and usability. Users and software developers must choose how much security-related inconvenience they are willing to endure to keep their devices safe versus how quickly and easily they want to be able to use them.

Whether you prefer instant access to Siri without having to enter a passcode is completely up to you. In some cases, while you're in the car, for example, driving safely is more important than data security. So, if you use your iPhone in hands-free mode, keep the default option, allowing the Siri passcode bypass.

As Siri becomes more advanced and the number of data sources it taps into increases, the data security risk of the screen lock bypass may also increase. For example, if developers tie Siri into their apps in the future, Siri could provide a hacker with financial information if a Siri-enabled banking app is running and logged in using cached credentials and the hacker asks Siri the right questions.

SSL/TLS Vulnerabilities Leave Room for Security Breaches

Marius Marinescu - CTO
June 9


Working as we do with cybersecurity and complex IT architectures, we cannot appreciate enough the unprecedented security work done by Netscape Communications Corporation. Besides developing Navigator, the browser that would change the way the masses used the Internet, it also pioneered the Secure Sockets Layer (SSL) protocol, which enabled privacy and consumer protection. The underlying technology used for its browsers of the time, Navigator and Communicator, still powers today's security standard, Transport Layer Security (TLS).

Back in 1996, The Washington Post published an article speculating that Netscape might one day become a challenge for Microsoft, given how fast the software startup was growing. It seems they were right since, years later, the source code of Netscape Navigator 4.0 would lead to the creation of Mozilla and its Firefox browser, one of the best alternatives to Google Chrome, which in 2016 managed to dethrone Internet Explorer, the browser created by Microsoft. Although all modern browsers use the SSL and TLS protocols pioneered by Netscape, these protocols have had their fair share of vulnerabilities over the years. So remember that using the latest browser, without any other security solution, doesn't mean you are protected against the latest attacks. Here are some of the most prominent attacks involving breaches of the SSL/TLS protocols that have surfaced in recent years:

POODLE

The Padding Oracle On Downgraded Legacy Encryption (POODLE) attack was published in October 2014 and exploits two things: the fact that some servers/clients still support SSL 3.0 for interoperability and compatibility with legacy systems, and a vulnerability within SSL 3.0 itself related to block padding. The client initiates the handshake and sends a list of supported SSL/TLS versions.
An attacker intercepts the traffic, performing a man-in-the-middle (MITM) attack, and impersonates the server until the client agrees to downgrade the connection to SSL 3.0.

The SSL 3.0 vulnerability lies in the Cipher Block Chaining (CBC) mode. Block ciphers require blocks of fixed length; if the data in the last block is not a multiple of the block size, the extra space is filled with padding. The server ignores the content of the padding: it only checks that the padding length is correct and verifies the Message Authentication Code (MAC) of the plaintext. That means the server cannot verify whether anyone modified the padding content. An attacker can decipher an encrypted block by modifying the padding bytes and watching the server's response. It takes a maximum of 256 SSL 3.0 requests to decrypt a single byte; once in every 256 requests, on average, the server will accept the modified value. The attacker does not need to know the encryption method or key. Using automated tools, an attacker can retrieve the plaintext character by character. This could easily be a password, a cookie, a session token, or other sensitive data.

BEAST

The Browser Exploit Against SSL/TLS (BEAST) attack was disclosed in September 2011. It applies to SSL 3.0 and TLS 1.0, so it affects browsers that support TLS 1.0 or earlier protocols. An attacker can decrypt data exchanged between two parties by taking advantage of a vulnerability in the implementation of the Cipher Block Chaining (CBC) mode in TLS 1.0. This is a client-side attack that uses the man-in-the-middle technique: the attacker uses MITM to inject packets into the TLS stream. This allows them to guess the Initialization Vector (IV) used with the injected message and then simply compare the results to those of the block they want to decrypt.

CRIME

The Compression Ratio Info-leak Made Easy (CRIME) vulnerability affects TLS compression. The compression method is included in the Client Hello message, and it is optional.
You can establish a connection without compression. Compression was introduced to SSL/TLS to reduce bandwidth; DEFLATE is the most common compression algorithm used. One of the main techniques used by compression algorithms is to replace repeated byte sequences with a pointer to the first instance of that sequence: the bigger the repeated sequences, the higher the compression ratio. All the attacker has to do is inject different characters and then monitor the size of the response. If the response is shorter than the initial one, the injected character is contained in the cookie value and so was compressed away; if the character is not in the cookie value, the response will be longer. Using this method, an attacker can reconstruct the cookie value from the feedback they get from the server.

BREACH

The Browser Reconnaissance and Exfiltration via Adaptive Compression of Hypertext (BREACH) vulnerability is very similar to CRIME, but BREACH targets HTTP compression, not TLS compression. This attack is possible even if TLS compression is turned off. An attacker forces the victim's browser to connect to a TLS-enabled third-party website and monitors the traffic between the victim and the server using a man-in-the-middle attack.

Heartbleed

Heartbleed was a critical vulnerability found in the heartbeat extension of the popular OpenSSL library. This extension is used to keep a connection alive as long as both parties are still there. The client sends a heartbeat message to the server with a payload containing data and the size of that data (plus padding); the server must respond with a heartbeat of its own, echoing back the data and the data size the client sent. The Heartbleed vulnerability stemmed from the fact that if the client sent a false data length, the server would respond with the data received from the client plus whatever random data from its memory was needed to meet the length requirement specified by the sender.
Leaking unencrypted data from server memory can be disastrous. There have been proof-of-concept exploits of this vulnerability in which the attacker obtained the private key of the server, which would allow them to decrypt all the traffic to that server. Server memory may contain anything: credentials, sensitive documents, credit card numbers, emails, etc.

Bleichenbacher

This relatively new cryptographic attack can break encrypted TLS traffic, allowing attackers to intercept and steal data previously considered safe and secure. This downgrade attack works even against the latest version of the TLS protocol, TLS 1.3, released in 2018 and considered to be secure. It is a variation of the original Bleichenbacher oracle attack and represents yet another way to break RSA PKCS#1 v1.5, the most common RSA configuration used to encrypt TLS connections today. Besides TLS, this new Bleichenbacher attack also works against Google's QUIC encryption protocol. The attack leverages a side-channel leak, via cache access timings, to break the RSA key exchanges of TLS implementations. Even TLS 1.3, where RSA usage has been kept to a minimum, can in some scenarios be downgraded to TLS 1.2, where the new Bleichenbacher variation works.

In most cases, the best way to protect yourself against SSL/TLS-related attacks is to disable older protocol versions. This is even a standard requirement for some industries: for example, June 30, 2018 was the deadline for disabling support for SSL and early versions of TLS (up to and including TLS 1.0) under the PCI Data Security Standard. The Internet Engineering Task Force (IETF) has released advisories concerning the security of SSL, and deprecation of TLS 1.0 and 1.1 by the IETF is expected soon.
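In practice, refusing the older versions is often a one-line configuration change. As a sketch, a client built with Python's standard-library `ssl` module (Python 3.7+) can decline anything below TLS 1.2 like this:

```python
import ssl

# A client context that refuses to negotiate anything below TLS 1.2,
# taking SSL 3.0 / TLS 1.0 downgrade tricks off the table entirely.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_2
```

Any handshake in which the server (or a man in the middle) offers only an older protocol version will now simply fail instead of silently downgrading. Web servers such as nginx and Apache expose the equivalent setting in their TLS configuration.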
Read more >

Anonymous’ Hacking Tactics – Revealed In The Attack On Vatican

Marius Marinescu- CTO
June 3

Anonymous’ Hacking Tactics – Revealed In The Attack On Vatican

The Los Angeles Times reported that Father Leonard Boyle was working to put the Vatican’s library on the World Wide Web through a site funded in part by IBM, “bringing the computer to the Middle Ages and the Vatican library to the world.” Boyle computerized the library’s catalog and placed manuscripts and paintings on the website. Today, thousands of manuscripts and incunabula have been digitized and are publicly available on the Vatican Library website, alongside other offerings such as images and descriptions of the Vatican’s extensive numismatic collection, which dates back to Roman times.

The Vatican’s digital presence soon caught hackers’ attention, and in August 2011 the elusive hacker movement known as Anonymous launched a cyber-attack against it. Although the Vatican has seen its fair share of digital attacks over the years, what makes this particular one special is that it was the first Anonymous attack to be identified and tracked from start to finish by security researchers, providing a rare glimpse into the recruiting, reconnaissance and warfare tactics used by the shadowy hacking collective.

The campaign against the Vatican, which did not receive wide attention at the time, involved hundreds of people, some with hacking skills and some without. A core group of participants openly drummed up support for the attack using YouTube, Twitter and Facebook. Others searched for vulnerabilities on a Vatican website and, when that failed, enlisted amateur recruits to flood the site with traffic, hoping it would crash.

Anonymous, which first gained widespread notice with an attack on the Church of Scientology in 2008, has since carried out hundreds of increasingly bold strikes, taking aim at perceived enemies including law enforcement agencies, Internet security companies and opponents of the whistle-blower site WikiLeaks.
The group’s attack on the Vatican was confirmed by the hackers, and it may be the first end-to-end record of a full Anonymous attack. The attack was called “Operation Pharisee” in a reference to the sect that Jesus called hypocrites. It was initially organized by hackers in South America and Mexico before spreading to other countries, and it was timed to coincide with Pope Benedict XVI’s visit to Madrid in August 2011 for World Youth Day, an annual international event that regularly attracts more than a million young Catholics.

Hackers initially tried to take down a website set up by the church to promote the event, handle registrations and sell merchandise. Their goal – according to YouTube messages delivered by an Anonymous figure in a Guy Fawkes mask – was to disrupt the event and draw attention.

The hackers spent weeks spreading their message through their own website and social media channels like Twitter and Flickr. Their Facebook page encouraged volunteers to download free attack software so that they might join the attack. It took the hackers 18 days to recruit enough people. Then the reconnaissance began: a core group of roughly a dozen skilled hackers spent three days poking around the church’s World Youth Day site looking for common security holes that could let them inside. Probing for such loopholes used to be tedious and slow, but the advent of automated tools made it possible for hackers to do this around the clock.

In this case, the scanning software failed to turn up any gaps, so the hackers turned to a brute-force approach – a DDoS attack – in which even unskilled supporters could take part from their computers or smartphones. Over the course of the campaign’s final two days, Anonymous enlisted as many as a thousand people to download attack software, or directed them to custom-built websites that let them participate using their cellphones.
Visiting a particular web address caused the phones to instantly start flooding the target website with hundreds of data requests each second, with no special software required.

On the first day, the denial-of-service attack resulted in 28 times the normal traffic to the church site, rising to 34 times the next day. Hackers involved in the attack, who did not identify themselves, said through a Twitter account associated with the campaign that the two-day effort succeeded in slowing the site’s performance and making the page unavailable “in several countries”. Anonymous then moved on to other targets, including an unofficial site about the pope, which the hackers were briefly able to deface.

In the end, the Vatican’s defenses held up because, unlike other hacker targets, it had invested in the infrastructure needed to repel both break-ins and full-scale assaults, using some of the best cybersecurity technology available at the time. Researchers who have followed Anonymous say that despite its lack of success in this and other campaigns, its attacks show the movement is still evolving and, if anything, emboldened.
Read more >

Fortran

Stefan Iliescu- CDS
May 8

Fortran

”Modern Fortran is a powerful and flexible programming language that constitutes the foundation of high-performance computing for research and science. Its powerful parallelization capabilities and low-level machine learning and deep learning libraries make it perfectly suited for the large-scale simulation of physical systems, to the detriment of the C language. But history also gives us another perspective on the competition between Fortran and C: the code was passed on to students, who found Fortran much easier to learn than C. Given the long history of Fortran, it is no surprise that a large amount of legacy code in physics is written in Fortran.” Ștefan Iliescu - Chief Data Scientist at Metaminds.
Read more >

WorldWideWeb

Cristian Gal- CSO
May 5

WorldWideWeb

31 years ago, Berners-Lee wrote a proposal for ”a large hypertext database with typed links”, which grew into what we know today as the World Wide Web. ”His work paved the way for a brave new hyper-connected world where, be it against disinformation or to ward off malware, protecting is caring. Cyber security is a matter of responsibility.” — Petru Cristian Gal, Security Solutions Team Leader at Metaminds.
Read more >

Belady Anomaly

Stefan Iliescu- CDS
March 24

Belady Anomaly

”Usually, if you increase the number of frames allocated to a process in virtual memory, you expect to receive fewer page faults. Sometimes the opposite happens, and this phenomenon is called Belady's Anomaly. It is experienced to a greater or lesser extent in page replacement algorithms such as First In First Out (FIFO), the Second Chance algorithm and Random Page Replacement. Although algorithms that do not suffer from this anomaly are in use too, such as Least Recently Used (LRU) or Optimal Page Replacement, which follow the stack algorithm property, the anomaly is still a topic of interest for research.” Ștefan Iliescu - Chief Data Scientist at Metaminds.
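The anomaly is easy to reproduce. A minimal FIFO page-replacement sketch, run on the classic textbook reference string, shows more frames producing more faults:

```python
from collections import deque

def fifo_page_faults(reference, frames):
    """Count page faults under FIFO replacement with the given frame count."""
    memory = deque()              # oldest resident page sits at the left
    faults = 0
    for page in reference:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()  # evict the page that entered memory first
            memory.append(page)
    return faults

# Classic reference string that exhibits Belady's Anomaly under FIFO
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_page_faults(refs, 3))  # 9 faults with 3 frames
print(fifo_page_faults(refs, 4))  # 10 faults with 4 frames: more memory, more faults
```

LRU never shows this behavior because the pages resident with n frames are always a subset of those resident with n+1 frames (the stack property); FIFO offers no such guarantee.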
Read more >

HP-41C Pocket Calculator

Marius Marinescu- CTO
March 24

HP-41C Pocket Calculator

39 years ago, NASA demonstrated the power of being prepared for any type of situation: by installing specific software on the HP-41C pocket calculator, the astronauts on the first space shuttle flights were able to calculate the exact angle at which they needed to re-enter the Earth's atmosphere. ”Nowadays a mobile phone has roughly 5.5 million times more processing power than the pocket calculator – and so does the malware. Not that you typically need to re-enter the Earth's atmosphere on a Monday morning using your shiny new mobile phone, but better to keep it safe than sorry.” Marius Marinescu, Chief Technology Officer at Metaminds.
Read more >