• Software Defined Everything. Bare-metal too.
  • Cloud-scale Systems. Cloud-ready Management.
  • Private Cloud Where You Want It. Like on-prem, Only Better.
  • Files, Objects and Meta-data. Cloud-scale Data Management.
  • The Multi-model Database and the (Hybrid) Cloud. Protect Your Data.
  • Attributes and Federation. The Modern (Cloud-ready) Digital Identity.
  • Herding Cats. Microservices, Service Composition and Integration.
  • Portals, Mobile Apps and APIs. Your (Web) Services, Your Rules.
  • Secure Enterprise Mobility. Secure Online Collaboration.
  • Bring Your Own… Apps. Well Beyond MDM.
  • AI (“Amazing Innovations”) for Governance and Cyber Security.
  • Focus on Situational Awareness and Incident Response.
  • Cyber Security Analytics. Empower the Human.
  • Enable Teamwork. Management Automation.

Metaminds

meta universe

Metaminds is Dell’s IT Transformation Partner of the Year

We are always keen to outdo ourselves and to prove that our accomplishments are habits rather than chance events. For the second year in a row, Dell Technologies acknowledged our work and granted us the 'IT Transformation Partner of the Year' award. The recognition was made during the virtual Dell EMC Partner Awards 2020 ceremony. The […]
Read more >

The Top Local Company in the IT&C Field

Our achievements have been acknowledged by the leader of the Romanian business press, Ziarul Financiar.
Read more >

Sharing Ideas for the World to Move Forward

We are always happy to share the knowledge we have gathered so far, especially when this information will help businesses, as well as individuals, come out of this difficult period in better shape than before. The fourth edition of the IDC Events Romania Summit Series, aptly named "The Show Must Go ON Line", was […]
Read more >

Knowledge Corner – Our New Go-to Section for Tech Insights

In pursuing our goal of becoming pioneers in metacognition, we need to have our voice heard. It's our pleasure to introduce "The Knowledge Corner", a brand-new category on our site where you can find fresh insights about bleeding-edge technologies, astonishing pieces of history and personal experiences in computer science and engineering. It is […]
Read more >

meta minds


Multics and its impact on current secure operating systems

Marius Marinescu - CTO
The plan for Multics was presented to the 1965 Fall Joint Computer Conference in a series of six papers. It was a joint project with M.I.T., General Electric and Bell Labs. Bell Labs dropped out in 1969, and in 1970 GE's computer business, including Multics, was taken over by Honeywell (now Bull).

MIT's Multics research began in 1964, led by Professor Fernando J. Corbató at MIT Project MAC, which later became the MIT Laboratory for Computer Science (LCS) and then the Computer Science and Artificial Intelligence Laboratory (CSAIL). Starting in 1969, Multics was provided as a campus-wide information service by the MIT Information Processing Services organization, serving thousands of academic and administrative users.

It was conceived as a general-purpose time-sharing utility and was a commercial product for GE, which sold time-sharing services. It became a GE and then a Honeywell product. Only about 85 sites ever ran Multics, yet it had a powerful impact on the computer field, due to its many novel and valuable ideas.

Since it was designed to be a utility, like electricity and telephone services, this dictated a number of Multics' features, including the modular structure of the hardware (with multiple CPUs and main memory banks, fully interconnected, with the ability to take individual units out of service for maintenance or simply to add units as demand increased over time) and extremely robust security (so that individual users in a facility open to all comers would be protected from each other).

In addition to the modular hardware and robust security, Multics had a number of other major technical features, some commonplace now (and some still not too common, alas!), but major advances when it was first designed, in 1967. They include:

• A single-level store
• Dynamic linking for libraries, etc.
• A command processor implemented entirely in user code
• A hierarchical file system
• Separate access control lists for each 'file'

The single-level store (SLS) architecture of Multics was particularly significant: it discarded the clear distinction between files (called segments in Multics) and process memory. The memory of a process consisted solely of segments that were mapped into its address space. To read or write them, the process simply used normal instructions; the operating system took care of making sure that all the modifications were saved to secondary storage (disk).

In modern UNIX terminology, it was as if every file were 'mmap()'ed. However, in Multics there was no concept of process memory separate from the memory used to hold mapped-in files: all memory in the system was part of some segment, which appeared in the file system; this included the temporary scratch memory of the process, such as its kernel stack.

Multics also implemented virtual memory, which was very new at that time (only a handful of other systems implemented it at that point), but this was not an idea original to Multics. The segmentation and paging in Multics are often discussed together, but it is important to realize that they were not fundamentally connected: one could theoretically have an SLS system that did not page. Paging was added for practical reasons. Multics also popularized the now-common technique of separate per-process stacks in the kernel; the technique had apparently first appeared in the Burroughs B5000, but was not well known. This is an important kernel structuring advance, since it greatly simplifies code.
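As an aside, the 'mmap()' comparison can be made concrete. Here is a minimal sketch in Python (an illustration of the modern POSIX mechanism, not Multics code; the file name is a hypothetical scratch file) showing how a mapped file behaves like a Multics segment: ordinary memory writes are writes to the file.

```python
import mmap
import os

path = "example.dat"                   # hypothetical scratch file
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)            # create one page of backing store

with open(path, "r+b") as f:
    seg = mmap.mmap(f.fileno(), 4096)  # map the file into the address space
    seg[0:5] = b"hello"                # ordinary memory stores...
    seg.flush()                        # ...which the OS persists to disk
    seg.close()

with open(path, "rb") as f:
    assert f.read(5) == b"hello"       # the "memory" was the file all along
os.remove(path)
```

In Multics, of course, there was no separate file API at all: every piece of memory already belonged to some segment in the file system.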
To see why separate per-process kernel stacks simplify code, consider a process that discovers, somewhere deep inside a subroutine call stack, that it needs to wait for an event: it can simply wait right there, instead of having to unwind its way out and then return later when the waited-for event has happened.

The system was written almost entirely in a higher-level language (PL/I), which was quite rare at the time. The Burroughs B5000 had an OS written in ALGOL, but this was the only previous system to do so. Multics ran only on special hardware, which provided hardware support for its single-level store architecture. It initially ran on the GE 645, a modified version of the GE 635. After GE's computer business was bought by Honeywell, a number of models of the Honeywell 6000 series were produced to run Multics.

Although Multics introduced many innovations, it also had many problems, and at the end of the 1960s Bell Labs, frustrated by the slow progress and difficulties, pulled out of the project. A young engineer at AT&T Bell Labs, Kenneth (Ken) Thompson, with the help of his colleagues Dennis Ritchie, Douglas McIlroy and Joe Ossanna, decided to experiment with some Multics concepts and to redo them on a much smaller scale. Thus, in 1969, the idea of the now-ubiquitous Unix was born.

While Ken Thompson still had access to the Multics environment, he wrote simulations for the new file and paging system on it. Later the group continued his work on blackboards and scribbled notes. Also in 1969, Thompson developed a very attractive game, Space Travel, first written on Multics, then transliterated into Fortran for GECOS, and finally ported to a little-used PDP-7 at Bell Labs. He then decided to use the same PDP-7 for the implementation of the first Unix. On this PDP-7, and using its assembly language, the team of researchers (initially without financial support from Bell Labs) led by Thompson and Ritchie developed a hierarchical file system, the concepts of computer processes and device files, a command-line interpreter and some small utility programs.

The name Unics was coined in 1970 by team member Brian Kernighan as a play on the Multics name. Unics (Uniplexed Information and Computing Service) could eventually support multiple simultaneous users, and the name was later shortened to Unix.

Structurally, the file system of PDP-7 Unix was nearly identical to today's. For example, it had:

• An i-list: a linear array of i-nodes, each describing a file. An i-node contained less than it does now, but the essential information was the same: the protection mode of the file, its type and size, and the list of physical blocks holding the contents.
• Directories: a special kind of file containing a sequence of names and the associated i-numbers.
• Special files describing devices. The device specification was not contained explicitly in the i-node, but was instead encoded in the i-number: specific i-numbers corresponded to specific devices.

In 1970, Thompson and Ritchie wanted to use Unix on a much larger machine than the PDP-7, and traded the promise of adding text-processing capabilities to Unix for financial support from Bell Labs, porting the code to a PDP-11/20 machine. Thus in 1970, for the first time, the Unix operating system was officially named and ran on the PDP-11/20. It gained a text-formatting program called roff and a text editor; all three were written in PDP-11/20 assembly language. Bell Labs used this initial "text processing system", made up of Unix, roff and the editor, for text processing of patent applications.
Roff soon evolved into troff, the first electronic publishing program with full typesetting capability.

In 1972, Unix was rewritten in the C programming language, contrary to the general notion at the time "that something as complex as an operating system, which must deal with time-critical events, had to be written exclusively in assembly language" (Unix was not the first OS written in a high-level language, though; that was the Burroughs B5000 from 1961). The C language was created by Ritchie as an improved version of the B language, which Thompson had in turn derived from Martin Richards' BCPL. The migration from assembly language to the higher-level language C resulted in much more portable software, requiring only a relatively small amount of machine-dependent code to be replaced when porting Unix to other computing platforms.

AT&T made Unix available to universities and commercial firms, as well as the United States government, under licenses. The licenses included all source code, including the machine-dependent parts of the kernel, which were written in PDP-11 assembly code. Copies of the annotated Unix kernel sources circulated widely in the late 1970s in the form of a much-copied book, which led to considerable use of Unix as an educational example. At some point, ARPA (Advanced Research Projects Agency) adopted Unix as a standard operating system for the Arpanet (the predecessor of the Internet) community. During the late 1970s and early 1980s, the influence of Unix in academic circles led to its large-scale adoption (particularly of the BSD version, originating from the University of California, Berkeley) by many commercial vendors, giving rise to systems such as Solaris, HP-UX and AIX. Today, in addition to certified Unix systems such as those already mentioned, Unix-like operating systems such as Linux and the BSD descendants (FreeBSD, NetBSD and OpenBSD) are commonly encountered.
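As a footnote to the PDP-7 file-system description above, the i-list, directories and device-encoding i-numbers can be sketched as data structures. This is an illustrative model based on the description in this article, not actual Unix source; all names and the reserved i-number range are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class INode:
    mode: int                    # protection bits
    is_directory: bool           # file type
    size: int                    # file size in bytes
    blocks: list[int] = field(default_factory=list)  # physical block numbers

# The i-list: a linear array of i-nodes, indexed by i-number.
ilist: list[INode] = []

# Special files: the device was encoded in the i-number itself,
# so a fixed set of i-numbers stood for specific devices.
DEVICE_INUMBERS = range(0, 8)    # hypothetical reserved range

def is_device(inumber: int) -> bool:
    return inumber in DEVICE_INUMBERS

# A directory is just a file whose contents are (name, i-number) pairs.
root_directory = [("tty", 1), ("readme", 42)]
```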
Read more >


Stuxnet – the World’s First Digital Weapon

Marius Marinescu - CTO
Stuxnet is an extremely sophisticated computer worm that exploits multiple previously unknown Windows zero-day vulnerabilities to infect computers and spread. Its purpose was not just to infect PCs but to cause real-world physical effects. Specifically, it targets centrifuges used to produce the enriched uranium that powers nuclear weapons and reactors.

Stuxnet was first identified by the infosec community in 2010, but development on it probably began in 2005. Despite its unparalleled ability to spread and its widespread infection rate, Stuxnet does little or no harm to computers not involved in uranium enrichment. When it infects a computer, it checks to see if that computer is connected to specific models of programmable logic controllers (PLCs) manufactured by Siemens. PLCs are how computers interact with and control industrial machinery like uranium centrifuges.

Deconstructed, a PLC is simply the control element of a control system. If you are building a motion-detected lighting system, you need three parts: a sensor, a controller and an actuator. In a lighting system, the sensor would be a thermal sensor that detects human presence or movement; the controller would be a circuit, or something more complex, in which the logic of the system is built; and the actuator would be the lights. The end result is the controller sensing the presence of a human through the sensors and turning on the switch for the lights. This very simple control system gives the ability to program the controller without changing the circuitry or the electrical system associated with it.

Modern PLCs are programmed using the proprietary OEM software that comes along with the system. This software incorporates graphical programming interfaces, such as ladder programming, that enable automation engineers with limited programming knowledge to program the PLCs that automate the connected hardware. In a factory setting, combinations of PLCs are connected using SCADA (Supervisory Control and Data Acquisition) systems that are also programmed using OEM software provided by the system manufacturers, creating an ecosystem of operational technology (OT) software.

The biggest jaw-drop comes when we analyze the security of this software, developed for engineers by OT software developers. Some have dismissed it as "insecure by design", especially when looking at the access privileges and protocol vulnerabilities. A significant number of vulnerabilities have been reported against leading OEM software vendors, calling into question the very competency of the hardware giants to develop secure OT software. It is these vulnerabilities, combined with OS vulnerabilities, that Stuxnet exploited to inflict massive damage on selected critical infrastructure.

It's now widely accepted that Stuxnet was created by the intelligence agencies of the United States and Israel. The classified program to develop the worm was given the code name "Operation Olympic Games"; it was begun under President George W. Bush and continued under President Obama. While neither government has ever officially acknowledged developing Stuxnet, a 2011 video created to celebrate the retirement of Israeli Defense Forces head Gabi Ashkenazi listed Stuxnet as one of the successes under his watch.

While the individual engineers behind Stuxnet haven't been identified, we know that they were very skilled, and that there were a lot of them.
Kaspersky Lab's Roel Schouwenberg estimated that it took a team of ten coders two to three years to create the worm in its final form. The U.S. and Israeli governments intended Stuxnet as a tool to derail, or at least delay, the Iranian program to develop nuclear weapons. The Bush and Obama administrations believed that if Iran were on the verge of developing atomic weapons, Israel would launch airstrikes against Iranian nuclear facilities in a move that could have set off a regional war. Operation Olympic Games was seen as a nonviolent alternative. Although it wasn't clear that such a cyberattack on physical infrastructure was even possible, there was a dramatic meeting in the White House Situation Room late in the Bush presidency during which pieces of a destroyed test centrifuge were spread out on a conference table. It was at that point that the U.S. gave the go-ahead to unleash the malware.

Stuxnet was developed as computer malware that attacked only SCADA systems made by Siemens, the German industrial devices giant. The malware was designed to exploit zero-day vulnerabilities in the Microsoft Windows operating system and in Siemens' SIMATIC STEP 7 and SIMATIC WinCC software. On the Windows side, the creators of the virus exploited four zero-day vulnerabilities to spread. The main objective of Stuxnet was to increase the speed of the Iranian nuclear centrifuges at Natanz until they destroyed themselves, thus damaging the nuclear infrastructure.

It is important to note that most operational technology systems of modern critical infrastructure are built with direct cyber attacks in mind, and are therefore air-gapped in most cases: the local networks of SCADA systems are not connected to unsecured systems such as the Internet. This makes a direct remote cyber attack impossible without the engagement of a physical agent, thus reducing the vulnerability of the system. The developers of Stuxnet took this into consideration.

Stuxnet mainly had three components that worked in sync: a worm to deliver the payload, a link file to replicate the worm, and a rootkit to hide all the malicious code. The malware famously exploited the Windows shortcut vulnerability, through which it spread to removable devices such as flash drives.

The sophistication of Stuxnet's design makes it interesting to study how it affected the Natanz nuclear centrifuges. A rough idea of what happened is as follows:

1. Stuxnet spreads to millions of devices through the internet, infecting computers and copying itself to removable devices such as USB flash drives.

2. Stuxnet infects the computer of a maintenance engineer through a USB flash drive. Since an air gap blocks direct cyber attacks from external networks against the internal network of the Natanz facility, this was the only way such an infection was possible.

3. The malware executes on the local host computer without any indication and replicates rapidly within the local network, exploiting a Windows network vulnerability.

4. The malware finds the control computer running Siemens software and infects its configuration files. There are varying reports of this software being SIMATIC STEP 7 (the Siemens PLC software) or SIMATIC WinCC (the Siemens SCADA software). The infection results in malicious lines of code being executed by the system.

5. The code changes the programming to increase the centrifugal speed of the Natanz centrifuges, thus controlling the hardware. These lines of code are said to have been executed once every 27 days to remain undetectable.

6. The code changes the output of the system to hide the increased centrifugal speeds. For example, if the speed is increased from 10,000 rpm to 15,000 rpm over a period of 3 months, the output from the SCADA system would still display 10,000 rpm as the current speed. This increases the damage to the infrastructure by delaying the date of discovery.

The complexity of Stuxnet led to it being named the world's first digital weapon. Yet for all its sophisticated design, its payload is simply a logic bomb: malware that executes only when a condition is met, in this case finding the control computer of a Siemens S7-400 PLC running SIMATIC WinCC and SIMATIC STEP 7 software. This was the configuration at the Natanz nuclear centrifuges, but not only there. Stuxnet was never intended to spread beyond the Iranian nuclear facility at Natanz. However, the malware did end up on internet-connected computers and began to spread in the wild due to its extremely sophisticated and aggressive nature, though, as noted, it did little damage to the outside computers it infected. Many in the U.S. believed the spread was the result of code modifications made by the Israelis.

The malware ultimately affected 115 countries, damaging thousands of pieces of industrial equipment running the targeted configuration. Symantec, the first to unravel Stuxnet, called it "by far, the most complex piece of code that we've looked at - in a completely different league from anything we'd ever seen before". And while you can find lots of websites that claim to have the Stuxnet code available for download, you shouldn't believe them: the original source code for the worm, as written by coders working for U.S. and Israeli intelligence, hasn't been released or leaked and can't be extracted from the binaries that are loose in the wild. (The code for one driver, a very small part of the overall package, has been reconstructed via reverse engineering, but that's not the same as having the original code.)

Since then, several other worms with infection capabilities similar to Stuxnet's, including those dubbed Duqu and Flame, have been identified in the wild, although their purposes are quite different from Stuxnet's. Their similarity to Stuxnet leads experts to believe that they are products of the same development shop, which is apparently still active.
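As a purely illustrative footnote to stages 5 and 6 above, the deception can be captured in a tiny simulation: a controller that drives the actuator out of its safe range while reporting the expected value. All names here are hypothetical; no real PLC, SCADA or Siemens interface is involved.

```python
import random

class Centrifuge:
    """The actuator: the physical device being driven."""
    def __init__(self):
        self.actual_rpm = 10_000

    def set_speed(self, rpm):
        self.actual_rpm = rpm

class TamperedController:
    """A controller whose report is decoupled from the actual state."""
    REPORTED_RPM = 10_000            # the value operators expect to see

    def __init__(self, device):
        self.device = device

    def step(self):
        # Drive the actuator further out of its safe range...
        self.device.set_speed(self.device.actual_rpm + random.randint(100, 500))
        # ...while the display still shows the expected value.
        return {"display_rpm": self.REPORTED_RPM,
                "actual_rpm": self.device.actual_rpm}

ctrl = TamperedController(Centrifuge())
for _ in range(3):
    print(ctrl.step())   # display stays at 10,000 while the actual rpm climbs
```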
Read more >


Ubuntu: Linux’s most popular distribution for desktop

Marius Marinescu - CTO
Since its inception in 2004, Ubuntu has been built on a foundation of enterprise-grade, industry-leading security practices. From the toolchain to the software suite, and from the update process to industry-standard certifications, Canonical has never stopped working to keep Ubuntu at the forefront of safety and reliability.

In 2014, CESG, the UK government's security arm, published a report of its assessment of the security of all 'End User Device' operating systems. The assessment compared 11 desktop and mobile operating systems across 12 categories, including VPN, disk encryption and authentication. These criteria are roughly equivalent to a standard set of enterprise security best practices, and Ubuntu 12.04 LTS came out on top: the only operating system that passed nine requirements without any "Significant Risks".

The security assessment included the following categories:
• VPN
• Disk Encryption
• Authentication
• Secure Boot
• Platform Integrity and Application Sandboxing
• Application Whitelisting
• Malicious Code Detection and Prevention
• Security Policy Enforcement
• External Interface Protection
• Device Update Policy
• Event Collection for Enterprise Analysis
• Incident Response

At that time no operating system met all of those requirements. Ubuntu, however, scored the highest in a direct comparison. Only three sections of the security assessment drew comments: VPN, Disk Encryption and Secure Boot.

VPN
The comment made by CESG was that "The built-in VPN has not been independently assured to Foundation Grade." This means that the software meets all the technical security requirements of the assessment, but has not been independently assessed to make sure that it wasn't tampered with during the development process.

Disk Encryption
Disk encryption is a similar case to the VPN assessment. For Ubuntu 12.04, CESG states: "LUKS and dm-crypt have not been independently assured to Foundation Grade." LUKS and dm-crypt are used on Ubuntu to encrypt the data on the hard disk and to decrypt it at startup by requesting a password from the user. Without the password, the computer cannot start the operating system or access any of the data.

Secure Boot
Secure Boot is a Microsoft technology invented in cooperation with OEMs to ensure that software cannot be tampered with after the hardware has shipped from the factory. It has provoked much debate in security circles, as the ability to install any software you control is desirable from a security perspective. The German government recently criticised Secure Boot as preventing the installation of specialised secure operating systems after the sale of hardware.

Ubuntu's response, from Ubuntu 12.10 onwards, was to adopt GRUB 2 as the default bootloader, with support for Secure Boot but with the ability to turn it off to modify the OS, if required. Since then, Ubuntu has followed a steady release schedule, each new version introducing new security features and improving on existing ones.

In 2020, Canonical delivered Ubuntu 20.04, which makes available a wide range of cybersecurity capabilities, including an open-source virtual private network (VPN) tunnel dubbed WireGuard that provides better performance than the IPsec and OpenVPN tunneling protocols because it runs in the Linux kernel.
Ubuntu 20.04 Long Term Support (LTS) also adds kernel self-protection measures, assures control-flow integrity, and includes stack-clash protection, a Secure Boot utility, the ability to isolate and confine applications built using Snap containers, and support for Fast IDentity Online (FIDO) multi-factor authentication, which eliminates the need for passwords. This release also adds native support for AMD Secure Encrypted Virtualization with accelerated memory encryption.

These advances will help make IT environments more secure by adding capabilities to the base operating system that are readily accessible. Naturally, as more applications start taking advantage of the security capabilities embedded in Ubuntu 20.04 LTS, the overall state of DevSecOps should improve. DevSecOps is a powerful idea that is still in its infancy; as more security capabilities are embedded into the operating system, the easier it will become for organizations to incorporate cybersecurity functions into the application development and deployment process.

The two primary benefits of embedding more security capabilities into the operating system are, of course, reduced costs and increased performance. The closer security functions run to the kernel, the less overhead is generated, which makes more processing power available to applications.

The move to embed more security capabilities into the base Ubuntu operating system also comes at a time when IT organizations are under increased pressure to reduce costs in the wake of the economic downturn brought on by the COVID-19 pandemic.

Less clear right now is the degree to which organizations choose to standardize on an operating system because of the degree of cybersecurity it enables. However, with developers exercising more influence over the entire IT stack these days, many of them are acutely aware of the performance trade-offs that have historically been made to ensure application security. As such, many developers have a vested interest in cybersecurity functions that can be invoked programmatically at the kernel level.

Of course, cybersecurity teams are not always aware of what security functions are embedded at the operating system level. That may change, however, as more organizations embrace DevSecOps, which shifts much of the responsibility for security onto the shoulders of developers. That so-called shift to the left gives developers more incentive to address a wide range of cybersecurity issues much earlier in the application development process.

Longer term, it remains to be seen how the relationship between cybersecurity teams and developers will evolve. As more cybersecurity capabilities are embedded into operating systems and the IT infrastructure they are deployed on, the overall IT environment will, in time, become much more secure than it is today. There may never be such a thing as perfect security, but many of the low-level security issues that routinely plague IT may soon no longer loom as large as they do today.
Read more >


The Road to the APT

Sergiu Popa - Director of Cybersecurity Research
Everybody in the cyber community should know what an APT is: an Advanced Persistent Threat.

It is a threat. It is advanced. And it is persistent. All threats are supposed to be persistent, so what makes an APT so special?

First, an APT is actually a stealthy threat actor, typically a nation state or state-sponsored group, which gains unauthorized access to a network and remains undetected for a long period of time. Recently, the term has also come to cover non-state-sponsored groups conducting large-scale targeted intrusions for specific goals, not necessarily government-oriented ones.

It is advanced because the operators/creators have a plethora of ideas and concepts in their arsenal. They also have at their disposal a myriad of intelligence-gathering capabilities. It is persistent because it targets specific intelligence. It is a threat because the elements involved are organized, motivated and, most importantly, skilled.

The discussion in this article revolves around the tools that APTs use. Naturally, as a first entry point in our brief analysis, one might state that zero-days are used. However, this is not always the case, as I can confirm from an offensive security standpoint. There might be zero-days available which cannot be utilized to achieve the goal of the mission. The question is: what is there to be done?

First of all, let's define the steps usually taken in an APT operation:

• Initial compromise: performed through social engineering (SE) and spear phishing (SP).
• Establish foothold: gain a foothold in the victim's network with remote administration tools (RATs); create network backdoors and tunnels allowing stealthy access to its infrastructure.
• Escalate privileges: use whatever means necessary to become root or Domain Admin.
• Internal reconnaissance: gather as much information on the infrastructure as possible, mostly operational technology (OT), to the point where its operations can be mimicked effortlessly.
• Move laterally: once sound knowledge of the environment is obtained, compromise everything that could offer further information.
• Maintain presence: ensure continued control over the access channels and credentials acquired in previous steps.
• Complete mission: exfiltrate stolen data from the victim's network.

All these steps are extremely important, and one cannot state that any one step matters more than another. Today, we shall focus on what is probably the exception to that statement (to an extent): the INITIAL COMPROMISE. Let's suppose our entry point is a network user with the role of head of compliance in a company.

We shall not delve at this point into the social-engineering details which make the operation successful, but rather into the technical aspects concerning his or her workstation. What EDR (endpoint detection and response) is in use? What telemetry is collected, and where is it taken?

First, we would have to build a tool that, for all practical purposes, appears legitimate to an anti-virus (AV) product and can simultaneously collect telemetry: which AV is used and where its data is sent.

How is such a tool built? By using ingenious methods in which practically all the elements involved are native OS mechanisms. Living off the land is always the preferred approach. If the advanced operator can also introduce a behavioural dimension to the initial data-gathering operation, then BINGO!

Now what? We have the data collected in stage one. We know they use antivirus X. What is there to be done?

Create a ZERO-DAY against X. How fast can that be done?
Once, Abraham Lincoln was asked: "How long should a man's legs be?" He answered: "Long enough to reach the ground."

Once X is known, it is a question of at most 50 hours before an exploit is created and tested in all the appropriate environments. If X ever finds out about it (post-operation), then it won't be a zero-day anymore. If X never finds out, it stays a zero-day, though zero-days of this kind are so common that they should be classified into a category of their own.

This describes how the Initial Compromise is performed. However, the game has only just begun.

Stay tuned for the parts to follow.
Read more >


Anonymous: the group's evolution and remarkable hacks

Marius Marinescu - CTO
For an unidentified group, the hacker collective called Anonymous has made the news quite a few times since its inception, both for good and for bad. Some say that they might just be the most powerful non-government hacking group in the world. They are also largely considered to be the most famous one. So, exactly how did Anonymous start, where do they come from, and what are they trying to do?

The group, which is composed of a loosely organized international network of hacktivists, has its roots in the online image-based bulletin board 4chan, publicly launched in October 2003. The site was inspired by 2channel, a massive Internet forum with seemingly random content which is especially popular in Japan. 2channel was launched in 1999 and has over 600 boards covering wide-ranging subjects, such as cooking, social news and computers. Visitors to 2channel usually post anonymously, and most of the content on the site is in Japanese. In the spirit of 2channel, 4chan allows people to post anonymously as well, though unlike 2channel, the vast majority of 4chan is in English. Any poster who doesn't put text in the name field automatically gets credited as "Anonymous".

The majority of the forums on 4chan are based on Japanese pop culture, but the most popular forum is /b/, which has a fascinating culture unto itself. A lot of the user-created graphical memes you may see circulating around the Internet, like LOLcats, "All your base are belong to us" and Pedobear, originated in the /b/ forum. As it is an image board, its content is mostly made up of user-generated graphics, usually intended to amuse, to offend, or to do both at the same time. The majority of postings have an unknown author ("Anonymous"), and the "Anonymous" name was inspired by the perceived anonymity under which users posted on 4chan.

The group's two symbols, the Guy Fawkes mask that members wear in public and the "man without the head" image, both underscore the group's inscrutability and lack of any formal leadership. Members of the group call themselves "hacktivists", a word coined from the combination of hacker and activist. When people have technical skills, have access to the Internet and understand how network infrastructure and servers work, it can be tempting to put that knowledge into having some effect on the world. The "activist" part of "hacktivist" means that they don't do their hacking and cracking without a cause. The various people behind Anonymous worldwide are united in a belief that corporations and organizations they consider to be corrupt should be attacked.

Not all of Anonymous' activities involve attacking networks or websites. Anonymous has also been active in initiating public protests. But the web and IRC channels are the lifeblood of the group. If it weren't for the Internet, Anonymous would never have existed.

The hacker collective's first cause to make headlines was a 2008 effort called "Project Chanology". In January 2008, a video from the Church of Scientology was leaked onto YouTube. It was a propaganda video featuring Tom Cruise laughing hysterically. As the clip was arguably unflattering to Scientology, the cult tried to get YouTube to remove it for "copyright infringement". In response, a video credited to Anonymous titled "Message to Scientology" was posted on YouTube. Thus began Project Chanology.

A press release was written explaining the intentions behind Anonymous' Project Chanology.
The release covered why Scientology is a dangerous organization and how the cult's attempt to have the Tom Cruise video removed from YouTube violated the freedom of speech. Scientology has a reputation for financially exploiting its members, engaging in threats and blackmail against people who try to leave the cult, and various other abuses. A "Call to Action", also credited to Anonymous, was posted on YouTube calling for protests outside Church of Scientology centers around the world. At some point in January, a DDoS attack was also launched on the cult's website.

During the various Anonymous protests against Scientology that year, many protestors wore Guy Fawkes masks, in the spirit of the popular film "V for Vendetta", and also to protect their identities from the cult, which is known for attacking dissenters that Scientology calls "Suppressive Persons". Between marches outside Scientology churches and the videos the group posted, they managed to establish their power and resolve in this first project.

In June 2009, President Mahmoud Ahmadinejad was re-elected in Iran, which triggered protests across the country. In response, Anonymous Iran was formed, an online project between Anonymous and The Pirate Bay, a popular but persecuted torrent search engine site. Anonymous Iran offered Iranians a forum to the world which was kept safe amidst the Iranian government's crackdowns on online news about the riots. Project Skynet was launched by Anonymous the same month to fight Internet censorship worldwide.

Operation Didgeridie started in September 2009. The Australian government had plans to censor the Internet at the ISP level, and Anonymous initiated a DDoS attack on Prime Minister Kevin Rudd's website, bringing it down for about an hour.

In February 2010, the Australian government was in the process of passing legislation that would make certain online content illegal. In response, Anonymous engaged in Operation Titstorm, using DDoS attacks to bring down various Australian government websites.

Operation Payback commenced in September 2010. The MPAA (Motion Picture Association of America) and the RIAA (Recording Industry Association of America) had hired the Indian software firm AIPLEX to launch DDoS attacks on The Pirate Bay and other websites related to file sharing. Anonymous executed DDoS attacks of their own, targeting websites linked to all three organizations: the MPAA, the RIAA and AIPLEX.

Operation Payback continued in December, but this time the targets were Mastercard, Visa, PayPal, the Bank of America and Amazon. Those corporations were targeted for blocking charitable donations to WikiLeaks.org, a website for whistleblowers to post insider information about corrupt government activities around the world.

In December 2010, it was reported that Grace Mugabe, the wife of Zimbabwean dictator Robert Mugabe, had profited from illegal diamond mining. The information was revealed via a cable leak to WikiLeaks. Anonymous brought down Zimbabwean websites via DDoS attacks, as a response to Zimbabwean government corruption.

Starting in January 2011, websites for the Tunisian Stock Exchange and the Tunisian Ministry of Industry were brought down by more Anonymous DDoS attacks. It was a reaction to Tunisian government censorship: the Tunisian government had tried to restrict the Internet access of its citizens and had arrested many bloggers and cyberactivists who criticized the government.

Also in January 2011, the Egyptian government became the next target.
Efforts started with the intention of removing Egyptian President Hosni Mubarak from office. Once the government blocked the citizens' access to Twitter, Anonymous brought down Egyptian government websites with DDoS attacks.

In February 2011, Aaron Barr of the security firm HBGary Federal claimed to have infiltrated Anonymous and said he would release information in a press conference. HBGary's website was powered by a CMS (content management system) that had several security loopholes. Because of those loopholes, Anonymous was able to access the site's databases via SQL injection. Usernames, e-mail addresses and password hashes were retrieved. The MD5 password hashes were cracked with rainbow tables, so eventually the entire database became accessible.

By April 2011, Sony had become the next Anonymous target. Sony's PlayStation Network banned the user GeoHot for jailbreaking and modifying his PS3 console; GeoHot had attracted Sony's attention by posting information on the Internet about how to mod PS3s. Throughout April, the PlayStation Network and various Sony websites were brought down via organized DDoS attacks. This was Anonymous' way of coming to GeoHot's defense. It took a number of weeks until the PlayStation Network was operating normally.

In mid-July 2011, people from Adbusters, the anti-consumerism magazine, started discussing what could be done in response to corporate corruption on Wall Street. The "Occupy Wall Street" movement was planned from there, with mass protests on Wall Street starting in September. In August 2011, Anonymous expressed its support with a video posted on YouTube to rally many thousands of people to the protest. The ubiquitous, now Anonymous-related Guy Fawkes masks can often be seen on protestors.

These are just a few prominent examples from their early years of "hacktivity" but, since then, the hacker collective has been involved in everything from "Occupy Wall Street" to the recent violent protests in Minneapolis over the death of George Floyd.

While Anonymous was initially lambasted in the media for cyberattacks on governments and businesses, the group's reputation has shifted recently. There are reports that the group is now even being praised for its work, particularly its mission to combat cyber jihadists. Some even went so far as to call the collective "the digilantes" for their efforts to retaliate against acts of injustice.

"Hacktivism" is now a major phenomenon, and Anonymous is far from the only "hacktivist" group. Networks, servers and databases which may become targets must be audited for security. Harden networks against DDoS attacks, use virtualization and proxy servers when possible, and ensure that passwords and hashes are difficult to crack. Special care must be applied to servers which contain encryption keys.

In the meantime, whoever they are, wherever they are, with their philosophy of activism, hopefully Anonymous continues to use its powers for good rather than evil.
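On the hardening advice above, the HBGary episode shows exactly why fast, unsalted hashes like MD5 fall to rainbow tables. Here is a minimal defensive sketch in Python using the standard library's memory-hard scrypt KDF; the cost parameters shown are one reasonable example, not a universal recommendation.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A unique random salt per user makes precomputed rainbow tables useless.
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1)   # example cost parameters
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```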
Read more >


Cybersecurity’s evolution: from the first computer virus to hacking at a national level

Cristian Gal - CSO
Cybersecurity is a major issue for every business with any kind of internet presence, and that's pretty much every single one. Cybersecurity can affect everything from compliance and data safety to staffing budgets and much more.

Today, cybersecurity is top of mind for just about everyone. But when the internet's first draft appeared a half-century ago, security wasn't in the outline. The technical focus was how to make this new packet-based networking scheme work. Security did not occur to the close-knit crew of academic researchers who trusted each other; it was impossible at the time for anyone else to access the fledgling network. With today's pervasive use of the internet, a modern surge in cyberattacks and the benefit of hindsight, it's easy to see how ignoring security was a massive flaw. Looking back at security events, the relatively short history of cybersecurity reveals important milestones and lessons on where the industry is heading.

1971: The first computer virus is discovered
You might assume that computers had to be invented before the concept of the computer virus could exist, but in a certain sense this isn't quite right. It was mathematician John von Neumann who first conceptualized the idea, in work presented in 1949 suggesting the concept of a self-replicating automatic entity working within a computer. It wasn't until 1971 that the world would see a real computer virus. DEC PDP-10 computers running the TENEX operating system started displaying messages saying "I'm the creeper, catch me if you can!". At the time, users had no idea who or what it could be. Creeper was a worm, a type of computer virus that replicates itself and spreads to other systems; it was created by Bob Thomas at Bolt, Beranek and Newman. While this virus was designed only to see if the concept was possible, it laid the groundwork for viruses to come. A man named Ray Tomlinson (the same guy who invented email) saw this idea and liked it. He tinkered with the program and made it self-replicating - the first computer worm. Then he wrote another program - Reaper, the first antivirus software, which would chase Creeper and delete it.

1983: The first patent for cybersecurity in the US
As computers and systems became more advanced, it was not long until technology experts around the world were looking for ways to patent aspects of computer systems. It was in 1983 that the first patent related to cybersecurity was granted. In September of that year, the Massachusetts Institute of Technology (MIT) was granted U.S. patent 4,405,829 for a cryptographic communications system. The patent introduced the RSA (Rivest-Shamir-Adleman) algorithm, one of the first public-key cryptosystems. Interestingly, given that this was the very first such patent, it is still quite relevant today, as cryptography forms a major part of cybersecurity strategies.

1993: The first DEF CON conference runs
This conference is well known as the major cybersecurity technical conference and a fixture in the calendar of professionals, ethical hackers, technology journalists, IT experts and many more. It first ran in June 1993, organized by Jeff Moss and attended by around 100 people. It wouldn't stay that small for very long: today, the conference is attended by over 20,000 cybersecurity professionals from around the world every year.

1995: SSL is created
There is a security protocol that we are often guilty of taking for granted.
The Secure Sockets Layer (SSL) is an internet protocol that makes it possible to do securely the things we now think of as commonplace, such as buying items online. After the first-ever web browser was released, the company Netscape began working on the SSL protocol. In February 1995, Netscape launched SSL 2.0, which would become the foundation of secure internet use via the Hypertext Transfer Protocol Secure (HTTPS). Today, when you see "HTTPS" in a website address, you know its communications with your browser are encrypted. This was perhaps the most important cybersecurity measure for many years.

2003: Anonymous is created
Perhaps the most famous hacking group in the world, Anonymous made a name for themselves by committing cyberattacks against targets that were generally considered to be bad. The group has no specific leader and is in fact a collection of a large number of users, who may contribute in big or small ways. Together, they exist as an anarchic, digitized global brain. The group came to prominence in 2003 and has carried out many successful hacking attempts against organizations such as the Church of Scientology. Anonymous hackers are characterized by their wearing of Guy Fawkes masks, and the group continues to be linked to numerous high-profile incidents. Its main cause is protecting citizens' privacy.

2010: Hacking uncovered at a national level
Google surprised the world in 2010 when it disclosed a security breach of its infrastructure in China, an attack dubbed "Operation Aurora". Before 2010, it had been very unusual for organizations to announce data breaches. Google's initial belief was that the attackers were attempting to gain access to the Gmail accounts of Chinese human rights activists. However, analysts discovered the true intent was identifying Chinese intelligence operatives in the U.S. who may have been on watch lists for American law enforcement agencies. The attacks also hit more than 50 companies in the internet, finance, technology, media and chemical sectors.

Today: Cybersecurity is more important than ever
It has never been more important for businesses to take cybersecurity seriously. It now has the power to affect just about everything, from search engine optimization (SEO) to overall company budgets and spending needs. Organizations must learn from the fast-moving history of cybersecurity in order to make smart decisions for the future. In recent years, massive breaches have hit name brands like Target, Anthem, Home Depot, Equifax, Yahoo, Marriott and more, compromising data for the companies and billions of consumers. In reaction, stringent regulations to protect citizen privacy, like the EU General Data Protection Regulation (GDPR) and the new California Consumer Privacy Act, are raising the bar for compliance. And cyberspace has become a digital battleground for nation-states and hacktivists. To keep up, the cybersecurity industry is constantly innovating, using advanced machine learning and AI-driven approaches, for example, to analyze network behavior and prevent adversaries from winning. It's an exciting time for the market, and looking back only helps us predict where it's going.
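As an aside on the 1983 patent above, the arithmetic at the heart of RSA fits in a few lines. This is textbook RSA with toy primes, purely for illustration; real deployments use keys hundreds of digits long plus padding schemes.

```python
# Toy RSA: insecure by design, for illustration only.
p, q = 61, 53                     # two small primes
n = p * q                         # public modulus: 3233
phi = (p - 1) * (q - 1)           # Euler's totient: 3120
e = 17                            # public exponent, coprime with phi
d = pow(e, -1, phi)               # private exponent: modular inverse of e

message = 42
ciphertext = pow(message, e, n)   # encrypt: m^e mod n
plaintext = pow(ciphertext, d, n) # decrypt: c^d mod n
assert plaintext == message
```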
Read more >


FORTRAN: the primary development tool for supercomputers

Marius Marinescu - CTO
The list of high-tech tools in continuous use since the early 1950s isn't very long: the Fender Telecaster, the B-52 and Fortran.

Fortran (which started life as FORTRAN, or FORmula TRANslator) was first created at IBM by John Backus in the early 1950s. By the time John F. Kennedy was inaugurated, FORTRAN III had been released, and FORTRAN had the features with which it would become the predominant programming language for scientific and engineering applications. To a nontrivial extent, it still is.

Whereas COBOL was created to be a general-purpose language that worked well for creating applications for business and government purposes, in which reports and human-readable output were key, FORTRAN was all about manipulating numbers and numeric data structures. Its numeric capabilities meant that Fortran was the language of choice for the first generation of high-performance computers, and it remained the primary development tool for supercomputers: platform-specific versions of the language power applications on supercomputers from Burroughs, Cray, IBM and other vendors.

Of course, if the strength of Fortran was the power of its mathematical processing, its weakness was actually getting data into and out of the program. Many Fortran programmers have horror stories to tell, most centering on the "FORMAT" statement that serves as the basis of input and output.

While many scientific applications have begun to move to C++, Java and other modern languages because of the wide availability of both function libraries and programming talent, Fortran remains an active part of the engineering and scientific software development world. If you're looking for a programming language in use on everything from $25 computers that fit in the palm of your hand to the largest computers on earth, you have only a couple of choices. If you want that programming language to be the same one your grandparents might have used when they were beginning their careers, then there's only one option. But that option is not necessarily the safest one.

Some professionals argue that legacy systems significantly increase security incidents in organizations. Other professionals disagree and argue that legacy systems are "secure by antiquity": due to the lack of adequate documentation on these systems, they argue, it is very difficult and costly for potential attackers to discover and exploit their security vulnerabilities.

New research is turning on its head the idea that legacy systems such as Cobol and Fortran are more secure because hackers are unfamiliar with the technology. Recent studies found that these outdated systems, which may not be encrypted or even documented, were more susceptible to threats. By analyzing publicly available federal spending and security breach data, researchers found that a 1% increase in the share of new IT development spending is associated with a 5% decrease in security breaches. In other words, federal agencies that spend more on the maintenance of legacy systems experience more frequent security incidents, a result that contradicts the widespread notion that legacy systems are more secure. That's because the integration of legacy systems makes the whole enterprise architecture too complex, too messy.
A significant share of public IT budgets is spent maintaining legacy systems, although these systems often pose significant security risks, such as the inability to utilize current security best practices, including data encryption and multi-factor authentication, which makes them particularly vulnerable to malicious cyber activity.

There is no simple solution for addressing these legacy systems, but one option could be moving them to the cloud. Migrating legacy systems to the cloud offers some security advantages over running them on premises, because cloud vendors have more resources and capabilities to build effective guardianship of valuable information than their clients do. Cloud vendors use common IT platforms to achieve economies of scale and scope in the production and delivery of IT services to a large number of client organizations.

Thanks to those economies of scale and scope, it is more feasible for the vendors to use dedicated information security teams to protect clients' systems across the common IT platforms. By comparison, a client organization is unlikely to have adequate resources to afford even a fraction of the vendors' dedicated information security teams. In addition, cloud vendors are better able to attract, motivate, promote and retain top security talent, which is necessary as the security threat landscape dynamically evolves. The legacy-system environment of a client organization, on the other hand, is unlikely to offer attractive and sustainable career paths for security professionals who look for opportunities to continuously develop and advance their professional skills and knowledge. In legacy environments, IT professionals spend most of their careers maintaining and operating specific legacy systems and have fewer opportunities to learn about emerging technologies.

Migrating legacy systems to the cloud requires standardization of IT interfaces in the client organization, which can in turn make it easier for the cloud vendors to effectively guard information flows at the access and interaction points around the enterprise architecture. To be able to connect to the cloud and make use of its common, standardized IT services and interfaces, a client organization needs to adhere to the standards mandated by the vendors. The highly standardized interfaces with the client make it easier and less costly for the cloud vendor to apply common security governance and control mechanisms to guard the sensitive information exchanged through those interfaces.
Read more >


ERMA: the starting point for banking automation

Marius Marinescu - CTO
In the early 1950s, Bank of America decided to automate its rapidly expanding check-handling business. Can you imagine a time when you could take a piece of paper of any practical size and color, hand-write your bank name, the payee and the amount, add your signature (maybe legible), and use that as your bank draft or your check?

Eventually this "document" would arrive at your bank for someone to process. Bank of America had set up a chain of banks in California, and their first problem was to determine, from your signature, the branch to which the document should be forwarded. (It was impractical to send the account summaries of each branch to all other branches on a daily basis.)

In any case, the system was large, shaky, error-prone, tardy and labor-intensive. A number of bank employees figured there had to be a better way, and their ideas were effective and deemed worthy of further study.

The Bank of America was good at banking and had deep enough pockets, but did not claim automation expertise. It hired Stanford Research Institute of Menlo Park, CA to design a system. (Stanford Research Institute was "requested" by Stanford University not to use its name, so the name SRI was chosen and is still used.)

Among other problems that SRI addressed was the fact that there was no effective machine (computer) method of reading documents (OCR is still not reliable enough for financial transactions). Ken Eldredge of SRI invented the MICR method of encoding and reading data from documents. This method prevailed over competing methods, and the American Bankers Association finally adopted it.

At the same time, transistors became generally available for practical computer use, and SRI proposed a system using these new transistors instead of the vacuum tubes of the era. General Electric prevailed in suggesting general-purpose computers, which it designed, built and programmed, instead of hard-wired special-purpose logic.

Machines for encoding documents with MICR, as well as machines for reading and sorting documents, had to be developed. SRI made a prototype promising enough that the Bank of America wanted up to 36 commercial versions. SRI did not want to get into the manufacturing business, so Bank of America requested major computer manufacturers to bid on making 30 banking systems based on SRI's ideas and prototype.

To everyone's surprise, the General Electric Computer Department (a department that was non-existent at General Electric at the time) won the $31,000,000 Bank of America ERMA contract. General Electric corporate headquarters didn't know of the bid and didn't know of this new "department". The same day the contract was signed, the bid team received a stern letter from G.E. president Ralph Cordiner stating that "under no circumstances will the General Electric Company go into the business machine business."

The General Electric Computer Department chose Phoenix as headquarters, had a manufacturing establishment built, refined the prototype, built and/or OEMed the system elements, delivered the first system and passed acceptance tests on December 31, 1958. Some "tightening up" of the equipment and operating procedures was necessary to reach the design goal of 55,000 accounts per day.

Bank of America "encouraged" its clients and others to choose preprinted checks using the new MICR line along the bottom edge.
By March 1959, the machines were processing 50,000 accounts/day, and on September 14, 1959, Bank of America and General Electric presented 4 of the proposed 30 systems running in a transcontinental closed-circuit TV press conference. These 4 systems were capable of processing over 220,000 customer accounts in the Los Angeles area. The machines used the newly developed standard E13B magnetic ink font, which GE had designed to be more human-readable. The E13B font is the banking standard to this day and is used on the bottom line of your checks.

The ink utilized for the MICR characters can be magnetized as part of the reading process, to create machine-readable information. The characters "A" to "D" mark the beginning of various fields, such as issuing bank number, customer number and dollar amount. Most fields are preprinted, but the dollar amount is printed after the customer writes it.

The ERMA system served Bank of America well for 8 years (a long time for a commercial data processing system). Unfortunately, when the time for replacement came, General Electric was no longer providing banking-oriented systems or peripherals. The now-obsolete GE-225 series had been popular with banks, but the GE 4xx series was not suitable (it could not respond to interrupts fast enough to handle the document handlers) and the GE 6xx series was too large and expensive for handling documents. General Electric took itself right out of the banking business.

In any case, Bank of America went with IBM. It ordered the IBM 360/65 and took delivery in July 1966, with conversion scheduled to be completed in December 1966. But conversion was deferred as a result of IBM's continued delay in providing a multi-tasking operating system, together with severe tape drive problems. The situation had not improved by the spring of 1967. A ray of hope came in late May 1967 with the successful "start" of the demand deposit conversion, the bank's largest ERMA application. But damage had been done: the delays had a direct economic impact on the bank's profit amounting to $1,471,000. The total impact was estimated to be on the order of millions of dollars, offset partially by IBM's contribution in the form of paying all equipment costs, providing professional help valued at $2,700,160, and maintaining an account balance of more than $14,000,000 in the bank throughout this period.

During the conversion, IBM invested 66 man-years of field engineers and 10 man-years of tape specialists to make the tape system operable. After the conversion, IBM accepted the GE equipment as a trade-in, allowing credit for the remaining book value of the ERMAs. A first for IBM, the allowance was kept confidential to avoid starting a trend. The IBM contract to replace the ERMA systems had a delivery penalty: IBM was to pay the ERMA maintenance until its system was up and running.

Despite the initial high cost and technological setbacks, MICR was so successful in its design that it was adopted as the industry standard by the American Bankers Association (ABA) in 1956. Bank of America made MICR technology available to all banks and printers without royalty charges. In 1984, American Banker stated that "the development of the MICR line, which enabled checks to be sorted and processed at high speeds, has been recognized as one of the great breakthroughs in banking".

Today, the MICR methodology remains the standard around the world.
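For illustration, here is how reading such a line might look in code. This is a minimal Python sketch under simplified assumptions: the E13B control symbols are written as the letters A-D, and the field layout and sample values are hypothetical, not the actual banking specification.

    # Illustrative only: a simplified MICR line in which the special
    # E13B symbols are written as the letters A-D, as described above.
    # Assumed (hypothetical) layout: A<routing>A <account>C <amount>B
    import re

    SAMPLE_MICR = "A122000661A 0123456789C 0000012550B"

    def parse_micr(line: str) -> dict:
        """Split a simplified MICR line into its fields."""
        match = re.fullmatch(r"A(\d+)A\s+(\d+)C\s+(\d+)B", line)
        if not match:
            raise ValueError("not a recognizable MICR line")
        routing, account, amount = match.groups()
        # The amount field is the one printed last, after the
        # customer has written the check.
        return {
            "issuing_bank": routing,
            "customer_number": account,
            "amount_cents": int(amount),  # 12550 -> $125.50
        }

    print(parse_micr(SAMPLE_MICR))

The rigid font and fixed fields are exactly what make a parser this small possible: the reader/sorter never has to cope with free-form handwriting.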
Read more >


How to back up your digital life

Marius Marinescu - CTO
Developing a solid backup plan requires an investment of time and money, but the cost is far less than the burdensome task of recreating data for which no backup exists. [separator] With rising malware attacks and the escalating cost of a data breach - pegged at an average of $3.92 million - cybersecurity has emerged as a top business priority. Yet even with tightened security measures, breaches have increased by 67% over the past 5 years. As a result, the need for a solid backup strategy has become more important than ever. To be truly protected, organizations must form a well-defined plan that can aid in the quick and seamless recovery of lost data and guarantee business continuity when all preventive measures fail.

A comprehensive backup strategy is an essential part of an organization's cyber safety net. Ensuring that critical organizational data is backed up and available for restore in the case of a data loss event can be considered an administrator's prime concern. A backup strategy, along with a disaster recovery plan, constitutes the all-encompassing business continuity plan, which is the blueprint for an organization to withstand a cyberattack and recover with zero-to-minimal damage to the business, its reputation, and its data.

What are the typical threats?

Typical data-threatening situations are accidental deletions, hard disk failures, computer viruses, thefts, fires and floods. Data storage equipment has become more reliable over time, but the hard drive failure rate is still around 4.2-4.8% annually. The risk of a fire accident is about 0.32% annually. Expressed as percentages, these do not seem like huge risks taken individually, but to gauge the total risk level you need to add them up.

While technological risks such as hardware failure are fairly well-defined constants, other risks vary considerably by circumstance. For example, the risk of flooding in your house is quite serious if you live at the seaside or on the banks of a major river. What people often forget is that there is also smaller, man-made "flooding", which may be less dramatic but happens even more often: accidents with water pipes, forgetting a laptop in the rain, spilling coffee all over a computer or dropping a laptop into a swimming pool. You might want to establish some common-sense rules to eliminate some of those risks, like not drinking coffee near your laptop, but some unforeseeable risks remain.

If you add up all the possible risks (and there are many of them), the probability of losing some of your data during the next year may be as high as 25%.

Here we'll detail the steps to develop a dependable backup strategy:

1. Determine what data has to be backed up

"Everything" would probably be your answer. However, the level of data protection should vary based on how critical it is to restore a particular dataset. Your organization's Recovery Time Objective (RTO), which is the maximum acceptable length of time required for an organization to recover lost data and get back up and running, is a reliable benchmark when forming your backup strategy.

Assess and group your applications and data into the following tiers:
• Existentially-critical, for the business to survive
• Mission-critical, for the organization to operate
• Optimal-for-performance, for the organization to thrive

Once all pertinent data is identified, layer the level of protection accordingly.
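To make the tiering exercise concrete, here is a minimal Python sketch of how the classification might be recorded and used to drive a schedule. The tier names follow the list above; the recovery objectives and dataset names are hypothetical examples, not recommendations.

    # Hypothetical tiering of datasets by criticality. Each tier carries
    # an RPO (maximum tolerable data loss, in hours); a dataset must be
    # backed up at least that often.
    TIERS = {
        "existentially-critical":  {"rpo_hours": 1,   "datasets": ["accounting_db", "donor_db"]},
        "mission-critical":        {"rpo_hours": 24,  "datasets": ["email", "documents"]},
        "optimal-for-performance": {"rpo_hours": 168, "datasets": ["media_archive"]},
    }

    def backup_interval_hours(dataset: str) -> int:
        """How often a dataset should be backed up to honor its tier's RPO."""
        for tier in TIERS.values():
            if dataset in tier["datasets"]:
                return tier["rpo_hours"]
        raise KeyError(f"{dataset!r} has not been classified yet")

    print(backup_interval_hours("accounting_db"))  # -> 1: back up hourly

The RPO is formally introduced in step 2 below; the sketch simply shows that the classification, once written down, can drive the backup schedule mechanically.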
Of course, you should back up the data on all of the desktops, laptops, and servers in your office. But what about data stored on staff members' home computers? Or on mobile devices? Is your website backed up? What kind of data is your organization storing in the cloud? How is your email backed up?

It's not usually necessary to back up the complete contents of each individual computer's hard drive - most of that space is taken up by the operating system and program files, which you can easily reload from a CD if necessary.

Also consider data you currently store only in hard copy, as this kind of data is not easily reproducible. For example: financial information, HR information, contracts, leases, etc. This type of information should be stored in a waterproof safe deposit box or file cabinet as well as backed up electronically (either scanned or computer-generated). Give the highest priority to crucial data.

2. Determine how often data has to be backed up

The frequency with which you back up your data should be aligned with your organization's Recovery Point Objective (RPO), defined as the maximum allowable period between the time of data loss and the last useful backup of a known good state. The more often your data is backed up, the more likely you are to comply with your stated RPO. As a rule of thumb, backups should be performed at least once every 24 hours to meet the acceptable standards of most organizations.

Each organization needs to decide how much work it is willing to risk losing and set its backup schedule accordingly. Database and accounting files are your most critical data assets; they should be backed up before and after any significant use. For most organizations, this means backing up these files daily. Nonprofits that do a lot of data entry should consider backing up their databases after each major data-entry session. Core files like documents (such as your Documents folders) and email files should be backed up at least once a week, or even once a day.

3. Identify and implement a suitable backup and recovery solution

Based on your organization's requirements, you need to identify a suitable backup solution as part of your backup strategy.

Some aspects to consider

There are two broadly defined approaches to backup: on-premises backup and remote backup. Either route (or both) may be appropriate for your nonprofit.

In an on-premises setup, you copy your data to a second hard drive, other media, or a shared drive, either manually or at specified intervals. With this setup, all the data is within your reach, and therein lies both its value and its risk. You can always access your information when necessary, but that information is vulnerable to loss, whether through theft (someone breaking in and stealing equipment) or damage (such as a leaky water pipe or a natural disaster).

In remote backup, your computer automatically sends your data to a remote center at specified intervals. To set this up, you simply install the software on every computer containing data you want to back up, set up a backup schedule, and identify the files and folders to be copied. The software then takes care of backing up the data for you. With remote backup solutions, you don't incur the expense of purchasing backup equipment, and in the event of a disaster you can still recover critical data.
This makes remote backup ideal for small nonprofits (say, 2 to 10 people) that need to back up critical information such as donor lists, fundraising campaign documents, and financial data, but lack the equipment, expertise, or inclination to set up dedicated on-site storage.

Automation is another key benefit of remote backup. A software program won't forget to make an extra copy of a critical folder; a harried employee at the end of a busy week might. By taking the backup task out of your users' hands, you avoid the "I forgot" problem.

The main downside to remote backup solutions is that Internet access is required to fully restore your backed-up data. If your Internet connection goes down (as may happen in a disaster scenario), you won't be able to restore from your backups until the connection is restored. Another potential downside is that you have to entrust critical data to a third party, so make sure you choose a provider that is reliable, stable, and secure. You can also help secure your data by encrypting it before it is transmitted to the remote backup center.

With most backup solutions you can choose to back up all of your data (a full backup) or just parts of your data (an incremental or differential backup).

A full backup is the most complete type of backup. It is more time-consuming and requires more storage space than the other options.

An incremental backup only backs up files that have been changed or newly created since the last incremental backup. This is faster than a full backup and requires less storage space. However, in order to completely restore all your files, you'll need to have all the incremental backups available, and in order to find a specific file, you may need to search through several of them.

A differential backup also backs up a subset of your data, like an incremental backup, but it only backs up the files that have been changed or newly created since the last full backup.

Features your organization requires

Below are several essential aspects of a comprehensive and dependable backup and restore solution to consider:
• Ease of backup: automated and/or on-demand options
• Restore flexibility: cross-user, search-based, point-in-time
• Scalability: license and user management
• Ease of use: intuitive user interface and self-service recovery
• Post-purchase experience: free support and unlimited storage
• Strong credentials: superior customer ratings, security & compliance certifications

All backup routines must balance expense and effort against risk. Few backup methods are 100-percent airtight, and those that are may be more trouble to implement than they're worth. That said, here are some rules of thumb to guide you in developing a solid backup strategy.

Develop a written backup plan that tells you:
• What's being backed up
• Where it's being backed up
• How often backups will occur
• Who's in charge of performing backups
• Who's in charge of monitoring the success of these backups

Think beyond just your office and its computers. For on-premises backup solutions, we recommend rotating a set of backups off-site once a week. Ideally, you should store your backups in a secure location, such as a safe deposit box. Another method is to follow the "2x2x2" rule: two sets of backups held by two people at two different locations. Especially if your area is susceptible to natural disasters, think about going a step further.
You need to make sure your local and remote backup solutions won't be hit by the same disaster that damages your office. Although it may sound overly cautious, you will be glad to have a system like this in place should disaster strike.

Consider what data would be most essential to have at your fingertips in an unexpected scenario. If you lose Internet connectivity, online services will be unavailable. What information or files would be key as you wait to regain Internet connectivity (which will enable you to restore from an offsite backup)? Where will you store those files?

4. Test and monitor your backup system

Once your backup system is in place, test it, both to check that the backup is successful and that the restore is smooth and accurate. Verify the backup and restore for the various types of artifacts - accounts, emails, documents, sites, etc. If the backup solution supports end-user backup, inform and educate your users about using it. Finally, remember to monitor your backup performance and regularly check the logs for failures and gaps in coverage.
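To show how steps 2-4 fit together, here is a minimal Python sketch of one incremental pass with immediate verification. The source and target paths and the manifest format are assumptions for illustration; a real solution would also handle deletions, retention and off-site copies.

    # Minimal incremental backup with verification (illustrative only).
    # Copies files changed since the last recorded run, then checks each
    # copy's SHA-256 hash, as step 4 recommends.
    import hashlib, json, shutil, time
    from pathlib import Path

    SOURCE, TARGET = Path("data"), Path("backup")   # assumed locations
    STATE = TARGET / "last_run.json"                # timestamp of last pass

    def sha256(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def incremental_backup() -> None:
        last_run = json.loads(STATE.read_text())["time"] if STATE.exists() else 0.0
        TARGET.mkdir(exist_ok=True)
        for src in SOURCE.rglob("*"):
            if src.is_file() and src.stat().st_mtime > last_run:
                dst = TARGET / src.relative_to(SOURCE)
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dst)
                # Verify immediately: a backup you cannot restore
                # from is not a backup.
                assert sha256(src) == sha256(dst), f"verification failed: {src}"
        STATE.write_text(json.dumps({"time": time.time()}))

    incremental_backup()

Run on a schedule, each pass picks up only what changed since the previous one - which is precisely why every incremental backup in the chain must be kept available for a complete restore.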
Read more >


Computer bugs: from a moth in the relay to ZombieLoad and beyond

Marius Marinescu - CTO
These days, bugs are far more complex than a moth stuck between relay contacts in a computer. In fact, in the past 2-3 years, a new class of bugs (that we now call vulnerabilities) was found directly in Intel processor chips, making them especially hard to detect and get rid of. If exploited, they can be used to steal sensitive information directly from the processor. [separator] The bugs are reminiscent of Meltdown and Spectre from 2018, which exploited a weakness in speculative execution, an important part of how modern processors work. Speculative execution helps processors predict, to a certain degree, what an application or operating system might need next, making the app run faster and more efficiently. The processor executes its predictions if they're needed, or discards them if they're not.

Both Meltdown and Spectre leaked sensitive data stored briefly in the processor, including secrets such as passwords, secret keys, account tokens, and private messages.

Now some of the same researchers are back with an entirely new round of data-leaking bugs. "ZombieLoad", as it's called, is a side-channel attack targeting Intel chips, allowing hackers to exploit design flaws rather than inject malicious code. Intel said ZombieLoad is made up of four bugs, which the researchers reported to the chip maker in April 2019.

Almost every computer with an Intel chip dating back to 2011 is affected by the vulnerabilities.

ZombieLoad takes its name from a "zombie load", an amount of data that the processor can't understand or properly process, forcing it to ask for help from its microcode to prevent a crash. Apps are usually only able to see their own data, but this bug allows data to bleed across those boundary walls. ZombieLoad will leak any data currently loaded by the processor's core, the researchers said. Intel said patches to the microcode will help clear the processor's buffers, preventing data from being read.

In practice, the researchers showed in a proof-of-concept video that the flaws could be exploited to see which websites a person is visiting in real time, and that the technique could easily be repurposed to grab passwords or access tokens used to log into a victim's online accounts.

Like Meltdown and Spectre, it's not just PCs and laptops that are affected by ZombieLoad - the cloud is also vulnerable. ZombieLoad can be triggered in virtual machines, which are meant to be isolated from other virtual systems and their host device.

Although no attacks have been publicly reported, the researchers couldn't rule them out, nor would any attack necessarily leave a trace, they said.

What does this mean for the average user? There's no need to panic, for one. These are far from drive-by exploits where an attacker can take over your computer in an instant. The researchers said ZombieLoad was "easier than Spectre" but "more difficult than Meltdown" to exploit, and both require a specific set of skills and effort to use in an attack.

There are far easier ways to hack into a computer and steal data. But research into speculative execution and side-channel attacks is still in its infancy. As more findings come to light, these data-stealing attacks have the potential to become easier to exploit and more streamlined.

Intel has released microcode to patch vulnerable processors, including Intel Xeon, Intel Broadwell, Sandy Bridge, Skylake and Haswell chips.
Intel Kaby Lake, Coffee Lake, Whiskey Lake and Cascade Lake chips are also affected, as are all Atom and Knights processors.

Other tech giants, like consumer PC and device manufacturers, are also issuing patches as a first line of defense against possible attacks. Computer and operating system makers Apple and Microsoft and browser maker Google have released patches, with other companies expected to follow.

Intel said the latest microcode updates, like previous patches, will have an impact on processor performance: most patched consumer devices could take a 3 percent performance hit at worst, and as much as 9 percent in a datacenter environment.

But with patches rolling out for the past few months, there's no reason to pass on the chance to prevent such an attack.
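If you run Linux, one quick way to confirm that the microcode and kernel mitigations are actually active on a machine is the kernel's own vulnerability reporting in sysfs. A minimal Python sketch follows; the set of entries exposed varies by kernel version.

    # Print the kernel's assessment of CPU side-channel mitigations.
    # ZombieLoad falls under the "mds" entry on kernels that know
    # about it; we simply list whatever this kernel exposes.
    from pathlib import Path

    VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

    if not VULN_DIR.exists():
        raise SystemExit("this kernel does not expose vulnerability status")

    for entry in sorted(VULN_DIR.iterdir()):
        # Typical values: "Not affected", "Vulnerable", or
        # "Mitigation: Clear CPU buffers; SMT vulnerable".
        print(f"{entry.name:20} {entry.read_text().strip()}")

A "Mitigation: ..." line for mds means the patched microcode and kernel support described above are in place.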
Read more >


Top 10 most powerful non-distributed computer systems in the world

Marius Marinescu - CTO
STRETCH was the most complex electronic system yet designed and, in fact, the first one whose design was based on an earlier computer (the IBM 704). Unfortunately, it failed its primary goal of being 200 or even 100 times faster than the competition: it was only about 25-50 times faster. Only seven other Stretch machines were built after the one that went to Los Alamos, all for government agencies (like the Weather Service, for charting the path of storms) or government contractors (like MITRE). [separator] In April 1955, IBM had lost a major bid to build a computer for the U.S. Atomic Energy Commission's Livermore Laboratory to the UNIVAC division of Remington Rand. UNIVAC had promised up to five times the processing power of the Government's bid request, so IBM decided it should play that game too the next time it had an opportunity.

Supercomputers – the pioneers

When Los Alamos Scientific Laboratory next published a bid request, IBM promised that a system operating at 100 times present speeds would be ready for delivery at the turn of the decade. Here is where the categorical split happened between "conventional computers" and supercomputers: IBM committed itself to producing a whole new kind of computing mechanism, one entirely transistorized for the first time. There had always been a race to build the fastest and most capable machine, but the market had not yet begun its path to maturity until that first cell split, when it was determined that atomic physics research represented a different customer profile from business accounting, and needed a different class of machine.

Stephen W. Dunwell was Stretch's lead engineer and project manager. In a 1989 oral history interview for the University of Minnesota's Charles Babbage Institute, he recalled the all-hands meeting he attended, along with legendary IBM engineer Gene Amdahl and several others. There, the engineers and their managers came to the collective realization that there needed to be a class of computers above and beyond the common computing machine if IBM was to regain a competitive edge against competitors such as Sperry Rand.

Gordon Bell, the brilliant engineer who developed the VAX series for DEC, would later recall that engineers of his ilk began using the term "supercomputer" for machines in this upper class as early as 1957, while the 7030 project was underway.

The architectural gap between the previous IBM 701 design and that of the new IBM 7030 was so great that engineers dubbed the new system "Stretch". It introduced the notion of instruction "look-ahead" and index registers, both of which are principal components of modern x86 processor design. Though it used 64-bit "words" internally, Stretch had the first random-access memory mechanism based on magnetic disk, breaking those words down into 8-bit alphanumeric segments that engineers dubbed "bytes".

Though IBM successfully built and delivered eight 7030 models between 1961 and 1963, keeping a ninth for itself, Dunwell's superiors declared the project a failure for being only 30 times faster than 1955 benchmarks instead of 100. Declaring something you built yourself a failure typically prompts others to agree with you, often for no other viable reason. When competitor Control Data set out to build a system a mere three times faster than the IBM 7030, and then in 1964 met that goal with the CDC 6600 (principally designed by Seymour Cray), the "supercomputer" moniker stuck to it like glue.
(Even before Control Data ceased to exist, the term attached itself to Cray.) Indeed, the CDC 6600 introduced vector processing, executing single instructions on multiple registers in sequence, which was the beginning of parallelism. And no computer today, not even your smartphone, is without parallel processing, index registers, look-ahead instruction pre-fetching or bytes.

The giants of supercomputing

According to Top500.org, IBM nowadays sits in the second spot of the supercomputer race. The TOP500 project ranks and details the 500 most powerful non-distributed computer systems in the world. The project was started in 1993 and publishes an updated list of the supercomputers twice a year. The first of these updates always coincides with the International Supercomputing Conference in June, and the second is presented at the ACM/IEEE Supercomputing Conference in November. The project aims to provide a reliable basis for tracking and detecting trends in high-performance computing, and bases its rankings on HPL, a portable implementation of the high-performance LINPACK benchmark, written in Fortran, for distributed-memory computers.

The 55th edition of the TOP500 saw some significant additions to the list, spearheaded by a new number one system from Japan. The latest rankings also reflect steady growth in aggregate performance and power efficiency.

The new top system, Fugaku, turned in a High Performance Linpack (HPL) result of 415.5 petaflops, besting the now second-place Summit system by a factor of 2.8x. Fugaku is powered by Fujitsu's 48-core A64FX SoC, becoming the first number one system on the list to be powered by ARM processors. In single or further reduced precision, which is often used in machine learning and AI applications, Fugaku's peak performance is over 1,000 petaflops (1 exaflops). The new system is installed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan.

Number two on the list is Summit, an IBM-built supercomputer that delivers 148.8 petaflops on HPL. The system has 4,356 nodes, each equipped with two 22-core Power9 CPUs and six NVIDIA Tesla V100 GPUs. The nodes are connected with a Mellanox dual-rail EDR InfiniBand network. Summit is running at Oak Ridge National Laboratory (ORNL) in Tennessee and remains the fastest supercomputer in the US.

At number three is Sierra, a system at the Lawrence Livermore National Laboratory (LLNL) in California, achieving 94.6 petaflops on HPL. Its architecture is very similar to Summit's, with two Power9 CPUs and four NVIDIA Tesla V100 GPUs in each of its 4,320 nodes. Sierra employs the same Mellanox EDR InfiniBand as the system interconnect.

Sunway TaihuLight, a system developed by China's National Research Center of Parallel Computer Engineering & Technology (NRCPC), drops to number four on the list. The system is powered entirely by Sunway 260-core SW26010 processors. Its HPL mark of 93 petaflops has remained unchanged since it was installed at the National Supercomputing Center in Wuxi, China in June 2016.

At number five is Tianhe-2A (Milky Way-2A), a system developed by China's National University of Defense Technology (NUDT). Its HPL performance of 61.4 petaflops is the result of a hybrid architecture employing Intel Xeon CPUs and custom-built Matrix-2000 coprocessors. It is deployed at the National Supercomputer Center in Guangzhou, China.

A new system on the list, HPC5, captured the number six spot, turning in an HPL performance of 35.5 petaflops.
HPC5 is a PowerEdge system built by Dell and installed by the Italian energy firm Eni S.p.A., making it the fastest supercomputer in Europe. It is powered by Intel Xeon Gold processors and NVIDIA Tesla V100 GPUs and uses Mellanox HDR InfiniBand as the system network.

Another new system, Selene, is in the number seven spot with an HPL mark of 27.58 petaflops. It is a DGX SuperPOD, powered by NVIDIA's new "Ampere" A100 GPUs and AMD's EPYC "Rome" CPUs. Selene is installed at NVIDIA in the US. It too uses Mellanox HDR InfiniBand as the system network.

Frontera, a Dell C6420 system installed at the Texas Advanced Computing Center (TACC) in the US, is ranked eighth on the list. Its 23.5 HPL petaflops are achieved with 448,448 Intel Xeon cores.

The second Italian system in the top 10 is Marconi-100, which is installed at the CINECA research center. It is powered by IBM Power9 processors and NVIDIA V100 GPUs, employing dual-rail Mellanox EDR InfiniBand as the system network. Marconi-100's 21.6 petaflops earned it the number nine spot on the list.

Rounding out the top 10 is Piz Daint at 21.2 petaflops, a Cray XC50 system installed at the Swiss National Supercomputing Centre (CSCS) in Lugano, Switzerland. It is equipped with Intel Xeon processors and NVIDIA P100 GPUs.

Interesting facts revealed by the TOP500:

China continues to dominate the TOP500 when it comes to system count, claiming 226 supercomputers on the list. The US is number two with 114 systems; Japan is third with 30; France has 18; and Germany claims 16. Despite coming in second on system count, the US continues to edge out China in aggregate list performance with 644 petaflops to China's 565 petaflops. Japan, with its significantly smaller system count, delivers 530 petaflops.

Chinese manufacturers also dominate the list in the number of installations, with Lenovo (180), Sugon (68) and Inspur (64) accounting for 312 of the 500 systems. HPE claims 37 systems, while Cray/HPE has 35 systems. Fujitsu is represented by just 13 systems, but thanks to its number one Fugaku supercomputer, the company leads the list in aggregate performance with 478 petaflops. Lenovo, with 180 systems, comes in second in performance with 355 petaflops.

Regardless of manufacturer, a notable technology trend is that a total of 144 systems on the list use accelerators or coprocessors, nearly the same as the 145 reported six months ago. As has been the case in the past, the majority of the systems equipped with accelerators or coprocessors (135) use NVIDIA GPUs.

x86 continues to be the dominant processor architecture, present in 481 of the 500 systems. Intel claims 469 of these, with AMD installed in 11 and Hygon in the remaining one. Arm processors are present in just four TOP500 systems, three of which employ the new Fujitsu A64FX processor, with the fourth powered by Marvell's ThunderX2 processor.

The breakdown of system interconnect share is largely unchanged from six months ago. Ethernet is used in 263 systems, InfiniBand is used in 150, and the remainder employ custom or proprietary networks. Despite Ethernet's dominance in sheer numbers, those systems account for 471 petaflops, while InfiniBand-based systems provide 803 petaflops. Due to their use in some of the list's most powerful supercomputers, systems with custom and proprietary interconnects together represent 790 petaflops.

The most energy-efficient system on the Green500 is the MN-3, based on a new server from Preferred Networks.
It achieved a record 21.1 gigaflops/watt during its 1.62 petaflops performance run. The system derives its superior power efficiency from the MN-Core chip, an accelerator optimized for matrix arithmetic. It is ranked number 395 on the TOP500 list.

In second position is the new NVIDIA Selene supercomputer, a DGX A100 SuperPOD powered by the new A100 GPUs. It occupies position seven on the TOP500.

In third position is the NA-1 system, a PEZY Computing/Exascaler system installed at NA Simulation in Japan. It achieved 18.4 gigaflops/watt and is at position 470 on the TOP500.

The number nine system on the Green500 is the top-performing Fugaku supercomputer, which delivered 14.67 gigaflops/watt, just behind Summit, which achieved 14.72 gigaflops/watt.

The TOP500 list has also incorporated results from the High-Performance Conjugate Gradient (HPCG) benchmark, which provides an alternative metric for assessing supercomputer performance and is meant to complement the HPL measurement.

The number one TOP500 supercomputer, Fugaku, is also the leader on the HPCG benchmark, with a record 13.4 HPCG-petaflops. The two US Department of Energy systems, Summit at ORNL and Sierra at LLNL, are second and third, respectively, on the HPCG benchmark. Summit achieved 2.93 HPCG-petaflops and Sierra 1.80 HPCG-petaflops. All the remaining systems achieved less than one HPCG-petaflops.
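As a quick sanity check on the Green500 arithmetic: efficiency is sustained performance divided by power draw, so the figures quoted above imply each system's power budget. A minimal Python sketch using the MN-3 numbers (the wattage is derived here, not reported on the list):

    # Green500 efficiency = HPL performance / power draw.
    # MN-3: 1.62 petaflops at a record 21.1 gigaflops/watt.
    PFLOPS = 1.62                   # MN-3 HPL result
    EFFICIENCY_GF_PER_W = 21.1      # Green500 figure

    gigaflops = PFLOPS * 1_000_000  # 1 petaflops = 1,000,000 gigaflops
    power_watts = gigaflops / EFFICIENCY_GF_PER_W

    print(f"Implied power draw: {power_watts / 1000:.1f} kW")  # ~76.8 kW

By the same arithmetic, a 415.5-petaflops system at Fugaku's 14.67 gigaflops/watt implies a draw in the tens of megawatts, which is why power efficiency, not raw speed, has become a binding constraint in supercomputing.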
Read more >


Lorem Ipsum: origins, evolution and controversy

Cristian Gal - CSO
August 31, 1994 is the day Aldus Corp. and Adobe Systems Inc. finalized their merger. The two companies hoped to combine forces in creating powerful desktop publishing software, building on the field Aldus founder Paul Brainerd had created in 1985 with his PageMaker software. PageMaker was one of three components of the desktop publishing revolution; the other two were the invention of PostScript by Adobe and the LaserWriter laser printer from Apple. All three were necessary to create a desktop publishing environment. [separator] With the advent of desktop publishing environments, the passage "Lorem Ipsum..." became the popular dummy text of the printing and typesetting industry - although Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularized in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and became widely used within every desktop and online publishing environment.

Search the Internet for the phrase "lorem ipsum", and the results reveal why this strange phrase has such a core connection to the lexicon of the Web. Its origins are murky, but according to multiple sites that have attempted to chronicle the history of this word pair, "lorem ipsum" was taken from a scrambled and altered section of "De finibus bonorum et malorum" (translated: "Of Good and Evil"), a 1st-century B.C. Latin text by the great orator Cicero.

According to Cecil Adams, curator of the Internet trivia site The Straight Dope, the text from that work of Cicero was available for many years on adhesive sheets in different sizes and typefaces from a company called Letraset. "In pre-desktop-publishing days, a designer would cut the stuff out with an X-acto knife and stick it on the page", Adams wrote. "When computers came along, Aldus included lorem ipsum in its PageMaker publishing software, and you now see it wherever designers are at work, including all over the Web."

This pair of words is so common that many Web content management systems deploy it as default text. Things get really interesting when you realize that "lorem ipsum" could be transformed into so many apparently geopolitical and startlingly modern phrases when translated from Latin to English using Google Translate. Even though the algorithm has since been changed, a while back users could notice a bizarre pattern: when one typed "lorem ipsum" into Google Translate, the default results (with the system auto-detecting Latin as the language) returned a single word: "China".

Capitalizing the first letter of each word changed the output to "NATO", the acronym for the North Atlantic Treaty Organization. Reversing the words in both lower and upper case produced "The Internet" and "The Company" (the "Company" with a capital "C" has long been a code word for the U.S. Central Intelligence Agency). Repeating and rearranging the word pair with a mix of capitalization generated even stranger results. For example, "lorem ipsum ipsum ipsum Lorem" generated the phrase "China is very very sexy."

Security researchers wondered what was going on. Had someone outside of Google figured out how to map certain words to different meanings in Google Translate?
Was it a secret or covert communications channel? Perhaps a form of communication meant to bypass the censorship erected by the Chinese government with the Great Firewall of China? Or was this all just some coincidental glitch in the Matrix? :)

One thing was for sure: the results were subtly changing from day to day, and it wasn't clear how long these two common but obscure words would continue to produce the same results.

Things got even more interesting when the researchers started adding other words from the Cicero text out of which the "lorem ipsum" bit was taken, including: "Neque porro quisquam est qui dolorem ipsum quia dolor sit amet, consectetur, adipisci velit..." ("There is no one who loves pain itself, who seeks after it and wants to have it, simply because it is pain...").

Adding "dolor" and "sit" and "consectetur", for example, produced even more bizarre results. Translating "consectetur Sit Sit Dolor" from Latin to English produced "Russia May Be Suffering", while "sit sit dolor dolor" translated to "He is a smart consumer."

Latin is often dismissed as a "dead" language, and whether or not that is fair or true, it seems pretty clear that there should not be Latin words for "cell phone", "Internet" and other mainstays of modern life in the 21st century. This incongruity, however, helps shed light on one possible explanation for such odd translations: Google Translate simply doesn't have enough Latin texts available to have thoroughly learned the language.

In an introductory video titled "Inside Google Translate", Google explains how the translation engine works, where its intelligence comes from, and what its limitations are. According to Google, its Translate service works "by analyzing millions and millions of documents that have already been translated by human translators... These translated texts come from books, organizations like the United Nations and Web sites from all around the world. Our computers scan these texts looking for statistically significant patterns. That is to say, patterns between the translation and the original text that are unlikely to occur by chance. Once the computer finds a pattern, you can use this pattern to translate similar texts in the future. When you repeat this process billions of times, you end up with billions of patterns, and one very smart computer program. For some languages, however, we have fewer translated documents available and, therefore, fewer patterns that our software has detected. This is why our translation quality will vary by language and language pair."

Still, this doesn't quite explain why Google Translate would include so many specific references to China, the Internet, telecommunications, companies, departments and other odd couplings when translating Latin to English.

Apparently, Google took notice, and something important changed in Google's translation system that now makes the described examples impossible to reproduce :)

Google Translate abruptly stopped translating the word "lorem" into anything but "lorem" from Latin to English, although it still produces amusing and peculiar results when translating Latin to English in general.

A spokesman for Google said the change was made to fix a bug with the Translate algorithm (aligning "lorem ipsum" Latin boilerplate with unrelated English text) rather than a security vulnerability.
Security researchers, however, said they remain convinced that the lorem ipsum phenomenon was not an accident or chance occurrence.
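On a lighter note, producing this kind of placeholder text is trivial, which is part of why so many content management systems ship it as default content. Here is a minimal Python sketch that scrambles words from the Cicero passage quoted above; it is purely illustrative, not how any particular generator works:

    # Generate dummy text by scrambling words from the Cicero passage -
    # roughly what "lorem ipsum" generators do.
    import random

    CICERO = ("neque porro quisquam est qui dolorem ipsum quia dolor "
              "sit amet consectetur adipisci velit").split()

    def lorem(n_words=12, seed=None):
        rng = random.Random(seed)
        words = [rng.choice(CICERO) for _ in range(n_words)]
        return " ".join(words).capitalize() + "."

    print(lorem())  # e.g. "Dolor sit quia ipsum amet velit porro est ..."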
Read more >


Email security: from the first email sent from space to modern-day CESS

Marius Marinescu - CTO
In this age, e-mail services have become an intrinsic part of our lives, and we give them much more credit than we should. Take a look at e-mail's history and at how that history shapes its nature today. [separator] Nowadays we use e-mail services from different cloud service providers and cannot envision a world without them. As with any mass-scale service, the threats are numerous, and the fact that email is not safe by nature doesn't help in securing this type of service. Email was never meant to be secure. The way email is used today - and its security needs - differs greatly from what its inventors intended. Email security uses AI and other filtering techniques to stop malware, phishing scams and business email compromise (BEC). As malicious actors turn to cloud environments to exploit G Suite and attack Office 365, email security is a vast undertaking with no one-size-fits-all approach. Good email security begins with a comprehensive understanding of the threat and a willingness to evolve as email usage continues to rise and change.

Email's evolution

Relative to modern computer technologies, email evolved slowly. It originated in MIT's Compatible Time-Sharing System in 1965, which stored files and messages on a central disk, with users logging in from remote computers. In 1971, the @ symbol was introduced to help users target specific recipients. In 1977, the "To" and "From" fields and message forwarding were created within DARPA's ARPANET, constituting email's first standard. These advances created the conditions for spam prototypes, and in 1978 the first mass email was sent to 397 ARPANET users. It was so unpopular that no one would try it again for a decade. Email security became necessary in the late 1980s, when spam proliferated as a prank among gamers and quickly gained prominence as a criminal activity.

Thirty years later, email is vastly more powerful and sophisticated, with the cloud connecting users and syncing files in real time. These factors have incentivized malicious actors to send nearly 4.7 billion phishing emails every day. Phishing is one of the most prominent forms of cybercrime today. Hackers use social engineering techniques to fool even the most attentive employees into opening a malicious attachment, clicking on a malicious link or disclosing credentials. 92% of malware is delivered via a successful phishing attack over email, enabling hackers to access corporate data infrastructures and steal millions of dollars or personally identifiable information (PII).

In reality, email security will never be 100% safe. Knowing this, hackers will never stop leveraging email as an attack vector - especially not when there are close to 200 million Office 365 users and 1.5 billion G Suite users sharing confidential information and documents to do their jobs. The email providers protecting those users are responsible for the security of the cloud, but you're responsible for the security of your data in the cloud. Email is the double-edged sword of the business world: it's the enterprise's communicative lifeblood, but that makes it the primary point of entry for hackers. All a hacker needs is one successful phishing email to open up opportunities for malware, ransomware, BEC and other attack methods to obtain credentials and hold the organization hostage.
According to the Internet Crime Complaint Center (IC3), both the number of complaints about cyberattacks and the financial losses of the attacked businesses have steadily increased over the past few years. In 2018, the IC3 received over 350,000 complaints (50,000 more than the year before) and financial losses nearly doubled, from $1.4 billion to $2.7 billion. Although these statistics can be attributed to more reporting from increased user awareness, they also do not account for how many attacks go unreported, simply because victims don't know about them until it's too late. That's one of the most insidious parts of email attacks: they allow the hacker to lurk in networks, observing systems and processes, waiting for the right moment to strike, and implicating potentially anyone and anything ever associated with the compromised account. Even business partners and clients are vulnerable. Given the stakes of poor email security, it's jarring to see how many businesses around the world are unprepared for email attacks.

Today's cloud environment means email security must go beyond the capabilities of most Secure Email Gateways, which were originally designed to protect on-premises email. In a cloud environment, email security must prioritize anti-phishing, anti-malware and anti-spam capabilities. With email integrated into applications and file sharing, business collaboration suites like Office 365 and G Suite present hackers with multiple entry points and exploits once inside the system. This also means that email security must include mid-attack measures, like compromised account detection and access management tools. Email security must also offer full-suite protection: it should connect to the native API of cloud email providers and associated SaaS/productivity applications, like OneDrive and SharePoint.

Key capabilities of the anti-phishing, anti-malware and anti-spam email security market start with content inspection tools, like a network sandbox: an isolated environment that mimics end-user operations and detonates files safely. The network sandbox allows for the proactive identification of malicious content, which administrators can disarm and reconstruct. In the live environment, URL rewriting detects malicious links (sometimes rebranding the links, so that end users can see that security has done its job) and performs time-of-click analysis.

Every email security solution should scan message attachments before they are passed through to their recipients. It's far too easy, for instance, for hackers to embed malware in an attachment. Hackers also use attachments to execute various Business Email Compromise schemes, like false invoices. In that case, the hacker might have breached the company accountant's account and pretended to be them while emailing an unsuspecting employee, asking them to sign off on a doctored invoice. If the forged email is reasonably good, and if the recipient is working quickly and expecting the invoice, there may be no way to stop the scam - unless the email security system detects a blemish in the attachment.

Web isolation services prevent malware and phishing threats while allowing broad web access, by isolating potentially risky traffic. But phishing attacks and BEC need to be stopped with more targeted security measures, like display name spoof detection, domain-based message authentication, lookalike domain detection and anomaly detection. Together, these features identify compromised accounts.
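As a concrete example of domain-based message authentication, a receiving service can fetch the DMARC policy that a sender's domain publishes in DNS and act on it. Here is a minimal Python sketch using the third-party dnspython package (2.x); the domain is a placeholder:

    # Look up a domain's DMARC policy, published as a TXT record
    # at _dmarc.<domain>. Requires: pip install dnspython
    import dns.resolver

    def dmarc_policy(domain):
        try:
            answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            return None  # no DMARC record published
        for rdata in answers:
            txt = b"".join(rdata.strings).decode()
            if txt.lower().startswith("v=dmarc1"):
                return txt  # e.g. "v=DMARC1; p=reject; rua=mailto:..."
        return None

    print(dmarc_policy("example.com"))

A policy of p=reject tells receivers to refuse mail that fails SPF/DKIM alignment, which stops direct spoofing of the domain itself - and is exactly why attackers fall back on the display-name and lookalike-domain tricks that the more targeted measures above address.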
The best-adapted email security solutions connect to the cloud via APIs and use artificial intelligence (AI) to detect communication patterns and relationships between employees and customers. With this data, the solution uses a threat detection algorithm and machine learning to prevent hackers from weaponizing the email suite in an account takeover scenario. The ability to gather real-time and historical data on every user, file, event and policy - not only for internal accounts, but for everyone who has access - allows for a seamless threat protection protocol. Solutions that adapt to each specific business environment are preferable to one-size-fits-all vendors whose product works the same for every customer.

Email security policies

The first step to securing email is writing solid email security policies. This begins with developing a complete understanding of incoming email to the company: what are routine communications between clients, partners and the organization? To define this, the email security solution should learn the environment rapidly, using protection for links, attachments, suspicious subject lines, sender behavior, and language within the email. Smart email security policies should then clearly define what happens next in a workflow, but leave enough flexibility for the policies to meet the organization's needs. For instance, should all suspicious content be sent to the spam folder or to a quarantine folder for review? Should it be separated from the message, which can otherwise continue through? Suspicious content needs to be sent to a secure location for detailed analysis. If a threat is detected, the policies should state an action item for investigating the scope of that specific threat, and for determining whether it has affected other parts of the cloud infrastructure.

Finally, once the entirety of the malicious activity is uncovered, there need to be policies around reinforcing encryption and safeguarding against future attacks. But what decisions guide changing existing policies and creating new ones? A good first step, after a breach, is to analyze the originally breached email with full headers and original attachments, so that you can examine IP addresses. It's equally important to examine click patterns, both as recorded by systems and as practiced by the user. What was the user thinking when they encountered the phishing email? Did they notice any suspicious activity around that time? Once you've gained a thorough understanding of the incident, seize the opportunity to take smart account cleanup measures. Changing passwords is a must. Keep track of active sessions for the affected users, to ensure that the hacker isn't still able to access the network through a legitimate channel like a VPN. Check mailbox configurations to see if the hacker changed them during the compromise. Finally, integrate more targeted security tools into email and the associated applications that end users rely on every day, so that business can keep on humming in an even more secure state than before.

The best email security practices blend seamless protection for users with the reinforcement that protection is there. All users should know a few simple tactics for securing their accounts from the onset - the equivalents of not leaving the front door of their home unlocked. But they should also know that their organization has installed email security defenses.
A strong password is the absolute bare minimum of email security - yet analysis of breached accounts shows that millions of users still choose bad passwords like "123456", "qwerty", "password", or their first name. In today's world, when the average business user needs 191 passwords, password managers are a savior. Password managers like LastPass generate passwords for you and store them in secure environments, reinforcing that the best password is the one you don't know. 2-Factor Authentication (2FA) involves account log-in confirmation, like when a user receives a text or email asking if they're trying to log into their account. These are the best-known - but least secure - forms of 2FA. 2FA is a part of multi-factor authentication (MFA) and, although MFA is not foolproof, it's another baseline email security measure, one that can catch the fallout from a weak password.

An understandably common-sense anti-phishing solution is to raise awareness among employees. If an end user knows the dangers of phishing, why would they click that unfamiliar link? However, research shows the limitations of that defense. One study showed that although 78% of 1,700 participants knew the risk of unknown links infecting their computers with viruses, up to 56% of email users clicked a malicious link anyway. Why? They were curious. Anti-phishing employee training can't prevent phishing attacks, but a more specific type of anti-phishing behavioral conditioning can be taught, particularly on the platforms pioneered by KnowBe4. Employees can be trained to spot suspicious email activity and be equipped with user-friendly tools for reporting. These reports can be valuable to a security operations team tasked with monitoring threats, containing them if initiated, and analyzing them for future preventative measures. With email always evolving, the types of email security must always evolve. Legacy security solutions need to be updated for new environments, and new solutions must prove their viability. In general, all types of email security fall into two main stages: pre-delivery and post-delivery.

Pre-delivery protection

A Secure Email Gateway is a longtime staple of email security. Because they were built for on-premises email environments, Secure Email Gateways were designed to be a firewall for email, and they remain that way today. With this approach, a Secure Email Gateway rejects spam, prevents data loss, inspects content, encrypts messages and more. Secure Email Gateways protect inbound and outbound messages - but email today does far more than send and receive messages. By connecting to file sharing suites and essential workplace applications, email links every facet of a user's online identity. Without an add-on at extra cost, a Secure Email Gateway cannot see these essential elements of daily use, so it cannot secure them.

Another problem with Secure Email Gateways is that they are lighthouses for hackers. To reroute email through a Secure Email Gateway, an organization must change its MX records to those of the gateway. Hackers know this and have found a massive loophole in this deployment mode that lets them send malicious content directly to employees: because MX records are publicly available on sites like MXToolbox, hackers can identify which vendor an organization uses to secure its environment, identify the root domain and bypass the scan of that specific Secure Email Gateway (a short illustration of this reconnaissance appears at the end of this article).

Post-delivery protection

What kind of organization needs this new type of email security that scans inside the perimeter?
One that:
• requires on-demand scanning of mailboxes, generally as a secondary scan at low-use times;
• wants to quickly manage outbreaks that spread through email;
• demands detection methods that use historical communication patterns (for example, to build social graphs in defense against phishing);
• has substantial intra-domain email traffic without routing through an SEG;
• uses applications that have programmatic access to the mail server;
• has users who regularly post messages in public folders.

These solutions integrate well in modern email environments.

Cloud Email Security Supplement (CESS) is a term coined by Gartner analysts to describe the new measures needed in the emerging continuous adaptive risk and trust assessment (CARTA) approach to cyber security. The fact that this subset of API-based email security uses intelligence from existing security gives it a leg up on gateways, which require you to deactivate built-in security. But all of these solutions need emails to arrive in the inbox before security scans can begin, and this delay in scanning gives business end users a small window in which to click on a phishing email. As their name suggests, CESS solutions may for now be supplemental ways of protecting the entire Microsoft Office 365 suite, for instance. But once a CESS is in place, the organization can affirm that it satisfies all required security protection and risk avoidance, which can lead to email security consolidation - and substantial cost savings.

Another important Gartner term in email security is Security Orchestration, Automation and Response (SOAR). This refers to a solution stack that can be applied to compatible products and services, and that helps define, prioritize, standardize and automate incident response functions. Recently, specialists in endpoint, malware and email/collaboration security introduced a new term, M-SOAR (Mail-focused SOAR), as a way of focusing exclusively on email threats, as opposed to orchestration. M-SOAR is a capability at the intersection of email gateways, awareness/training and collaboration suite security software - an email security capability that few organizations currently have.
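To illustrate the gateway-fingerprinting reconnaissance mentioned in the pre-delivery section, looking up an organization's MX records takes a single DNS query. A minimal Python sketch with the third-party dnspython package (2.x); the domain is a placeholder:

    # One DNS query reveals which mail gateway a domain routes through.
    # Requires: pip install dnspython
    import dns.resolver

    def mail_gateways(domain):
        answers = dns.resolver.resolve(domain, "MX")
        # A hostname ending in, say, ".mail.protection.outlook.com" or
        # ".pphosted.com" immediately identifies the security vendor.
        return sorted(str(r.exchange).rstrip(".") for r in answers)

    print(mail_gateways("example.com"))

This is exactly the information services like MXToolbox display, which is why a gateway whose location is advertised in public DNS cannot be the only layer of defense.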
Read more >


Cybersecurity – Actual Facts and Real-World Scenarios

Sergiu Popa - Director of Cybersecurity Research
Cybersecurity is a subject that gained a lot of popularity with the broad public starting around 2007-2008. Why is that? Let's analyze the history of some events and see whether we can come up with an explanation. [separator] The hacker culture originated at MIT in the 1960s. Back then, computer systems were pretty fragile in terms of architecture, and a lot was permitted to the astute architect or programmer - we are talking, in other words, about a certain lack of robustness. The first hack to have ever occurred was in an MIT computer system, Multics (the great-great-grandfather of Unix), which had a password check procedure that verified the length of a specific cryptographic string. The 20 years that followed led to the development of several hacking cultures throughout the world: some good hackers were in Australia, some in the US and some in Germany. As computers gained popularity, scenes in Russia and Kazakhstan gained traction too. The NSF in the U.S. controlled pretty much all of the nodes, and the ARPANET was slowly being declassified and made accessible to the public at large (the creation of the Internet as we know it).

First, there was the "word"... the sound

2600. The frequency, in hertz, of the tone that controlled telephone trunk lines. By understanding simple principles of physics, one was able to hack just by modulating the right signals. One of the first hackers to exploit this was "The Mentor", a hacker who had control over the Australian IRS (Internal Revenue Service). Around 1997, Kevin Mitnick started to heavily hack a lot of environments by combining technical skills and social engineering. Afterwards, the world went through a completely quiet time when it comes to computer hacking. The question that naturally presents itself is: what happened, did hackers disappear? No, they didn't! From 1999 to around 2007, United States intelligence agencies started looking closer at the phenomenon. The boom of online traders such as eBay and Craigslist gave birth to script kiddies. How did this work? The real hackers used to sell scripts to script kiddies, who would in turn use these scripts for financial gain. It is very much true that during those years some Nessus and Nmap scans could get you "root", but the fact is that the world evolved, so the attention of the intelligence agencies shifted to a better understanding of the phenomenon. Then, starting in 2007, another transformation took place: the whole world moved to web applications. Obviously, these were taken by storm by the serious hackers, who exploited the apps with techniques such as SQL injection (officially discovered in 1997, actually practiced in the 1980s at some branches of the DIA), cross-site scripting and, last but not least, remote command execution. The vast majority of these actions went under the radar because they were performed by real hackers, not by script kiddies. At some point the real hackers created tools to exploit web applications, and these tools were used by script kiddies. Granted, they were used for financial gain, and this drew the attention of the FBI and some other law enforcement agencies. At this point in time, intelligence agencies started recruiting real hackers in order to understand the scenarios; law enforcement was just coming onto the scene. The expansion of online business attracted more online fraud. Hence, the necessity for cyber protection was born, in a somewhat forced fashion. Let's analyze this claim for a second.

Why are we saying "forced fashion"?
Because major technology vendors felt overwhelmed with the pressure of coping with these attacks. So, who did they hire? They hired people with backgrounds in computer engineering. Obviously, it was a start, but not the best approach. Why? I remember, back in 2008, I had an interlocutor who held various certifications, such as MCSE, CCNA, CCNE and a bunch of other fancy credentials that were enough to put one in the $300K-plus per year income bracket in the US. I took out a USB stick with something called a BIOS-level rootkit. This type of rootkit (advanced remote-control software, usually clandestine) is not detected by anything in the world. No software. Why? Because there is no antivirus or any sort of protection at the BIOS level. By plugging in this USB stick, I had access to absolutely everything on the victim computer. My interlocutor, who was a SAC (Special Agent in Charge at an FBI cybercrime office), looked astonished and humbly stated that his knowledge of offensive maneuvers was the equivalent of a 10-year-old’s when presented with such threats. The conclusion is inevitable, as the famous saying goes: “You reap what you sow!” The law enforcement agencies created a culture that confronted the problem by employing people with the wrong mindset. It was a start. We do not blame them, but we do applaud those who took a departure from the norm. Nowadays we are in possession of a rich market which offers us solutions for every single problem: 1. WAFs (Web Application Firewalls); 2. smart switches; 3. AI (Artificial Intelligence) systems that analyze network traffic; 4. EDR (Endpoint Detection and Response), otherwise known as antivirus; 5. DLP (Data Loss Prevention).   But do these solutions suffice when dealing with a skilled offensive actor? Absolutely not!   They are there to prevent 95% of the known threats. And we say “known threats” because vulnerabilities are also called N-days. If N=0, we have a 0-day: a vulnerability not yet known by anybody except the one who discovered it, and the one who discovers a zero-day may choose not to disclose the discovery. Then we have “N-day” exploits, where N is greater than 0. That is why vendors release patches. However, to the professional hacker these security mechanisms do not matter much. Let me offer a quick example to illustrate the statement. On the second (and sometimes fourth) Tuesday of the month, Microsoft patches its systems. A hacker who watches the patching process can perform binary diffing (a technique that shows clearly which DLLs or executables within the OS were changed) and then write exploit code against the pre-patch versions of those executables. In theory, this is not a zero-day, because Microsoft is patching it; it is an “N-day”, where N equals the number of days it takes to exploit it. However, please take into consideration: in a corporate environment consisting of thousands of machines, how often are systems actually patched? This sounds like a problem. We need to ask ourselves: if we purchase a respectable security solution, how effective is it when faced with a skilled attacker? Unless the solution is administered by people with real skills in the offensive game, it will only be as effective as the people running it. There is also something else worth mentioning. We are being bombarded with news about security incidents every day. However, none of these are the actual real threats. 
The real threats are neither made public nor visible to a culture in which one acquires a couple of security certifications and is deemed an expert. Lately, there is no talk about APTs (Advanced Persistent Threats), because they are not discovered with the existing skill set, not because they do not exist. The message of this article is clear: only skilled attackers playing defenders are able to protect systems from other skilled attackers. If it were otherwise, you wouldn’t hear of security breaches every day. Most of the organizations that are attacked already own protection technology. Unless that technology is deployed correctly by somebody who understands the playground, it is ineffective, and we shall keep on hearing about security incidents.
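As an aside, the binary-diffing workflow described above can be illustrated with a crude, file-level sketch in Python (our illustration, with hypothetical paths; real attackers use instruction-level differs): comparing hashes of system binaries taken before and after a patch reveals exactly which files the vendor touched, which is where an N-day exploit writer would start looking.

import hashlib
from pathlib import Path

def snapshot(directory):
    # Map each file under `directory` to its SHA-256 digest.
    return {p.relative_to(directory): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in Path(directory).rglob("*") if p.is_file()}

def changed_files(before_dir, after_dir):
    # Files present in both snapshots whose contents differ.
    before, after = snapshot(before_dir), snapshot(after_dir)
    return sorted(p for p in before.keys() & after.keys() if before[p] != after[p])

# Hypothetical copies of a system directory taken before and after a patch.
for f in changed_files("pre_patch/system32", "post_patch/system32"):
    print("patched:", f)

Note that hashing only reveals WHICH binaries changed; understanding WHAT changed inside them requires instruction-level comparison.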
    Read more >

    thumbnail

    Cloud storage – how to keep your data safe and secure

    Marius Marinescu - CTO
The demand for cloud-based solutions is increasing all around the world and data is moving to the cloud at a record pace. This includes everything, from secure data storage to entire business processes. Cloud storage is an outsourced solution for keeping data: instead of saving data onto local hard drives, users store it on Internet-connected servers, and data centers manage these servers to keep the data safe and securely accessible. [separator] Enterprises turn to cloud storage solutions to solve a variety of problems: small businesses use the cloud to cut costs, while IT specialists see it as the best way to store sensitive data. Any time you access files stored remotely, you are accessing a cloud. Email is a prime example: most email users don’t bother saving emails to their devices, because those devices are connected to the Internet.   There are three types of cloud solutions, and each offers a unique combination of advantages and drawbacks:   • Public Cloud: These services offer accessibility and security, and are best suited for unstructured data, like files in folders. Most users don’t get a great deal of customized attention from public cloud providers, but this option is affordable.   • Private Cloud: Private cloud hosting services are on-premises solutions over which users retain full control. Private cloud storage is more expensive, because the owner manages and maintains the physical hardware.   • Hybrid Cloud: Many companies choose to keep high-volume files on the public cloud and sensitive data on a private cloud. This hybrid approach strikes a balance between affordability and customization.   All files stored on secure cloud servers benefit from an enhanced level of security. The security credential most users are familiar with is the password, but cloud storage security vendors secure data by other means as well. Some of these include:   • Advanced Firewalls: All firewalls inspect data packets in transit. Simple ones examine only the source and destination metadata; advanced ones verify packet content integrity and map packet contents to known security threats.   • Intrusion Detection: Online secure storage can serve many users at the same time, so successful cloud security systems rely on identifying when someone tries to break into the system. Multiple levels of detection ensure cloud vendors can stop even intruders who break past the network’s initial defenses.   • Event Logging: Event logs help security analysts understand threats. These logs record network actions, and analysts use this data to build a narrative of network events, which helps them predict and prevent security breaches.   • Internal Firewalls: Not all accounts should have complete access to data stored in the cloud. Limiting secure cloud access through internal firewalls boosts security, ensuring that even a compromised account cannot gain full access.   • Encryption: Encryption keeps data safe from unauthorized users. If an attacker steals an encrypted file, the contents remain inaccessible without the secret key; the data is worthless to anyone who does not have it.   • Physical Security: Cloud data centers are highly secure, with 24-hour monitoring, fingerprint locks and armed guards; they are more secure than almost any on-site data center. Different cloud vendors use different approaches for each of these factors. For instance, some cloud storage systems withhold the encryption keys from their users. 
Others give the encryption keys to their users.   Best-in-class cloud infrastructure relies on giving users the ideal balance between access and security: if you trust users with their own keys, they may accidentally give those keys to an unauthorized person. There are many different ways to structure a cloud security framework, but the user must still follow security guidelines when using the cloud. For a security system to be complete, users must adhere to a security awareness training program; even the most advanced security system cannot compensate for negligent users. Security breaches are rarely caused by poor cloud data protection: more than 40% of data security breaches occur due to employee error. Improving user security therefore makes cloud storage more secure. Many factors contribute to user security in the cloud storage system, and many of these focus on employee training:   • Authentication: Weak passwords are the most common enterprise security vulnerability, and many employees write their passwords down on paper, which defeats the purpose. Multi-factor authentication can solve this problem.   • Awareness: In the modern office, every job is a cybersecurity job. Employees must know why security is so important and be trained in security awareness. Users must know how criminals break into enterprise systems and must prepare responses to the most common attack vectors.   • Phishing Protection: Phishing scams remain the most common cyber-attack vector. These attacks attempt to compromise user emails and passwords, after which attackers can move through business systems to reach more sensitive files.   • Breach Drills: Simulating data breaches can help employees identify and prevent phishing attacks, and improve response times when real breaches occur. This establishes protocols for handling suspicious activity and gives feedback to users.   • Measurement: The results of data breach drills must influence future performance. Practice only makes perfect if analysts measure the results and find ways to improve upon them. Quantify the results of simulation drills and employee training to maximize the security of cloud storage.   Employee education helps enterprises successfully protect cloud data. Employee users often do not know how cloud computing works, so explain cloud storage security to your employees by answering the following questions:   • Where Is the Cloud Located?   Cloud storage data is located in remote data centers, which can be anywhere on the planet. Cloud vendors often store the same data in multiple places; this is called redundancy.   • How Is Cloud Storage Different from Local Storage?   Cloud vendors use the Internet to transfer data from a secure data center to employee devices, so cloud storage data is available everywhere.   • How Much Data Can the Cloud Store?   Storage in the cloud is virtually unlimited, whereas local drive space is limited. Bandwidth – the amount of data a network can transmit per second – is usually the limiting factor: a high-volume, low-bandwidth cloud service will run too slowly for meaningful work.   • Does the Cloud Save Money?   Most companies invest in cloud storage to save money compared to on-site storage, as improved connectivity cuts costs. Cloud services can also save money in disaster recovery situations.   • Is the Cloud Secure and Private?   Professional cloud storage comes with state-of-the-art security, but users must follow the vendor’s security guidelines; negligent use can compromise even the best protection.   
• What Are the Cloud Storage Security Best Practices?   Cloud storage providers store files redundantly, meaning they copy files to different physical servers placed far away from one another, so that a natural disaster could destroy one data center without affecting another one hundreds of miles away.   Consider a fire breaking out in an office building. If the structure contains paper files, those files will be the first to burn, and if the office’s electronic equipment melts, the file backups are gone too. However, if the office saves its documents in the cloud, this is not a problem: copies of every file exist in multiple data centers located throughout the region, and the office can move into a building with Internet access and continue working. Redundancy makes cloud storage platforms highly resistant to failure; on-site data storage is far riskier. Large cloud vendors use economies of scale to keep user data intact: they measure hard drive failure rates and compensate for them through redundancy. Even without redundant copies, only a small percentage of cloud vendor hard drives fail. These companies rely on storage for their entire income, so they take every precaution to ensure users’ data remains safe. Cloud vendors also invest in new technology: advances improve security measures in cloud computing, and new equipment improves results. This makes cloud storage an excellent option for securing data against cybercrime. With a properly configured cloud solution in place, even ransomware loses much of its threat, because you can wipe the affected computers and start fresh. Disaster recovery planning is a critical aspect of cloud storage security.
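To make the encryption point above concrete, here is a minimal client-side encryption sketch in Python (our illustration, assuming the third-party cryptography package is installed; the file contents are hypothetical). Files are encrypted before upload, so the provider, or an attacker who breaches it, ever sees only ciphertext; everything then hinges on who keeps the key, which is exactly the key-custody trade-off discussed above.

from cryptography.fernet import Fernet

key = Fernet.generate_key()              # keep this key OUT of the cloud
fernet = Fernet(key)

plaintext = b"Q3 payroll report"         # hypothetical sensitive content
ciphertext = fernet.encrypt(plaintext)   # this is what gets uploaded

# An attacker who steals the ciphertext alone learns nothing useful;
# only the key holder can recover the original data.
assert fernet.decrypt(ciphertext) == plaintext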
    Read more >

    thumbnail

    August 1996: Netscape Creates Navio to Compete with Microsoft

    Marius Marinescu - CTO
Netscape Communications Corp. created the Navio Communications Inc. subsidiary to develop internet software for the consumer market - anything from cars to games consoles - aimed at non-PC users, but based on stripped-down versions of its Navigator web browser software. “The aim is to go where the PC can’t and is not likely to go”, said Netscape at the time. And where the PC couldn’t and wouldn’t go, Netscape obviously hoped Microsoft Corp. couldn’t either and wouldn’t try to follow. [separator] Navio signed agreements with IBM Corp., Oracle Corp., Nintendo Co. Ltd., Sony Corp., Sega Enterprises Ltd. and NEC Corp. The last four were clearly masters of the consumer marketplace, while IBM and Oracle were not such obvious participants. As to the type of products, details were sketchy, with Navio insisting it was just a company announcement; none of the partners was even present.   Navio’s chief executive, Wei Yen, identified three areas in which the products - due sometime in 1997 - would be used. The first was television-centric environments, such as game consoles, set-top boxes and Digital Video Disc (DVD) systems. The second was communications devices, including Personal Digital Assistants (PDAs), cellular and other telephones; Yen said this category might merge into a single device before long (you have to give credit to that vision, as it was fulfilled 11 years later, in 2007, by Apple with the first iPhone). And lastly, the information terminal, by which Yen meant network computers, kiosks and other home appliances. He said the first batch of all three categories of products was likely to ship around the same time in 1997.   The Navio software was based on Navigator technology and ran on devices with embedded, real-time operating systems or no operating system at all, supporting all the standards that Navigator supported. Navio software was modular and dynamically downloadable. Netscape was readying a modular version of Galileo, the next version of Navigator: the full version was due at the end of the year, with the modular version early the next. The Navio modular software was at least connected with the modular Navigator work, according to the company; in other words, if a specific Navigator module was already there, it would not be re-written for Navio. The plan was for the Navio browsers to reformat input for televisions and for devices, such as phones, that only have space for a few lines of text, with the Navigator team providing the expertise on Java, security and objects. The whole software stack would be extensible via plug-ins.   Marc Andreessen, Netscape’s co-founder and chief technology officer, reckoned the market for the Navio software was at least 500 million users in five years’ time. If all the PCs - about 240 million in 1996 - phones, consoles, pagers, cars, televisions and practically everything else that moved and everything that didn’t were included, then that number was clearly conservative, and pretty meaningless. But Netscape was fast out of the blocks in signing up all the games console companies that mattered, together with IBM and Oracle, as well as some others that it declined to talk about, even though the deals were not thought to be exclusive. Yen claimed at the time that the internet would be as important as electricity to consumer devices in the next century, and Andreessen predicted an internet device on every desk and in every backpack, eventually (again, credit to that foresight). 
Andreessen said that because of the extra advertising opportunities, the potential for giving consumer internet devices away for free was even greater than with cellular phones, which were already given away in many markets and were themselves ideal internet devices. He wondered whether some sort of consumer internet access device might be bundled on the front of a magazine, or even come with a pizza box.   Oracle bought a majority stake in Navio in 1997. The company was assimilated into the huge Oracle machine, and its dream of developing a pervasive ecosystem of inexpensive internet-connected devices based on the Navio browser/OS never took flight. It never released a product - device, browser, operating system or otherwise - and it never published a roadmap for its supposed products and third-party integrations.   Their unfulfilled dream is nowadays called IoT :)
    Read more >

    thumbnail

Brief History Of Artificial Intelligence, Part II – “Theoretical Foundation”

    Stefan Iliescu - CDS
  As we anticipated in the first episode of this small foray into the history of AI, in this second part we will try to present some essential theoretical achievements in the field. The most appropriate way, we believe, is to examine two algorithms representative of their time in computational and cognitive science. And who better to start with than John McCarthy, the man who introduced the term "Artificial Intelligence" (at the famous Dartmouth conference in the summer of 1956, which also marked the beginning of AI as a field)?   [separator]   One of the most influential American researchers of the time, McCarthy contributed massively to related fields such as mathematics, logic, information technology, cognitive science and artificial intelligence. As it is very difficult to mention all his major contributions here, we limit ourselves to listing some of them:   •  the creation in the late 1950s of the LISP language which, based on lambda calculus, became the preferred language of the AI community, and the invention of the "garbage collector" concept for LISP in 1959 •  participation in the committee that gave birth to the ALGOL 60 language, for which he proposed in 1959 the use of recursion and conditional expressions •  significant contributions in defining three of the very earliest time-sharing systems (the Compatible Time-Sharing System, the BBN Time-Sharing System and the Dartmouth Time-Sharing System) •  being the first promoter of the idea of a computer utility, a concept prevalent from the '60s to the '90s that has found a new youth today in various forms: cloud computing, application service providers, etc.   But his genius (for which McCarthy is referred to as the "Father of AI") came to light even better through papers such as "Artificial Intelligence, Logic and Formalizing Common Sense", "Making Robots Conscious of their Mental States", "The Little Thoughts of Thinking Machines", "Epistemological Problems of Artificial Intelligence", "On the Model Theory of Knowledge", "Creative Solutions to Problems", or "Appearance and Reality: A challenge to machine learning" - papers that we hope will arouse the curiosity of many of our readers.     From the article "Free Will - Even for Robots", we present below a sample of his deterministic approach to free will. The aim was to propose a theory of Simple Deterministic Free Will (SDFW) in a deterministic world. The theory splits the mechanism that determines action into two parts: first computing the possible actions and their consequences, and then deciding which action is preferred. AI requires the formal expression of such phenomena through the mathematical logic of situation calculus. The equation:  

    s’ = Result(e, s)

  asserts that s’ is the situation that results when event e occurs in situation s. Since there may be many different events that can occur in s, and the theory of the function Result does not say which one occurs, the theory is non-deterministic. If the event needs some preconditions in order to occur, we get the formula:  

    Precond(e, s) → s’ = Result(e, s).

      McCarthy added a formula Occurs(e, s) to the language that can be used to assert that the event e occurs in situation s. We have:  

    Occurs(e, s) → (Next(s) = Result(e, s)).

  Adding occurrence axioms makes a theory more deterministic, by specifying that certain events occur in situations satisfying designated conditions. The theory still remains partly non-deterministic, but if there are occurrence axioms specifying what events occur in all possible situations, then the theory becomes deterministic (i.e. has linear time).     We can now give a situation calculus theory for SDFW illustrating the role of a non-deterministic theory in determining what will deterministically happen, i.e. by saying what choice a person or machine will make.   In the following formulas, a lower-case term represents a variable and a capitalized term represents a constant. Let us assume that an actor has a choice of just two actions a1 and a2 that may be performed in situation s. This means that the event Does(actor, a1) or Does(actor, a2) occurs in situation s, according to which of Result(Does(actor, a1), s) or Result(Does(actor, a2), s) the actor prefers.   The formulas that declare that an actor will do the preferred action are

    Occurs(Does(actor, Choose(actor, a1, a2, s)), s),                                         (1)

and

    Choose(actor, a1, a2, s) = if Prefers(actor, Result(a1, s), Result(a2, s)) then a1 else a2.                                         (2)

      Prefers(actor, s1, s2) means that the actor prefers s1 to s2 (and therefore makes the corresponding choice), and this is what makes the theory deterministic. Now let us take a non-deterministic theory of “greedy John”:

    Result(A1, S0) = S1,
    Result(A2, S0) = S2,
    Wealth(John, S1) = $2.0 × 10⁶,
    Wealth(John, S2) = $1.0 × 10⁶,
    (∀s s’)(Wealth(John, s) > Wealth(John, s’) → Prefers(John, s, s’)).                                         (3)

      It is obvious that greedy John prefers the situation in which he has the greater wealth, so the right action takes him from situation S0 to situation S1. From equations (1)-(3) it can be inferred that

    Occurs(Does(John, A1), S0).                                         (4)
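To make this concrete, here is a minimal Python sketch of the “greedy John” theory (our illustration, not McCarthy’s notation): the Result function is the non-deterministic part that only maps actions to outcomes, while Choose and Prefers supply the deterministic part that says what actually occurs.

RESULT = {("A1", "S0"): "S1", ("A2", "S0"): "S2"}   # Result(a, s) = s'
WEALTH = {"S1": 2.0e6, "S2": 1.0e6}                 # Wealth(John, s)

def prefers(s1, s2):
    # (3): John prefers any situation with greater wealth.
    return WEALTH[s1] > WEALTH[s2]

def choose(a1, a2, s):
    # (2): if Prefers(Result(a1, s), Result(a2, s)) then a1 else a2.
    return a1 if prefers(RESULT[(a1, s)], RESULT[(a2, s)]) else a2

# (1)/(4): the preferred action is what occurs in situation S0.
print("Occurs(Does(John, %s), S0)" % choose("A1", "A2", "S0"))   # -> A1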

  Note that only two actions were used, in order to keep the formula for Choose as short as possible. This illustrates briefly the role of the non-deterministic theory of Result within a deterministic theory of what occurs: equation (2) uses the non-deterministic theory of Result to assess which action leads to the better situation, while equation (1) is the deterministic part that asserts which action actually occurs.   McCarthy draws four conclusions:   •  “Effective AI systems, e.g. robots, will require identifying and reasoning about their choices once they get beyond what can be achieved with situation-action rules (i.e. chess programs always have).” •  “The above theory captures the most basic feature of human free will.” •  “Result(a1, s) and Result(a2, s), as they are computed by the agent, are not full states of the world but elements of some theoretical space of approximate situations the agent uses in making its decisions. Part of the problem of building human level AI lies in inventing what kind of entity Result(a, s) shall be taken to be.” •  “Whether a human or an animal uses simple free will in a type of situation is subject to experimental investigation.”   We can consider that formulas (1) and (2) illustrate a person making a choice. They say nothing about the person knowing that it has choices, or preferring situations in which more choices are available; for situations where we need to take such phenomena into consideration, SDFW has to be extended - it is a partial theory. The importance of this theory is enormous, both in terms of the interest it brought to the understanding of human cognitive processes and as the aggregated result of some of the essential minds of the time, who supported McCarthy in its realization.   The second algorithm proposal comes from the well-known Alan Turing. One of the pioneers and most prominent promoters of theoretical computer science, Alan Turing was a British mathematician, logician, cryptanalyst, philosopher and theoretical biologist. Perhaps his best-known contribution to the field is the Turing Machine - a mathematical model that has received over time numerous theoretical variants and alternatives, as well as practical implementations. To fully understand the context of its creation, it must first be mentioned that in the 1930s there were no computers, but this did not prevent the scientists of the time from tackling extremely bold theoretical problems, such as the “Halting Problem”.   The Turing Machine has the following parts:   1. an infinite roll of tape on which symbols can be written, deleted and rewritten 2. the head, which moves left and right on the tape as the symbols are written, rewritten or deleted (much like the head of a hard disk drive) 3. the state register, a memory area which stores the state of the machine   The machine can read the symbol on the tape at its current position, write a symbol and then reposition the head to the left or right. Although it implements only these simple routines, we will show in what follows that this model - and therefore the Turing Machine - provides the theoretical basis for implementing any algorithm in any known language. The machine’s instruction table is presented below.  
    Current state Current symbol Action Move Next state
    S0 “0” Write “1” Right S1
    S0 “1” Write “0” Right S1
    S1 “0” Write “0” Right S0
    S1 “1” Write “1” Right S0
      The first two columns determine the input combinations that the machine can receive, consisting of the state of the machine and the symbol read. The next three columns determine the action performed by the machine, consisting of the symbol to be written, the direction of movement of the head and the future state of the machine. For example, the first data row in the table above tells us that, being in state S0 with the head positioned on the symbol “0”, the machine will write the symbol “1” in that position, after which it will move Right, transitioning to state S1.   The analysis of the instruction table shows the following:   •  in state S0, the symbols 0 and 1 are interchanged •  in state S1, the symbols 0 and 1 remain the same •  based on the two points above, we deduce that a string of the form “111111” will be processed into the string “010101”. Let’s take a more complex example, which allows us to perform additions like “000+00=00000”, the equivalent of “3+2=5”. For this we will consider the following instruction table - a variant of the instruction table above.  
    Current state Current symbol Action Move Next state
    S0 “0” Write “Blank” Right S1
    S0 “+” Write “Blank” Right S5
    S1 “0” Write “0” Right S1
    S1 “+” Write “+” Right S2
    S2 “0” Write “0” Right S2
    S2 “Blank” Write “0” Left S3
    S3 “0” Write “0” Left S3
    S3 “+” Write “+” Left S4
    S4 “0” Write “0” Left S4
    S4 “Blank” Write “Blank” Right S0
      Applying the calculation method from the previous example, we can see that the machine performs the following major steps:   •  STEP 1: replaces the first “0” (in the group of three “0”s) with a blank space •  STEP 2: moves to the end of the string (past the group of two “0”s) •  STEP 3: adds a “0” at the end of the string (after the last “0”) •  STEP 4: returns to the beginning of the string and resumes STEP 1 •  STEP 5: if the first symbol is “+”, it is removed and the algorithm ends successfully.   Let us follow the addition step by step. The initial input (written on tape) is “000+00”, and once the machine is started, the head is positioned on the first “0”, in state S0. S0 has two transitions, one for “0” and another for “+”: the machine reads the first “0”, replaces it with a blank space and moves the head one position to the right, into S1. From S1, the machine again has two transitions. The first of these is a loop: it rewrites every “0” as “0” while repeatedly moving the head to the right, keeping the machine in S1. The transition to S2 is made once the “+” is passed; in S2 the head keeps moving right over the “0”s until it finds a blank space. Once the blank space is found, the head replaces it with a “0” and moves to the left, towards S3. Similarly, in S3 it jumps over all the “0”s again until the head reaches a “+”, this time moving to the left. Once “+” is reached, the machine moves a space to the left and transitions to state S4. In S4 the machine jumps over all the “0”s and, when it reaches an empty space (i.e. a space past the beginning of the string), moves to the right and back to S0 - and the entire loop repeats. In effect, the machine replaces a “0” in front of the “+” with a blank space, moves its head to the end of the string and adds a “0” there; then it goes back to the first “0” on the left and repeats. It keeps doing this until all the “0” characters to the left of “+” have been replaced with blank spaces.   •  Loop 1: “000+00” •  Loop 2: “00+000” •  Loop 3: “0+0000” •  Loop 4: “+00000” •  Loop 5: “00000”.   After four loops, the machine is back in S0, but this time the head reads a “+”. In the fifth loop the machine replaces the “+” with an empty space and moves to S5, the final state. The conclusion is that, without a real computer, through a simple set of parts and rules, we can build a machine that can calculate! And it will work for strings of “0”s of any length. The algorithms presented above were given in a simplified form, and perhaps more examples would have been needed for a thorough understanding of them. We hope, however, that the chosen examples will arouse your curiosity to read more about them and their authors. We will return with a third and last part of this short history of AI, with some examples of the most representative achievements in the field, real turning points in the human-machine relationship.  
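For readers who want to experiment, here is a minimal Turing machine simulator in Python (our illustrative sketch; the transition table is transcribed from the addition example above, including the S2-on-blank row, with a space character standing in for “Blank”):

def run_turing_machine(tape_str, rules, start="S0", halt="S5", blank=" "):
    # Run the machine until it reaches the halt state.
    tape = dict(enumerate(tape_str))    # sparse tape: position -> symbol
    head, state = 0, start
    while state != halt:
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip()

# (state, symbol) -> (symbol to write, head move, next state)
ADDITION_RULES = {
    ("S0", "0"): (" ", "R", "S1"),
    ("S0", "+"): (" ", "R", "S5"),
    ("S1", "0"): ("0", "R", "S1"),
    ("S1", "+"): ("+", "R", "S2"),
    ("S2", "0"): ("0", "R", "S2"),
    ("S2", " "): ("0", "L", "S3"),
    ("S3", "0"): ("0", "L", "S3"),
    ("S3", "+"): ("+", "L", "S4"),
    ("S4", "0"): ("0", "L", "S4"),
    ("S4", " "): (" ", "R", "S0"),
}

print(run_turing_machine("000+00", ADDITION_RULES))   # prints "00000"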
    Read more >

      thumbnail

      Enigma machine: the device that changed WWII

      Cristian Gal - CSO
      The Enigma machine is the creation of Dr. Arthur Scherbius. The device was capable of transcribing coded information for secure communications. In 1923 he set up his Chiffriermaschinen Aktiengesellschaft (Cipher Machines Corporation) in Berlin to manufacture the product. [separator] The German military, however, produced its own versions. The German navy introduced its version in 1926, followed by the army in 1928 and the air force in 1933. The military Enigma allowed an operator to type in a message, then scramble it by means of three to five notched wheels, or rotors, which displayed different letters of the alphabet. The receiver needed to know the exact settings of these rotors in order to reconstitute the coded text. The Poles managed to crack Enigma traffic by reproducing the internal parts of the machine, but the Germans’ continual changes made decoding the later military versions increasingly difficult. During World War II, the military versions of Enigma were heavily used by the Germans, who were convinced it couldn’t be decoded. The Allies established a special division at Bletchley Park, Buckinghamshire, whose task was to decode German communications. The best mathematicians were recruited there and, building on the intelligence from the Poles, they built early computing machines to work through the vast number of permutations in Enigma settings. In the meantime, the Germans kept upgrading their machine, improving the hardware used for setting the code in each device; the use of daily codes also made the Allies’ job a lot harder. One of the brilliant mathematicians involved in decoding Enigma was Alan Turing. Born in 1912 in London, he studied at Cambridge and Princeton universities. Turing played a key role in inventing, along with fellow code-breaker Gordon Welchman, a machine known as the “Bombe”. This device helped to significantly reduce the work of the code-breakers. From mid-1940, German Air Force signals were being read at Bletchley, and the intelligence gained from this was quite helpful. From 1941, messages sent using the army’s Enigma were read as well. The version used by the German navy, on the other hand, was not that easy to crack. Capturing Enigma machines and code books from different German units helped decipher communications, but with a considerable delay; to compensate for this, the Allies started hunting ships and planes that carried Enigma codes in order to decode communications faster. In July 1942, Turing developed a complex code-breaking technique he named “Turingery”. This method helped the team at Bletchley understand another device that enciphered German strategic messages of high importance, the “Lorenz” cipher machine. The Bletchley division’s ability to read these messages contributed greatly to the Allied war effort. Alan Turing’s legacy came to light long after his death. His impact on computer science was widely acknowledged: the annual “Turing Award” has been the highest accolade in that industry since 1966. But the work done at Bletchley Park, and Turing’s role there in cracking the Enigma code, was kept secret until the 1970s; the full story was not known until the 1990s. It has been estimated that the efforts of Turing and his fellow code-breakers shortened the war by several years. What is certain is that they saved countless lives and helped determine the course and outcome of the conflict.
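To give a feel for the rotor principle described above, here is a toy rotor cipher in Python (our illustrative sketch, not a faithful Enigma model: the reflector, plugboard and notch-driven stepping of the slower rotors are all omitted, and the wirings are treated simply as fixed substitutions). Because the fast rotor steps after every letter, identical plaintext letters encrypt differently, and a receiver with the same wirings and starting offsets can run the machine in reverse:

import string

ALPHABET = string.ascii_uppercase
ROTORS = ["EKMFLGDQVZNTOWYHXUSPAIBRCJ",     # example rotor wirings
          "AJDKSIRUXBLHWTMCQGZNPYFVOE",     #  (each a permutation
          "BDFHJLCPRTXVZNYEIWGAKMUSQO"]     #   of A-Z)

def crypt(text, offsets, encode=True):
    offs = list(offsets)                     # starting rotor positions
    out = []
    for ch in text.upper():
        if ch not in ALPHABET:
            out.append(ch)
            continue
        if encode:                           # pass through the rotors
            for wiring, o in zip(ROTORS, offs):
                ch = wiring[(ALPHABET.index(ch) + o) % 26]
        else:                                # undo them in reverse order
            for wiring, o in zip(reversed(ROTORS), reversed(offs)):
                ch = ALPHABET[(wiring.index(ch) - o) % 26]
        out.append(ch)
        offs[0] = (offs[0] + 1) % 26         # step the fast rotor
    return "".join(out)

secret = crypt("ATTACK AT DAWN", offsets=[3, 7, 11])
print(secret, "->", crypt(secret, offsets=[3, 7, 11], encode=False))

Decoding works only because the receiver knows the rotor wirings and the starting offsets, which is precisely why capturing machines and daily code settings mattered so much at Bletchley.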
      Read more >

      thumbnail

      How Hackers Benefit from the Coronavirus Crisis

      Sergiu Popa - Director of Cybersecurity Research
      Computer hacking - a fascinating subject populated with tales from the scholars of trivia, who heard about hacking on TV, saw it in a movie or acquired a couple of certifications which, they believe, entitle them to call themselves hackers. [separator] We give you hacking insights based on experience, not hypothetical scenarios created in labs. How can a hacker exploit corona? In the times of the COVID-19 crisis, forecasts estimated that cyber-crime would increase by 400%. And these estimates went low: it actually increased far more than that.   Let’s delve into the subject. Social engineering is probably the most potent way of delivering attack payloads to corporate environments whose users’ only training consists of less than mentally challenging security mantras (change your password, don’t click on these links, click on these other links, etc.). Furthermore, the psychological nature of a crisis such as the one we are facing now excites, at the very least, a basic human trait: curiosity. Throw in curiosity and a cunning manner of delivering a message, and the result is called “victims”. Let’s analyze the following examples, which we introduce in a somewhat random fashion, but which will make sense in the end.   The crisis has pushed companies to adopt working from home as the way to move forward. It is self-evident how this move can be exploited by the clever hacker. Hackers identify the first element that creates an exploit: confusion. A study indicates that oral communication, when passed along a chain of more than 5 people, is diluted to 20% or less of its content. It is quite easy to imagine an IT department training: “Guys, do not click on phishing links. No spamming links. We may update our VPN to incorporate multi-factor authentication.” Most people are unable to identify phishing links. It can be quite hard sometimes, as some of these links are actually legitimate, but their purpose is to lead to spear-phishing. Please consult https://www.phishtank.com/ and test your phishing “street smarts”. Then, people are told that they may update their VPN. Well, that right there can make all hell break loose. If users receive an email from their IT department asking them to download a new VPN client, 95% of them will attempt to do it, while only 30% of that 95% will succeed in installing the malicious package (for lack of computer literacy when it comes to installing programs).   Imagine the next scenario: a hacker wants to break into a bank, but its security is quite strong, and he may not want to create mathematical models of deception for its network analysis software. What can he do? Quite simple. All the employees’ profiles are listed on LinkedIn. Great. What’s next? Gathering social media information on these people, he can somehow obtain a score of who is inclined to a degree of hypochondria. Then he emails them, posing as the hospital, telling them that according to their records there is a high probability that they may be infected with COVID-19, and that they may want to register for a free COVID test at its website, https://ExampleHospital.com, where they will be asked to fill in their address, DOB, phone number and email, and eventually fax in or upload a copy of their NI document. The skilled operator (hacker) will now go and brute-force the Wi-Fi password of their home network. 
Or he might get more creative, and eventually offer some chat or support software which enables the victims to talk to others in their situation or consult with a live doctor. Of course, the “get-you-well” software is nothing more than a trojan, a RAT (remote administration tool).   This is just a casual example of what a hacker might do. But let’s consider the following scenario: the employees of company X receive an email from the IT department stating that their picture has to be uploaded to the new SharePoint directory, for the creation of a work-from-home directory and the distribution of COVID-19 testing kits. The attachment carrying the “upload form” might be ransomware, adware or some other malware. Usually, the common criminal will send ransomware. The average criminal will send some malware/adware, and the smart criminal will send an APT, whose purpose is to lie dormant and, for instance, quietly redirect terabytes of traffic through link-shortening services to the attacker’s benefit - a situation that can go on for years.   As we can see, the COVID-19 crisis, if played on the right soft psychological side of people, can have devastating effects on a company’s security systems. As always, knowledge is power. At Metaminds, we pay close attention to every requirement our clients express and make sure we address their concerns with flawless, custom-designed solutions that ensure the safety of their operations.
      Read more >

      thumbnail

      Telstar Satellite: the Launch of the Modern World

      Marius Marinescu - CTO
      Trans-Atlantic television and other communications became a reality when the Telstar communications satellite was launched. A product of AT&T Bell Laboratories, the satellite was the first orbiting international communications satellite that sent information to tracking stations on both sides of the Atlantic Ocean. Initial enthusiasm for making phone calls via the satellite waned after users realized there was a half-second delay, the result of the 25,000-mile transmission path. [separator] Even if nowadays a phone call seems like a regular thing, IT professionals dealt with many difficulties in the past to make fixed phone calls a reality. Today we are concerned with making our conversations safer by closing the various security breaches we are confronted with, but back then people had other issues. Quick recap for the millennials: long before everyone had a smartphone or two, the implementation of a telephone was quite different from today’s. Most telephones had real, physical buttons. Even more bizarrely, these phones were connected to other phones through physical wires. Weird, right? These were called “landlines”, a technology that is still employed in many households around the world. It gets even more bizarre. Some phones were wireless (quite like your smartphone), but they couldn’t get a signal more than a few hundred feet away from your house. These were “cordless telephones”. Hackers have been working on deconstructing the security behind these cordless phones for a few years now, and found that they aren’t secure at all. While nothing is 100% secure, many people thought that DECT and 5.8 GHz phones were safe, at least more so than the cordless phones of the ’80s and ’90s. While DECT has been broken for a long time, 5.8 GHz phones were considered safer than 900 MHz phones, as scanners are harder to come by in the microwave bands: very few people have a duplex microwave transceiver sitting around. But everything is bound to happen eventually. With the advent of cheap SDR (software-defined radio), hackers demonstrated that listening to and intercepting any such phone call they want is actually possible. Using a duplex microwave transceiver (quite cheap, at ~$300, for the intended purpose), they freely explored the radio system inside these cordless phones and found that the phones technically didn’t operate only in the 5.8 GHz band: control signals, such as pairing a handset to a base station, happened at 900 MHz. Here, a simple replay attack was enough to get the handset to ring. It gets worse: simply by looking at the 5.8 GHz band with the transceiver, they found an FM-modulated voice channel when the handset was on. That’s right: the phone transmits the voice signal without any encryption whatsoever. This isn’t the first time hackers have found a complete lack of security in cordless phones. A while ago, they explored DECT 6.0, a cordless phone standard also used for PBX and VoIP; there was no security there, either. It would be chilling if landlines were as widespread today as they were some 20 years ago, because the tools to perform a landline hack are freely available and thoroughly documented.
      Read more >

      thumbnail

‘Tron: Legacy’ – a Data Project Turned Blockbuster

      Cristian Gal - CSO
      Few people realize that making movies like Tron: Legacy is also a huge data project. A movie with that much computer-generated content produces an enormous amount of data, an amount now measured in petabytes. Also, because the computer-generated content is integrated into the filmed content, the CGI companies involved usually work at some point with a more or less finished version of the movie. That makes them a prime target for hacking attempts. [separator] The HBO hack of 2017, when Game of Thrones scripts and episodes of Curb Your Enthusiasm and Ballers were released online before their air dates, caused chaos for the premium cable network. The hackers were motivated by greed: the organization that went by the name Mr. Smith was seeking a ransom in the range of $6 million to prevent the release of this highly sensitive information. And this data breach is far from the first the entertainment industry has faced.   The Sony hack of 2014, in which thousands of confidential company documents and emails were released, had a long-lasting impact on the company. It resulted in the ouster of Amy Pascal, head of Sony Pictures Entertainment, turned “The Interview” into a box-office bomb, triggered a slew of lawsuits and, in general, caused a lot of pain and embarrassment to a lot of people.   And then there’s the release of Quentin Tarantino’s “The Hateful Eight” script. The Oscar-winning director closely guards his material, and when it turned out that someone had leaked an early draft of the Western whodunit, Tarantino actually considered shelving the project altogether. Even though he went on to make the movie after all, the episode underscores an issue that many in Hollywood face, whether working in production or at a studio: how to ensure the security of information and intellectual property?   A movie or TV production can employ hundreds of people, and with each production there are countless documents and files - scripts, budgets, payroll documents and video - that could be very detrimental to the production and its staff if leaked. Knowing hackers are looking for high-value targets, having a strong data security system in place is of the utmost importance. Unfortunately, most in the entertainment industry - be they productions or studios - aren’t using the enterprise-grade protection they need to keep their information safe. Productions especially tend to use the most rudimentary of storage and security services.   To secure such a great amount of movie data against hacking and premature leaking, Hollywood had to embrace digital security. Like many other industries before it, Hollywood turned to a new class of technology companies that for the last few years have been offering ways to manage the data slipping onto employees’ personal smartphones and Internet storage services. They wrap individual files with encryption, passwords and monitoring systems that can track who is doing what with sensitive files.   The most sensitive Hollywood scripts were - and, in many cases, still are - etched with watermarks, or printed on colored and even mirrored paper to thwart photocopying. Letter spacing and minor character names were switched from script to script to pinpoint leakers. Plot endings were left out entirely. The most coveted scripts are still locked in briefcases and accompanied by bodyguards whose sole job is to ensure they don’t end up in the wrong hands.   
But over the last decade, such measures have begun to feel quaint. Watermarks can be lifted. Color copiers don’t care what color a script is. Even scripts with bodyguards linger on a computer server somewhere. And once crew members started using their personal smartphones on set, people started leaving with everything they had created for the movie production.   So the movie studios had to employ security solutions that give file creators the ability to manage who can view, edit, share, scan and print a file, and for how long. If hackers steal such a file off someone’s computer, all they will see is a bunch of encrypted characters. Also, some Hollywood studios are disconnecting their movie editing software from the Internet, a process known as “air-gapping”, so that if hackers breach their internal network, they can’t use that access to steal the data.   One of the quirkier features some studios use is a digital spotlight view that mimics holding a bright flashlight over a document in the dark. Everything beyond the moving circular spotlight is unreadable. The feature makes it difficult for anyone peering over your shoulder, or a hacker pulling screenshots of your web browser, to read the whole document.
      Read more >

      thumbnail

      Brief History Of Artificial Intelligence, Part I – “Early Contributors”

      Stefan Iliescu - CDS
        In this first article of our series dedicated to the brief history of AI, we will focus on the essential achievements of the pre-computer age. The dominant method of research at the time was to look to nature for ideas for solving hard problems. In the absence of an understanding of how natural systems function, the research could only be experimental. So the most daring researchers approached the creation of mobile automatons (pre-robots) as the first attempt to create artificial intelligence.   [separator]   Grey Walter’s “Tortoise”   Born in the United States but educated in England, Walter failed to obtain a research fellowship at Cambridge and pursued neurophysiological research in various places around the world. Heavily influenced by the work of the Russian physiologist Ivan Pavlov and of Hans Berger (the inventor of the electroencephalograph for measuring electrical activity in the brain), Walter made several discoveries in the field of brain topography using his own version of the EEG machine. The most notable was the introduction of triangulation as a method of locating the strongest alpha waves within the occipital lobe, thus facilitating the detection of brain tumors or lesions responsible for epilepsy. He pioneered EEG-based brain topography with a multitude of spiral-scan CRTs coupled to high-gain amplifiers.   Walter remained famous as an early contributor to the AI field mainly for building some of the first mobile automatons in the late ’40s, named tortoises (after the tortoise in “Alice in Wonderland”) because of their slow speed and shape. These battery-powered automatons were prototypes to test his theory that a small number of cells can induce complex behavior and choice. As a very simple model of the nervous system, they implemented a two-neuron architecture by incorporating only two motors, two relays, two valves, two condensers and one sensor (ELSIE had a light sensor and ELMER a touch sensor). ELSIE scanned the surroundings continuously with its rotating photoelectric cell until a light source was detected. If the light was too bright, it moved away; otherwise, ELSIE moved toward the light source. ELMER explored the surroundings as long as it didn’t encounter any obstacles; otherwise, ELMER retreated after the touch sensor registered a contact. Both versions of the tortoise moved toward an electric charging station when the battery level was low.   Walter noted that the automatons “explore their environment actively, persistently, systematically, as most animals do”. This is what happened most of the time, except when a light source was attached to ELSIE’s nose: the automaton started “flickering, twittering and jigging like a clumsy Narcissus”, and Walter concluded that this was a sign of self-awareness. Even though many scientists today believe that robots will not achieve self-awareness, Walter’s experiment succeeded in proving that complex behaviours can be generated using only a few components and that biological principles can be applied to robots.   Subsequent developments, some remaining only in a theoretical phase, promised substantial improvements in the direction of intelligent behaviour, with Walter trying to add “learning” skills, even if in a primary form such as Pavlovian conditioning. 
For example, after the incorporation of an auditory sensor, blowing a whistle immediately before contact between ELMER and an obstacle would cause ELMER to subsequently perform the obstacle-avoidance maneuver before contact occurred, if it “heard” the whistle. Although Walter apparently carried out this experiment, it seems that its echo in the scientific world of the time was hardly noticeable.   Johns Hopkins’ “Beast”   Another well-known mobile automaton is the “Beast”, a ’60s project of a team of engineers from the Johns Hopkins University Applied Physics Laboratory, including Ron McConnell (Electrical Engineering) and Edwin B. Dean, Jr. (Physics). With a height of half a meter, a diameter of over 200 cm and a weight of almost 50 kilograms, the “Beast” was built to perform only two tasks: explore the surroundings and survive on its own. Initially equipped with physical switches, the “Beast” moved “freely”, following the white walls of the laboratory and avoiding potential obstacles it encountered. When the battery level was low, the “Beast” “looked for” a black wall socket and plugged itself in for power. Without a central processing unit, its control circuitry consisted of multiple transistor modules that controlled analogue voltages; three types of transistors allowed three classes of tasks:   – making a decision when a sensor was activated, by emulating Boolean logic; – specifying a period in which to do something, by creating timing gates; – controlling the pressure for the automaton’s arm and the charging mechanism, by using power transistors.   A second version also received a photoelectric cell in addition to an improved sonar system. With the help of two ultrasonic transducers, the “Beast” could now determine the distance, its location within the perimeter and the obstructions along the path, thus exhibiting significantly more complex “behaviour” than Walter’s tortoises. Performances such as stopping, slowing down or bypassing an obstruction, or recognising doors, stairs, installation pipes, hanging cables and people and taking the appropriate actions, are perhaps the most significant technical achievements of the pre-computer age.   In his response to Bill Gates, who predicted in 2008 that the “next” hot field would be robotics, McConnell humorously stated about their work from the ’60s: “The robot group built two functioning prototypes that roamed and “lived” in the hallways of the lab, avoiding hazards such as open stairwells and doors, hanging cables and people while searching for food in the form of AC power on the walls to recharge their batteries. They used the senses of touch, hearing, feel and vision. Programming consisted of patch cables on patch boards connecting hand-built logic circuits to set up behaviour for avoidance, escape, searching and feeding. No integrated circuits, no computers, no programming language. With a 3-hour battery life, the second prototype survived over 40 hours on one test before a simple mechanical failure disabled it.”   Ashby’s “Mobile Homeostat”   Indeed, the most intriguing prototype to see the light of day before the computer age was The Homeostat¹, created in 1948 by W. Ross Ashby, Research Director at the Barnwood House Hospital in Gloucester, and presented at the Ninth Macy Conference on Cybernetics in 1952. The Homeostat contained four identical control switch-gear kits that came from WW2 bombs (with inputs, feedback, and magnetically driven, water-filled potentiometers), each transformed into an electro-mechanical artificial neuron. 
The purpose of this prototype was extremely ambitious for the time: to serve as an example for all types of behaviour, by addressing all living functions.   During the presentation, The Homeostat was able to perform tasks that indicate some cognitive abilities, i.e., the ability to learn and adapt to the environment. But the approach was unusual, to say the least: while other automatons of the time exhibited a dynamic character by exploring the environment, the goal of the Homeostat was to reach the perfect state of balance (i.e. homeostasis). This approach was intended to support the author’s principle of ultra-stability and his law of requisite variety. Based on the concept of “negative feedback”, the Homeostat incrementally traversed the path between the current state and the final state of equilibrium, the steps representing the automaton’s concrete responses to changes in the environment (which affected the state of equilibrium). In detail, the “Law of Requisite Variety” (as the author called it) stated that in order to counter the variety of disturbances coming from the external environment, a system needs a “goal-seeking” strategy and a wide variety of possible responses to them. For the animal world, a final goal like “no goal” was equivalent to achieving immortality. The part of “cognitive intelligence” embedded in the activity of the automaton was precisely this “goal-seeking” approach; from a technical standpoint, “its principle is that it uses multiple coils in a milliammeter & uses the needle movement to dip in a trough carrying a current, so getting a potential which goes to the grid of a valve, the anode of which provides an output current”. But the audience was not very convinced by this principle, and, on the whole, the machine’s activity could be classified as that of a “goal-less goal-seeking machine”. It was Grey Walter who called The Homeostat a “Machina sopor”, describing it as a “fireside cat or dog which only stirs when disturbed, and then methodically finds a comfortable position and goes to sleep again”, in contrast with his own creation, “The Tortoise”, called “Machina speculatrix”, which embodies the idea that “a typical animal propensity is to explore the environment rather than to wait passively for something to happen.” It was later learned that Alan Turing advised Ashby to implement a simulation on the ACE² computer instead of building a special machine.   However, The Homeostat made a significant comeback in the 1980s, when a team of cognitive researchers from the University of Sussex led by Margaret Boden created several practical robots incorporating Ashby’s ultrastability mechanism. Boden was fascinated by the idea of modeling an autonomous, goal-oriented creature, arguing that the future of cognitive science is one based on The Homeostat.   Conclusions   The cybernetics of the ’60s are long gone; the current possibilities of computer simulation are infinitely more capable than anything that could be imagined or created by the geniuses of those times, and within reach of any school student. Suffice it to say that the level of tropism of the Tortoises is equivalent to that of a simple bacterium, and that the Beast equals the coordination abilities of a large nucleated cell like Paramecium, a bacterial hunter; or that what was then presented as a continuous adaptation of responses to external stimuli is far from what we understand and have today in terms of learning, supervised or unsupervised. 
Conclusions

The cybernetics of the '60s is long gone, and today's computer simulation capabilities – within reach of any school student – are infinitely more capable than anything the geniuses of those times could have imagined or built. Suffice it to say that the tortoises' level of tropism is equivalent to that of a simple bacterium, and the Beast matched the coordination abilities of a large nucleated cell such as Paramecium, a hunter of bacteria; and what was then presented as continuous adaptation of responses to external stimuli is far from what we understand and have today as learning, supervised or unsupervised.

But this evolution has not been driven solely by the appearance of computer technology and its fantastic development. As I mentioned in the introduction, the history of AI overlaps the history of cognitive science, so today's level of AI owes its achievements to multiple fields, including linguistics, psychology, philosophy, neuroscience, anthropology and, of course, mathematics. Simply put, even though most of them were hailed as successes at the time, these mobile automata of the pre-computer era were experiments that preceded theoretical research rather than accompanied it. The rudimentary means of construction, the lack of a common language in the field and the mismatch between models and implementation mechanisms often made the researchers of the time doubt each other's achievements³ – something unimaginable today, when everyone accepts that a self-driving car can anticipate complex accidents better than any of the drivers involved, or that a software robot can crush the world chess champion after training against no one but itself.

[separator]

Footnotes:
      1. In biology, homeostasis is the state of steady internal, physical, and chemical conditions maintained by living systems.
      2. The Automatic Computing Engine (ACE) was an early British electronic serial stored-program computer designed by Alan Turing.
      3. Regarding Ashby's Homeostat, the cyberneticist Julian Bigelow famously asked "whether this particular model has any relation to the nervous system? It may be a beautiful replica of something, but heaven only knows what."
        References:
      1. Steve Battle – “Ashby’s Mobile Homeostat”
      2. Margaret A. Boden – “Mind as Machine, A History of Cognitive Science”
      3. Margaret A. Boden – “Creativity & Art, Three Roads to Surprise”
      4. Stefano Franchi, Francesco Bianchini – “The Search for a Theory of Cognition: Early Mechanisms and New Ideas”
      5. http://cyberneticzoo.com/cyberneticanimals/1962-5-hopkins-beast-autonomous-robot-mod-ii-sonarvision-jhu-apl-american/
      6. http://www.rutherfordjournal.org/article020101.html

      Hawking Radiation: Passport to Escape From a Black Hole

      Stefan Iliescu - CDS
"My goal is simple. It is a complete understanding of the universe, why it is as it is, and why it exists at all", said Stephen Hawking, the famous theoretical physicist and cosmologist of the 20th century. The quote shows he was not one to settle for an easy challenge – a trait we hope lies at the core of every individual on our team. The task he set for himself was too large for one person to complete in a lifetime but, even so, the renowned British physicist accomplished substantial parts of it, leading the world to understand bits of the universe.

Stephen Hawking devoted all his resources to the study of black holes, both individually and in collaboration with other acclaimed researchers. His debut came in 1970 when, together with Sir Roger Penrose, he established the theoretical basis (the Penrose–Hawking singularity theorems) for the formation of black holes. Their prediction was borne out by recent observational experiments (2015-2019) at the Laser Interferometer Gravitational-Wave Observatory (LIGO), which detected gravitational waves emitted by colliding and merging black holes.

The same theoretical work established that a black hole expands – that is, the area of its event horizon increases – as it absorbs matter and energy from its vicinity. According to the second law of thermodynamics, the entropy of a black hole can only increase, and since entropy is a function of energy from which a temperature can be derived, scientists wanted to know how high the temperature of a black hole could go. Here comes perhaps his most significant contribution to the field: Hawking radiation, which may be responsible for keeping that temperature below a "certain limit". He uncovered that black holes, once thought to be static, unchanging, and defined only by their mass, charge, and spin, are actually ever-evolving engines that emit radiation and evaporate over time. Although this contribution has not yet been confirmed by any experiment – which is why Hawking did not win the Nobel Prize in his lifetime – it is seen by physicists in the field as the one widely recognised result supporting a unifying theory of quantum mechanics and gravity.

The next question for the scientific world was, logically, whether the radiation emitted by a black hole preserves the information that came in with the ingested matter, even in scrambled form. For many years Hawking believed it did not, and in 1997 he proposed, characteristically, a bet (the Thorne–Hawking–Preskill bet). In 2004, Hawking updated his own theory, stating that the black hole's event horizon is not really a "firewall" but rather an "apparent horizon" that enables energy and information to escape (from the standpoint of quantum theory), thereby declaring himself the loser of the bet. Moreover, he considered that he had thus corrected the biggest mistake of his scientific life. Neither Kip Thorne, who stood with him in the bet against John Preskill, nor half of the scientific world seems convinced by this update today, two years after Hawking's death. In the absence of solid experimental evidence (which would, among other things, have to support a quantum theory of gravity), the question of whether and how information leaks from a black hole through Hawking radiation remains open.
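For reference, and as textbook physics rather than a claim from the article: the temperature Hawking assigned to a black hole of mass M is

  T_H = \frac{\hbar c^3}{8 \pi G M k_B}

Since the temperature is inversely proportional to the mass, a black hole of one solar mass sits at roughly 6 × 10⁻⁸ K, far colder than the cosmic microwave background, while an evaporating black hole grows hotter, and radiates faster, as it shrinks.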

      Web Browser Security: From Netscape Navigator to Microsoft Edge

      Marius Marinescu - CTO
The Internet has become an intrinsic part of our everyday life, whether you study the threats it poses from a cybersecurity point of view or simply enjoy the many advantages it offers. Not so long ago, though, you had to be a visionary to imagine the power it would one day hold. Microsoft wanted to get into the browser game as soon as possible after Netscape Communications Corporation became the web browser industry leader, shortly after the release of its flagship browser, Netscape Navigator, in October 1994. [separator] Soon after, Microsoft licensed from Spyglass Inc. the Mosaic software that would later serve as the basis for the first version of Internet Explorer. Spyglass was an Internet software company, founded by students from the University of Illinois' National Center for Supercomputing Applications (NCSA), that developed one of the earliest browsers for navigating the web. It went public a full year after it began distributing its software – earning as much as $7 million from it – and that happened exactly 25 years ago today.

Microsoft developed the functionality of the Internet Explorer browser and embedded it in the core Windows operating system for the better part of the last 25 years. To this day they still provide the old Windows Internet Explorer 11 (the latest supported version) with security patches, but on newer operating systems they are replacing it with their own Microsoft Edge browser – which, in turn, they are replacing this year with a brand-new Microsoft Edge browser. Confusing, right? The main difference between the old Edge and the new Edge is that the latter is based on Google's Chromium web engine and has nothing to do with Microsoft's old code base.

But until the new Edge becomes the default choice on Microsoft operating systems, let's take a look at the current Edge browser and its relationship with the old Internet Explorer. The already "old" Microsoft Edge has more in common with Internet Explorer than you might think, especially when it comes to security flaws.

Given that the number of vulnerabilities found in Edge is far below that of Internet Explorer, it is reasonable to say Edge looks like the more secure browser. But is Edge really more secure than Internet Explorer? According to a Microsoft blog post from 2015, the software giant's Edge browser, an exclusive for Windows 10, is said to have been designed to "defend users from increasingly sophisticated and prevalent attacks."

In doing that, Edge scrapped older, insecure or flawed plugins and frameworks such as ActiveX and Browser Helper Objects. That alone cut off a number of the drive-by attack paths traditionally used by hackers. EdgeHTML, which powers Edge's rendering engine, is a fork of Trident, which still powers Internet Explorer.

However, it is not clear how much of Edge's code is still based on old Internet Explorer code. When asked, Microsoft did not give much away, saying that "Edge shares a universal code base across all form factors without the legacy add-on architecture of Internet Explorer. Designed from scratch, Microsoft does selectively share some code between Edge and Internet Explorer, where it makes sense to do so."

Many security researchers point out that overlapping libraries are where you get vulnerabilities that are not specific to either browser: when you are working on a project as large as a major web browser, it is highly unlikely that you would throw out all the project-specific code and the underlying APIs that support it. There are many APIs the browser uses that remain common to both. In fact, if you load Microsoft Edge and Internet Explorer on the same system, you will notice that both of them load a number of overlapping DLLs.
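That claim is easy to check for yourself. Here is a small sketch using the third-party psutil package; the process names are assumptions (check what Task Manager shows on your machine), it inspects only the first matching process, and it needs to run on Windows, ideally elevated, while both browsers are open:

  import psutil  # third-party: pip install psutil

  def dlls_of(process_name: str) -> set[str]:
      """Paths of all DLLs mapped into the first matching process."""
      for proc in psutil.process_iter(["name"]):
          try:
              if (proc.info["name"] or "").lower() == process_name:
                  return {m.path.lower() for m in proc.memory_maps()
                          if m.path.lower().endswith(".dll")}
          except psutil.Error:  # process exited or access denied: skip it
              continue
      return set()

  edge = dlls_of("microsoftedgecp.exe")  # legacy (EdgeHTML) content process
  ie = dlls_of("iexplore.exe")           # Internet Explorer 11

  shared = edge & ie
  print(f"Edge: {len(edge)} DLLs, IE: {len(ie)} DLLs, shared: {len(shared)}")
  for path in sorted(shared)[:20]:       # show a sample of the overlap
      print("  ", path)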
The big question is how much of that Internet Explorer code remains in Edge and, crucially, whether any of it is connected to the overlap of flaws found in both browsers – an overlap that puts Edge users at risk. The bottom line is that it is hard, if not impossible, to say whether one browser is more or less secure than another.

A "critical" rating – reserved for patches that fix the most severe vulnerabilities – is a moving scale that has to consider the details of the flaw as well as whether it is being exploited by attackers. With an unpredictable number of flaws found each month, each with its own severity rating, a browser's security standing can vary from month to month.

As history has shown, over the last five years the Edge browser had no fewer than 615 security vulnerabilities, while Internet Explorer almost doubled that figure – 1,030.

Microsoft's decision to adopt the Chromium open-source code to power its new Edge browser could mean a sooner-than-expected end of support for Internet Explorer, and the end of support for the code base it shared with the "old" Edge. And that is good news for the security of users who rely only on the browser provided by the operating system itself (7.76% on Microsoft Edge and 5.45% on Internet Explorer as of April 2020).

      Siri Shortcuts: Hey, Siri! Watch Out For Scareware!

      Cristian Gal - CSO
Some of us can't imagine life without Siri or another virtual assistant to help us, guide us and save us time throughout the day. Yet for all its advantages, the fact that it must always be listening in order to work properly raises serious privacy concerns. [separator] The first step toward today's speaking devices was an educational toy named the Speak & Spell, announced back in 1978 by Texas Instruments. It offered a number of word games, similar to hangman, and a spelling test. What was revolutionary about it was its speech-synthesis system, which electronically simulated the human voice.

The system was created as an offshoot of pioneering research into speech synthesis by a team whose lead engineer was Paul Breedlove. It was Breedlove who came up with the idea of a learning aid for spelling. His plan was to build upon bubble memory, another TI research effort, and as such it involved an impressive technical challenge: the device would have to speak each spelling word out loud.

The team analyzed several options for applying the new technology, and the winner was this $50 toy idea.

With Apple's introduction of iOS 12 for all its supported mobile devices came a powerful new utility for automating common tasks: Siri Shortcuts. The feature can be enabled by third-party developers in their apps or custom-built by users who download the Shortcuts app from the App Store. Once installed, it grants the power of scripting to perform complex tasks on users' personal devices.

Siri Shortcuts can be a useful tool for both users and app developers who wish to enhance the level of interaction users have with their apps. But this access can potentially also be abused by malicious third parties. According to X-Force IRIS research, there are security concerns that should be taken into consideration when using Siri Shortcuts.

For instance, Siri Shortcuts can be abused for scareware: a pseudo-ransom campaign that tries to trick potential victims into paying a criminal by convincing them their data is in the hands of a remote attacker. Using native shortcut functionality, a script could be created to deliver ransom demands to the device's owner in Siri's own voice. To lend more credibility to the scheme, attackers can automate data collection from the device and have it send back the user's current physical address, IP address, clipboard contents, stored pictures and videos, contact information and more. This data can then be displayed to the user to convince them that an attacker can make use of it unless they pay a ransom.

To move the user to the ransom payment stage, the shortcut could automatically access the Internet, browsing to a URL that contains payment information via cryptocurrency wallets, and demand that the user pay up or see their data deleted or exposed on the Internet.

Apple prefers quick access over device security for Siri, which is why the iOS default settings allow Siri to bypass the passcode lock. However, allowing Siri to bypass the passcode lock could also allow a thief or hacker to make phone calls, send texts, send e-mails and access other personal information without having to enter the security code first.

There is always a balance to be struck between security and usability. Users and software developers must decide how much security-related inconvenience they are willing to endure to keep their devices safe versus how quickly and easily they want to be able to use them. Whether you prefer instant access to Siri without having to enter a passcode is entirely up to you. In some cases – while you are in the car, for example – driving safely is more important than data security, so if you use your iPhone in hands-free mode, keep the default option that allows the Siri passcode bypass.

As Siri becomes more advanced and taps into more data sources, the security risk of the screen-lock bypass may also increase. For example, if developers tie Siri into their apps in the future, Siri could hand a hacker financial information if a Siri-enabled banking app is running, logged in with cached credentials, and the hacker asks Siri the right questions.

      SSL/TLS Vulnerabilities Leave Room for Security Breaches

      Marius Marinescu - CTO
Working as we do at the intersection of cybersecurity and complex IT architectures, we cannot appreciate enough the unprecedented security work done by Netscape Communications Corporation. Besides developing Navigator, the browser that would change the way the masses used the Internet, it also pioneered the Secure Sockets Layer (SSL) protocol, which enabled privacy and consumer protection. [separator] The underlying technology used for its browsers of that time, Navigator and Communicator, still powers today's security standard, Transport Layer Security (TLS).

Back in 1996, The Washington Post published an article speculating that Netscape might one day become a challenge for Microsoft, given how fast the software startup was growing. It seems they were right: years later, the source code of Netscape Navigator 4.0 would lead to the creation of Mozilla and its Firefox browser, one of the best alternatives to Google Chrome, which in 2016 managed to dethrone Internet Explorer, the browser created by Microsoft. Although all modern browsers use the SSL and TLS protocols pioneered by Netscape, these protocols have had their fair share of vulnerabilities over the years. So remember: using the latest browser, without any other security solution, does not mean you are protected against the latest attacks. Here are some of the most prominent attacks involving breaches of the SSL/TLS protocols that have surfaced in recent years.

POODLE

The Padding Oracle On Downgraded Legacy Encryption (POODLE) attack was published in October 2014 and exploits two things: the fact that some servers and clients still support SSL 3.0 for interoperability and compatibility with legacy systems, and a vulnerability within SSL 3.0 itself related to block padding. The client initiates the handshake and sends a list of supported SSL/TLS versions. An attacker intercepts the traffic, performing a man-in-the-middle (MITM) attack, and impersonates the server until the client agrees to downgrade the connection to SSL 3.0.

The SSL 3.0 vulnerability lies in its Cipher Block Chaining (CBC) mode. Block ciphers require blocks of fixed length; if the data in the last block is not a multiple of the block size, the extra space is filled with padding. The server ignores the content of the padding: it only checks that the padding length is correct and verifies the Message Authentication Code (MAC) of the plaintext. That means the server cannot tell whether anyone has modified the padding content. An attacker can decipher an encrypted block by modifying the padding bytes and watching the server's response. It takes a maximum of 256 SSL 3.0 requests to decrypt a single byte: roughly once every 256 requests, the server will accept the modified value. The attacker does not need to know the encryption method or key. Using automated tools, an attacker can retrieve the plaintext character by character. This could easily be a password, a cookie, a session token or other sensitive data.
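To make those mechanics concrete, here is a self-contained toy sketch of the padding-oracle loop in Python. Everything in it is an assumption made for illustration: the "block cipher" is a meaningless XOR-and-reverse stand-in, the block size is 8, and the oracle is a local function rather than a remote SSL 3.0 server. Only the CBC arithmetic and the SSL 3.0 habit of checking nothing but the final pad-length byte mirror the real attack.

  import os

  BLOCK = 8                    # toy block size; SSL 3.0 ciphers used 8 or 16
  KEY = os.urandom(BLOCK)

  def E(block):                # toy "block cipher": XOR with key, then reverse
      return bytes(b ^ k for b, k in zip(block, KEY))[::-1]

  def D(block):                # inverse of E
      return bytes(b ^ k for b, k in zip(block[::-1], KEY))

  def xor(a, b):
      return bytes(x ^ y for x, y in zip(a, b))

  def cbc_encrypt(plaintext, iv):
      # SSL 3.0-style padding: only the final byte (the pad length) matters;
      # the padding bytes themselves are arbitrary and never checked.
      pad = BLOCK - (len(plaintext) % BLOCK)
      data = plaintext + bytes(pad - 1) + bytes([pad - 1])
      out, prev = b"", iv
      for i in range(0, len(data), BLOCK):
          prev = E(xor(data[i:i + BLOCK], prev))
          out += prev
      return out

  def padding_ok(ct):
      # The oracle: accept iff the last decrypted byte looks like a full
      # block of padding. A real server leaks this as a different error.
      prev, last = ct[-2 * BLOCK:-BLOCK], ct[-BLOCK:]
      return xor(D(last), prev)[-1] == BLOCK - 1

  secret = b"top-secret-data!"
  t = 2                                 # which ciphertext block to attack
  while True:
      iv = os.urandom(BLOCK)            # every retry re-encrypts afresh,
      ct = cbc_encrypt(secret, iv)      # like a browser resending a request
      blocks = [iv] + [ct[i:i + BLOCK] for i in range(0, len(ct), BLOCK)]
      forged = ct[:-BLOCK] + blocks[t]  # copy the target block to the end
      if padding_ok(forged):            # succeeds about once in 256 tries
          # CBC algebra: P_t[-1] = (BLOCK-1) XOR C_last-1[-1] XOR C_t-1[-1]
          print(bytes([(BLOCK - 1) ^ blocks[-2][-1] ^ blocks[t - 1][-1]]))
          break

Shifting the secret so each unknown byte in turn lands in the last position of a block (which is what real POODLE does via attacker-controlled request padding) recovers the plaintext at the cost quoted above: roughly 256 requests per byte.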
BEAST

The Browser Exploit Against SSL/TLS (BEAST) attack was disclosed in September 2011. It applies to SSL 3.0 and TLS 1.0, so it affects browsers that support TLS 1.0 or earlier protocols. An attacker can decrypt data exchanged between two parties by taking advantage of a vulnerability in the implementation of the Cipher Block Chaining (CBC) mode in TLS 1.0. This is a client-side attack that uses the man-in-the-middle technique: the attacker uses MITM to inject packets into the TLS stream, which allows them to guess the Initialization Vector (IV) used with the injected message and then simply compare the results to those of the block they want to decrypt.

CRIME

The Compression Ratio Info-leak Made Easy (CRIME) vulnerability affects TLS compression. The compression method is included in the Client Hello message and is optional: you can establish a connection without compression. Compression was introduced to SSL/TLS to reduce bandwidth, and DEFLATE is the most common compression algorithm used. One of the main techniques used by compression algorithms is to replace repeated byte sequences with a pointer to the first instance of that sequence: the longer the repeated sequences, the higher the compression ratio. An attacker who can inject chosen plaintext into the victim's requests simply injects different characters and monitors the size of the compressed response. If the response is shorter than the initial one, the injected character is contained in the secret cookie value and was compressed away; if not, the response will be longer. Using this method, an attacker can reconstruct the cookie value from the feedback they get from the server.
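Unlike most of the attacks here, the core of CRIME can be demonstrated end to end on one machine, because the leak is a property of DEFLATE itself. The sketch below is a toy with a made-up secret and made-up request structure; in a real attack the "response size" would be the length of intercepted TLS records, and many noisy measurements would be needed, since DEFLATE output is byte-aligned and single-byte gains can occasionally tie.

  import zlib

  SECRET = "sessionid=7f3a"      # cookie value the attacker wants
  KNOWN = "sessionid="           # structure the attacker already knows

  def response_size(injected: str) -> int:
      # Stand-in for a compressed request that mixes the secret cookie
      # with attacker-controlled plaintext (e.g. a request path).
      body = f"GET /{injected} HTTP/1.1\r\nCookie: {SECRET}\r\n"
      return len(zlib.compress(body.encode(), 9))

  recovered = KNOWN
  for _ in range(4):             # recover the 4 unknown characters
      # The correct guess extends the repeated substring by one byte,
      # so it (usually) yields the shortest compressed output.
      recovered += min("0123456789abcdef",
                       key=lambda c: response_size(recovered + c))

  print("recovered:", recovered)  # expected: sessionid=7f3a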
BREACH

The Browser Reconnaissance and Exfiltration via Adaptive Compression of Hypertext (BREACH) vulnerability is very similar to CRIME, but BREACH targets HTTP compression rather than TLS compression, so the attack is possible even if TLS compression is turned off. An attacker forces the victim's browser to connect to a TLS-enabled third-party website and monitors the traffic between the victim and the server using a man-in-the-middle attack.

Heartbleed

Heartbleed was a critical vulnerability found in the heartbeat extension of the popular OpenSSL library. This extension is used to keep a connection alive as long as both parties are still there. The client sends a heartbeat message to the server with a payload that contains some data and the size of that data (plus padding); the server must respond with an equivalent heartbeat containing the data and data size the client sent. The Heartbleed vulnerability stemmed from the fact that if the client sent a false data length, the server would respond with the data received from the client followed by whatever random data from its memory was needed to meet the length specified by the sender. Leaking unencrypted data from server memory can be disastrous: there have been proof-of-concept exploits of this vulnerability in which the attacker obtained the private key of the server, meaning the attacker could decrypt all traffic to the server. And server memory may contain anything: credentials, sensitive documents, credit card numbers, emails, etc.
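The malformed heartbeat at the core of the bug is simple enough to show directly. The sketch below only builds the message bytes (the values are illustrative); it does not talk to any server:

  import struct

  # TLS heartbeat message: 1 type byte (0x01 = request), a 2-byte
  # big-endian payload length, then the payload itself (plus padding).
  payload = b"ping"
  claimed = 0x4000                                    # claim 16384 bytes...
  msg = struct.pack(">BH", 0x01, claimed) + payload   # ...but send only 4

  # A vulnerable server echoed back 'claimed' bytes starting at the
  # received payload, i.e. ~16 KB of whatever sat next to it in memory.
  print(msg.hex())                                    # -> "01400070696e67"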
Bleichenbacher

This relatively new cryptographic attack can break encrypted TLS traffic, allowing attackers to intercept and steal data previously considered safe and secure. It is a downgrade attack that works even against the latest version of the TLS protocol, TLS 1.3, released in 2018 and considered secure. The attack is a variation of the original Bleichenbacher oracle attack and represents yet another way to break RSA PKCS#1 v1.5, the most common RSA configuration used to encrypt TLS connections today. Besides TLS, it also works against Google's new QUIC encryption protocol. The attack leverages a side-channel leak, via the cache-access timings of TLS implementations, to break their RSA key exchanges. Even the newer TLS 1.3 protocol, where RSA usage has been kept to a minimum, can in some scenarios be downgraded to TLS 1.2, where the new Bleichenbacher variation works.

In most cases, the best way to protect yourself against SSL/TLS-related attacks is to disable older protocol versions. This is even a standard requirement for some industries: for example, June 30, 2018 was the deadline for disabling support for SSL and early versions of TLS (up to and including TLS 1.0) under the PCI Data Security Standard. The Internet Engineering Task Force (IETF) has released advisories concerning the security of SSL, and formal deprecation of TLS 1.0 and 1.1 by the IETF is expected soon.


      Anonymous' Hacking Tactics – Revealed in the Attack on the Vatican

      Marius Marinescu - CTO
The Los Angeles Times reported that Father Leonard Boyle was working to put the Vatican Library on the World Wide Web – "bringing the computer to the Middle Ages and the Vatican library to the world." Boyle computerized the library's catalog and placed manuscripts and paintings on the website, which was funded in part by IBM. Today, thousands of manuscripts and incunabula have been digitized and are publicly available on the Vatican Library website, alongside a number of other offerings, including images and descriptions of the Vatican's extensive numismatic collection, which dates back to Roman times. [separator] The Vatican's digital presence soon caught hackers' attention, and in August 2011 the elusive hacker movement known as Anonymous launched a cyber-attack against it. Although the Vatican has seen its fair share of digital attacks over the years, what makes this one special is that it was the first Anonymous attack to be identified and tracked from start to finish by security researchers, providing a rare glimpse into the recruiting, reconnaissance and warfare tactics used by the shadowy hacking collective.

The campaign against the Vatican, which did not receive wide attention at the time, involved hundreds of people, some with hacking skills and some without. A core group of participants openly drummed up support for the attack using YouTube, Twitter and Facebook. Others searched for vulnerabilities on a Vatican website and, when that failed, enlisted amateur recruits to flood the site with traffic, hoping it would crash.

Anonymous, which first gained widespread notice with an attack on the Church of Scientology in 2008, has since carried out hundreds of increasingly bold strikes, taking aim at perceived enemies including law enforcement agencies, Internet security companies and opponents of the whistle-blower site WikiLeaks.

The group's attack on the Vatican was confirmed by the hackers, and it may be the first end-to-end record of a full Anonymous operation. The attack was called "Operation Pharisee", in reference to the sect that Jesus called hypocrites. It was initially organized by hackers in South America and Mexico before spreading to other countries, and it was timed to coincide with Pope Benedict XVI's visit to Madrid in August 2011 for World Youth Day, an annual international event that regularly attracts more than a million young Catholics.

Hackers initially tried to take down a website set up by the church to promote the event, handle registrations and sell merchandise. Their goal – according to YouTube messages delivered by an Anonymous figure in a Guy Fawkes mask – was to disrupt the event and draw attention.

The hackers spent weeks spreading their message through their own website and social media channels like Twitter and Flickr, while their Facebook page encouraged volunteers to download free attack software to join in. It took the hackers 18 days to recruit enough people. Then the reconnaissance began: a core group of roughly a dozen skilled hackers spent three days poking around the church's World Youth Day site looking for common security holes that could let them inside. Probing for such loopholes used to be tedious and slow, but the advent of automated tools made it possible for hackers to do this around the clock. In this case, the scanning software failed to turn up any gaps, so the hackers turned to a brute-force approach: a DDoS attack.
Even unskilled supporters could take part from their computers or smartphones. Over the course of the campaign's final two days, Anonymous enlisted as many as a thousand people to download attack software, or directed them to custom-built websites that let them participate using their cellphones: visiting a particular web address caused the phones to instantly start flooding the target website with hundreds of data requests each second, no special software required.

On the first day, the denial-of-service attack drove 28 times the normal traffic to the church site, rising to 34 times the next day. Hackers involved in the attack, who did not identify themselves, said through a Twitter account associated with the campaign that the two-day effort succeeded in slowing the site's performance and making the page unavailable "in several countries". Anonymous then moved on to other targets, including an unofficial site about the pope, which the hackers were briefly able to deface.

In the end, the Vatican's defenses held up because, unlike other hacker targets, it had invested in the infrastructure needed to repel both break-ins and full-scale assaults, using some of the best cybersecurity technology available at the time. Researchers who have followed Anonymous say that, despite its lack of success in this and other campaigns, its attacks show the movement is still evolving and, if anything, emboldened.

      meta data