How many spam emails do you receive in a day? Do they still bother you, or have you accepted them as part of normal digital life?
Imagine the culture shock of reading the very first spam message on the screen of your computer.
In 1978, nobody knew what digital advertising meant. Gary Thuerk from Chicago, who considers himself the father of e-commerce, used the limited Internet of the time, ARPANET: he manually typed in hundreds of email addresses (about 15% of the people who had a connection then) and sent a promotional email about the DECSYSTEM-20 computer made by his employer, Digital Equipment Corporation. It said: "WE INVITE YOU TO COME SEE THE 2020 AND HEAR ABOUT THE DECSYSTEM-20 FAMILY AT THE TWO PRODUCT PRESENTATIONS WE WILL BE GIVING IN CALIFORNIA THIS MONTH…"
This unsolicited email drew a lot of negative feedback, but also significant sales: about $14 million worth. In the end, the author of the spam promised his superiors never to do it again. Too bad others liked the idea and are still using it today.
Nowadays many businesses don’t consider the dangers of spam email and see it as merely a nuisance. Whilst unsolicited emails are certainly frustrating, in many scenarios they can also be dangerous and contain malicious content.
Spam email is often easy to spot. It tends to come in the form of advertising, get-rich-quick schemes, or charity appeals. Whatever the content, though, the key to spotting spam email is its unsolicited nature.
Working on the assumption that you're only using your email address for legitimate purposes, you should know what sort of emails you'll be receiving in your inbox. For example, you can expect to receive promotional emails from an online bookmaker if you've recently registered for its services. However, it would be odd to receive such an email if you've never gambled online, and in that situation you should treat it as spam.
The issue with being able to spot spam easily is that it leads to complacency. It has become accepted that even with the best spam filters in place, you will receive some unsolicited emails. This has resulted in more sophisticated spam email, some of which can go unnoticed if you’re not careful.
Spam emails often have malicious intent and therefore you should be aware of the potential risks. Some of the risks to be aware of are:
The dangers of spam email
Spyware. Spyware is software that allows a third party, unknown to you, to gain information about the activity on your computer. This software might be tracking your emails, usernames, and passwords, and it is an easy way for cybercriminals to gain information about your online bank accounts. Clicking a single malicious link is all it takes to install spyware. In all likelihood, you'll be none the wiser that this has happened.
Phishing. Phishing is a more direct attempt to gain sensitive information such as username details and passwords. Instead of taking this information from you, scammers will ask you for it by posing as a legitimate source. However, as mentioned before, the key to spotting a phishing attempt, should the email look convincing, is your expectations. Are you expecting an email of this sort? Would you expect this company to ask you for this information? Even if the answer to these questions is yes, should you have any doubt at all, find contact details for that company via Google (not the email!) and ask them directly whether the email is legitimate or not. They should be able to tell you definitively. If they can’t – delete the email.
Ransomware. Ransomware is a means of locking your computer or your IT network and essentially holding it to ransom. Simply clicking a link or downloading a file can install ransomware. Unlike spyware, you’ll very quickly be aware that you have ransomware on your system. Once it has locked all of your files, you won’t be able to use the computer. A ransom note will then appear, either on your screen or in each of your files. It will contain an explanation and request a ransom, usually in Bitcoins. In the event that you are a victim of a ransomware attack, do NOT pay the ransom. There is no guarantee your files will be safe. You’re dealing with criminals after all. Involve experts as soon as possible and let them help you.
Avoiding the dangers of spam email
Whilst spam filters and email protection software will help remove the majority of spam from your inbox, the best way to stop spam email is to behave sensibly. Be aware of how you're browsing and what you're doing and you should minimize the amount of spam email you receive. Here is a list of practical steps you can take to stop spam email arriving in your inbox.
• Don’t click links or load images from spam email
If you receive an email from a source you don't recognize, do not click any links contained within. Malicious links are one of the biggest dangers of spam email, but even if the link isn't malicious, it could show those responsible that your email address is active. If scammers can identify your email address as active, the amount of spam email you receive will probably increase. A number of email clients will block images if they don't trust the original source, as images can also tip off scammers. The email might not even contain a visible image, just a single-pixel tracking bug, and this is all it takes to show that your email address is active.
• Don’t sign up for newsletters or promotions from unknown companies
A number of companies will share or sell their marketing lists to other businesses, so be careful what you sign up for and who you trust with your email address. If you're likely to sign up to multiple websites or newsletters, consider using a different email address just for this purpose. Whilst most legitimate sources won't sell your data, be wary of the terms and conditions when registering. There are often tick boxes that allow you to opt out of any list sharing. Make sure you're aware of companies that will pass your email address on. Whilst you may trust the company you're signing up to, you can't always trust who they will pass your email address on to.
• Don’t reveal your email address
Scammers routinely use software to scrape email addresses from the internet for malicious purposes. Therefore, it is important to be careful when you reveal your email address. Keep your email address hidden, unless you absolutely have to reveal it, whether it be on your own website, social media or forums. You can always send your email address to legitimate individuals via other means, should they request it. The more cautious you are with sharing your email address, the easier it is to avoid the dangers of spam email.
• Train your spam filter
The vast majority of spam filters and email clients will have a means of reporting spam email. Make sure you use this feature. When you report spam email your software improves by learning what to look out for. If you realize that you’re frequently receiving similar spam emails, it is likely that you’re on a list and by reporting these emails, your spam filter will become more efficient. By training your spam filter, you’ll see less of the same spam emails, reducing the overall number in the process. This is one of the quickest ways to stop spam email.
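As a rough illustration of the learning involved, here is a minimal sketch of a word-frequency (naive Bayes) spam classifier in Python, assuming scikit-learn is available; the example messages and labels are invented, and real filters are far more sophisticated.

# Minimal sketch of how a spam filter can "learn" from reported messages.
# Assumes scikit-learn is installed; the example messages are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Messages the user has already labelled (1 = reported as spam, 0 = legitimate).
messages = [
    "Congratulations, you won a free prize, click here",
    "Meeting moved to 3pm, see agenda attached",
    "Get rich quick with this one simple trick",
    "Your invoice for March is attached",
]
labels = [1, 0, 1, 0]

# Bag-of-words features + naive Bayes: every reported email refines the word statistics.
filter_model = make_pipeline(CountVectorizer(), MultinomialNB())
filter_model.fit(messages, labels)

# Score a new, unseen message.
print(filter_model.predict(["Claim your free prize now"]))        # -> [1] (spam)
print(filter_model.predict_proba(["Claim your free prize now"]))  # spam probability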
• Change your email address
The most drastic step you can take to stop spam email is to change your email address. Obviously, this method comes with a lot of downsides. Most notably, you will have to inform all of your contacts about the change, and this will usually mean you need to work with two inboxes for a set period of time. This method can be frustrating. However, it is effective. It will allow you to start from scratch, and as long as you change your behavior and implement good software to help, you shouldn't have to change your email address again. Only consider this method in worst-case scenarios.
Are you ready for a little bit of techno-nostalgia? Do you remember when you held your first mobile in your hand and dialed the first number? Who did you call? Do you still have that person's number?
If your memory is blurry, stick to the facts from the history of telecommunications: on this day in 1973, a Motorola engineer named Martin Cooper made the first call ever on a handheld cellular phone that looked like a shoe, while taking a walk on a New York street. Well, now imagine you work for the company that created this tech miracle. No wonder the first person Martin called on this mobile was an engineer from the competition, Joel Engel from Bell Laboratories / AT&T. And, like a typical rival, Joel still doesn't remember this call.
And if you always complain about your smartphone needing constant charging, remember that the first Motorola cell phone required 10 hours of recharging for a 35-minute conversation.
Today everybody uses at least one mobile phone and when it comes to cybercrime, your mobile phone isn't exempt. When any device is connected to the internet, as most phones are, the users of those devices face many of the same threats as desktop computer users.
Many of the cyber threats that face mobile devices are simply the mobile version of threats that face desktop computers. Still, it's helpful to review these threats and some of the ways the attacks are customized for mobile devices.
Mobile Ransomware. Ransomware is a type of malware that locks up your device. Once you've been infected, you lose your ability to access all of the data on your phone until you pay a ransom to the criminal. Depending on the type of ransomware, you could lose your call history, contacts, photos, messages, and many basic phone functions. Even if you pay the ransom, there's no guarantee that your device will be fixed, so it's best not to buy any software that pops up during a ransomware attack.
Scareware is similar to ransomware. The difference with scareware is that you don't lose your access to data. Instead, a pop-up or similar message attempts to scare you into believing you've been infected by a virus. The scareware will advertise software to combat the viruses, but that software itself is the virus. The key is to do nothing—as long as you don't download the scareware or give out any personal information, you won't get a virus.
Spyware and Drive-By Downloads. Not all malware is as obvious as ransomware. Some malware is designed to go unnoticed, and these viruses are known as spyware. Spyware can be installed on your device without your knowledge by hackers. It can also be accidentally installed while browsing the internet. This is known as a "drive-by download." You think you're simply visiting a website, but the site clandestinely installs spyware on your device. Once it's on your device, spyware can track your device use and extract personal data like locations and passwords. Whatever the spyware collects is sent back to the cybercriminal who created it.
Malicious Apps or "Riskware". There's an app for everything, but not all of those apps are convenient tools or benign entertainment. That time-killing game you downloaded might be fun, but it might also be collecting intimate details about you and sending them to advertisers or bad actors. These apps ask for permissions and data access under the guise of improving the app experience, but what they're actually doing is mining data to sell. Falling victim to these scams is known as "data leakage." At best, this scam results in increasingly invasive ads. At worst, sensitive data could end up in the hands of criminals who use it to steal your identity.
Phishing and Smishing Scams. Phishing is a common cyber scam that costs victims millions of dollars every year. Phishing can be broad and crude or targeted and specific, but in general, the scam starts as an email that appears to be from a business or person you know. It contains a link and asks you to input some information, such as a confirmation of account information. However, the email isn't actually from the entity you know, and any information you enter goes straight to the scammer.
This may sound like an easy scam to avoid, but phishing emails can be advanced. It's easy to mistake them for the real thing. In some ways, mobile devices heighten this threat. Users may be more likely to quickly open an email if they get an alert on their phone, as opposed to desktop users who purposefully sift through their inbox. "Smishing" or "SMiShing" is a new take on the phishing scam. The scam plays out the exact same way, but instead of using email, the scammers use text messages (the "SMS" in "SMiShing").
Free Wi-Fi Can Pose Threats. It may seem like a nice perk for a coffee shop or transit terminal to offer free wireless internet, and it is, but it's also a potential threat. Free Wi-Fi is often unsecured, which allows hackers to place themselves between your device and the Wi-Fi hotspot. Anything you do online while using the free connection could be intercepted by bad actors.
Luckily, you aren't powerless when it comes to cyber threats. In many cases, due diligence will go a long way in stopping the attack before it begins. In order to protect yourself from these mobile device attacks, keep the following steps and tips in mind.
• Consider Security Software for Your Device
Just like how you can download antivirus software for your computer, you can do the same thing for your mobile device. Consider using security software that will protect your phone from malware and riskware. Some security software also comes with password managers, which can help keep your login information safe.
• Create Better Passwords
If you still use passwords such as your pet’s name or address, you have to start getting serious about your security. Make passwords at least eight characters long (the longer, the better), and combine letters, numbers, and symbols. Do not include any information that might be guessed, such as the name of your child or dog. Long chains of random characters are best. If you have trouble remembering passwords, don't make the passwords simpler. Instead, consider using a password manager.
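If you want to generate such passwords yourself, here is a minimal sketch using Python's standard secrets module; the length and character set are just reasonable defaults, not a universal recommendation.

# A short sketch of generating a long random password, using only the standard library.
import secrets
import string

def make_password(length: int = 16) -> str:
    """Return a random password mixing letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password(20))  # e.g. 'k%2Fz!q9...' - store it in a password manager, not in your head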
• Keep Software Updated
Update your software on your device when prompted. These updates often include fixes to security vulnerabilities. They're usually quick, too, and failing to run them can create an easy opening for hackers.
• Check Bank Statements and Mobile Charges
The vast majority of identity theft cases and cybercrimes involve financial fraud. That's why you need to regularly check your mobile charges, bank statements, and any other financial accounts you have.
Scrutinizing financial records goes beyond mobile device security, and it should be a routine part of your security habits.
• Beware of Unfamiliar Apps
Before downloading a new game to kill time, do a little research on the app and the app's developer. Carelessly downloading apps invites spyware, ransomware, and data leakage. By carefully researching what you're downloading before you download it, you can prevent many of these attacks. Simply plugging the developer's name into a search engine could help raise red flags on suspicious software.
• Turn Off Unnecessary Features
Turn off any features you don't need at that moment. For instance, if you are not using GPS, Bluetooth, or Wi-Fi, turn them off. This is especially important in public spaces, such as in places with free Wi-Fi. If you do decide to use free Wi-Fi, avoid accessing sensitive information through the network. For example, don't do your banking or pay bills on a public, unsecured network.
TX-2 was an experimental digital computer created at MIT in 1958. It was one of a few first-generation large electronic digital computers in which transistors largely supplanted vacuum tubes. It was designed to facilitate and enhance real-time human-computer interaction. When first implemented, TX-2 had inherited the ferrite-core memory from its predecessor TX-0 (there was no TX-1). It also had two other random-access memory modules that could work concurrently to provide increased computing speeds. TX-2 was an experimental tool to test many techniques and devices, among which were a magnetic-core memory unit and the first thin-magnetic-film memory unit.
William Kantrowitz, a systems programmer on the TX-2 computer, provides the following list of some of the highlights of the TX-2 computer:
• Much of computer graphics [Sketchpad] began on TX-2;
• Early pioneering speech research was carried out on TX-2;
• TX-2 had one of the first, if not the first, two-level memory paging systems;
• Pioneering work in large memories was done with TX-2;
• The Advanced Research Projects Agency network (ARPANet) derived from experiments on TX-2 with a prototype net between TX-2 and a computer at System Development Corporation in California;
• The feasibility of using the ARPANet for packet speech transmission was first demonstrated on TX-2.
The Sketchpad system was the first graphical computer interface. It made it possible for a man and a computer to converse rapidly through the medium of line drawings. Heretofore, most interaction between man and computers had been slowed down by the need to reduce all communication to written statements that could be typed; in the past, we had been writing letters to, rather than conferring with, our computers. For many types of communication, such as describing the shape of a mechanical part or the connections of an electrical circuit, typed statements can prove cumbersome. The Sketchpad system, by eliminating typed statements in favor of line drawings, opened up a new area of man-machine communication. It allowed users to visualize and control program functions and became a foundation for computer graphics, computer operating system interfaces and software applications that are used in many facets of modern technology. The currently used graphical user interface, or GUI, was based on Sketchpad.
In 1961, Massachusetts Institute of Technology (MIT) graduate student Ivan Sutherland developed a primitive application, Sketchpad, that would run on the TX-2 at MIT's Lincoln Laboratory. The TX-2 had twice the memory capacity of the largest commercial machines and impressive programmable capabilities. The computer possessed 320 KB (kilobytes) of memory and powered a 23-cm (9-inch) cathode-ray tube (CRT) display. Sketchpad displayed graphics on the CRT display, and a light pen was used to manipulate the line objects, much like a modern computer mouse. Various computer switches controlled aspects of the graphics such as size and ratio. In 1963, Sutherland published his doctoral thesis, "Sketchpad: A Man-Machine Graphical Communications System," for which he received the Turing Award in 1988 and the Kyoto Prize in 2012.
Sketchpad's process for drawing lines and shapes was quite complicated. The system's functionality was heavily electrical and used electronic pulses shared between the photoelectric cell of the light pen and the electron gun of the CRT.
The timing of the pulse displayed a cursor to represent the light pen's position on the screen and thus converted the computer screen into a sketchpad upon which objects could be drawn. Of the 36 bits available to store each display spot in the display file, 20 gave the coordinates of that spot for the display system and the remaining 16 gave the address of the n-component element responsible for adding that spot to the display.
The clever way the program organized its geometric data pioneered the use of "masters" ("objects") and "occurrences" ("instances") in computing and pointed forward to object-oriented programming. The main idea was to have master drawings which one could instantiate into many duplicates. If the user changed the master drawing, all the instances would change as well.
Geometric constraints were another major invention in Sketchpad, letting the user easily constrain geometric properties in the drawing - for instance, the length of a line or the angle between two lines could be fixed.
How objects in Sketchpad could be visualized and modeled on a screen became the foundation for modern graphical computing used in advertising, business, entertainment, architecture, and Web design. In 1964, Sutherland collaborated with David Evans at the University of Utah in Salt Lake City to initiate one of the first educational computer-graphics labs. Sketchpad also led to the advanced development of other imaging software, such as computer-aided design programs used by engineers.
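To make the master/instance idea concrete, here is a loose Python sketch (not Sketchpad's actual data structures, which lived on the TX-2): instances hold only a reference to their master, so editing the master updates every occurrence.

# Illustrative sketch (not Sketchpad's code) of the "master drawing" / "instance" idea:
# instances only reference the master, so editing the master changes every occurrence.
from dataclasses import dataclass, field

@dataclass
class Master:
    name: str
    lines: list = field(default_factory=list)   # line segments as ((x1, y1), (x2, y2))

@dataclass
class Instance:
    master: Master
    offset: tuple = (0, 0)                       # where this occurrence is placed

    def render(self):
        dx, dy = self.offset
        return [((x1 + dx, y1 + dy), (x2 + dx, y2 + dy))
                for (x1, y1), (x2, y2) in self.master.lines]

bolt = Master("bolt", [((0, 0), (1, 0)), ((1, 0), (1, 2))])
copies = [Instance(bolt, (i * 5, 0)) for i in range(3)]

bolt.lines.append(((1, 2), (0, 2)))   # edit the master once...
print(copies[0].render())             # ...and every instance reflects the change
print(copies[2].render())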
BCPL ("Basic Combined Programming Language") is a procedural, imperative, and structured programming language. Originally intended for writing compilers for other languages, BCPL is no longer in common use. However, its influence is still felt because a stripped down and syntactically changed version of BCPL, called B, was the language on which the C programming language was based. BCPL introduced several features of many modern programming languages, including using curly braces to delimit code blocks.
BCPL (Basic Combined Programming Language) was designed by Martin Richards of the University of Cambridge in 1966 and it was a response to difficulties with its predecessor CPL, created during the early 1960s. The language was first described in a paper presented to the 1969 Spring Joint Computer Conference.
BCPL was designed so that small and simple compilers could be written for it; reputedly some compilers could be run in 16 kilobytes. Further, the original compiler, itself written in BCPL, was easily portable. BCPL was thus a popular choice for bootstrapping a system. A major reason for the compiler's portability lay in its structure. It was split into two parts: the front end parsed the source and generated O-code, an intermediate language. The back end took the O-code and translated it into the machine code for the target machine. Only 1⁄5 of the compiler's code needed to be rewritten to support a new machine, a task that usually took between 2 and 5 man-months. Soon afterwards this structure became fairly common practice, but the Richards BCPL compiler was the first to define a virtual machine for this purpose.
The language is unusual in having only one data type: a word, a fixed number of bits, usually chosen to align with the architecture's machine word and of adequate capacity to represent any valid storage address. For many machines of the time, this data type was a 16-bit word. This choice later proved to be a significant problem when BCPL was used on machines in which the smallest addressable item was not a word but a byte or on machines with larger word sizes such as 32-bit or 64-bit.
The interpretation of any value was determined by the operators used to process the values. (For example, “+” added two values together, treating them as integers; “!” indirected through a value, effectively treating it as a pointer.) In order for this to work, the implementation provided no type checking. Hungarian notation was developed to help programmers avoid inadvertent type errors.
The mismatch between BCPL's word orientation and byte-oriented hardware was addressed in several ways. One was by providing standard library routines for packing and unpacking words into byte strings. Later, two language features were added: the bit-field selection operator and the infix byte indirection operator (denoted by “%”).
BCPL handles bindings spanning separate compilation units in a unique way. There are no user-declarable global variables; instead there is a global vector, similar to "blank common" in Fortran.
All data shared between different compilation units comprises scalars and pointers to vectors stored in a pre-arranged place in the global vector. Thus the header files (files included during compilation using the "GET" directive) become the primary means of synchronizing global data between compilation units, containing "GLOBAL" directives that present lists of symbolic names, each paired with a number that associates the name with the corresponding numerically addressed word in the global vector. As well as variables, the global vector contains bindings for external procedures. This makes dynamic loading of compilation units very simple to achieve. Instead of relying on the link loader of the underlying implementation, effectively BCPL gives the programmer control of the linking process.
The global vector also made it very simple to replace or augment standard library routines. A program could save the pointer from the global vector to the original routine and replace it with a pointer to an alternative version. The alternative might call the original as part of its processing. This could be used as a quick ad hoc debugging aid.
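A loose Python analogy of this trick (BCPL itself is not shown here) might look as follows: the global vector is modeled as a numerically indexed table of bindings, and one slot is swapped for a tracing wrapper that still calls the original routine.

# Loose analogy, not BCPL: model the global vector as a numerically indexed table of
# bindings, then replace one entry with a wrapper that calls the original.
GLOBAL = {}           # the "global vector": slot number -> value or routine
WRITES = 76           # pretend slot 76 holds the library's output routine

def writes(s):        # the "library" routine
    print(s)

GLOBAL[WRITES] = writes

# Ad hoc debugging: save the original pointer and install a tracing wrapper.
original = GLOBAL[WRITES]
def traced_writes(s):
    print(f"[trace] writes({s!r})")
    original(s)

GLOBAL[WRITES] = traced_writes

GLOBAL[WRITES]("Hello, world")   # every caller that goes through the vector is now traced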
BCPL was the first brace programming language and the braces survived the syntactical changes and have become a common means of denoting program source code statements. In practice, on limited keyboards of the day, source programs often used the sequences $( and $) instead of the symbols { and }. The single-line // comments of BCPL, which were not adopted by C, reappeared in C++ and later in C99.
It is reputedly the language in which the original “hello world” program was written. The first MUD was also written in BCPL.
Several operating systems were written partially or wholly in BCPL (for example, TRIPOS or Amiga Kickstart). BCPL was also the initial language used in the seminal Xerox PARC Alto project, the first modern personal computer; among many other influential projects, the ground-breaking Bravo document preparation system was written in BCPL.
By 1970, implementations existed for the Honeywell 635 and 645, the IBM 360, the TX-2, the CDC 6400, the Univac 1108, the PDP-9, the KDF 9 and the Atlas 2. In 1979, implementations of BCPL existed for at least 25 architectures; the language gradually fell out of favor as C became popular on non-Unix systems.
Martin Richards maintains a modern version of BCPL on his website, last updated in 2018. This can be set up to run on various systems including Linux, FreeBSD, Mac OS X and Raspberry Pi. The latest distribution includes Graphics and Sound libraries and there is a comprehensive manual in PDF format. He continues to program in it, including for his research on musical automated score following.
The field of geographic information systems (GIS) started in the 1960s as computers and early concepts of quantitative and computational geography emerged. Roger Tomlinson’s pioneering work to initiate, plan, and develop the Canada Geographic Information System resulted in the first computerized GIS in the world, in 1963. The Canadian government had commissioned Tomlinson to create a manageable inventory of its natural resources. He envisioned using computers to merge natural resource data from all provinces. Tomlinson created the design for automated computing to store and process large amounts of data.
Today GIS gives people the ability to create their own digital map layers to help solve real-world problems. GIS has also evolved into a means for data sharing and collaboration, inspiring a vision that is now rapidly becoming a reality—a continuous, overlapping, and interoperable GIS database of the world, about virtually all subjects. Today, hundreds of thousands of organizations are sharing their work and creating billions of maps every day to tell stories and reveal patterns, trends, and relationships about everything.
With its movement to web and cloud computing, and integration with real-time information via the Internet of Things, GIS has become a platform relevant to almost every human endeavor—a nervous system of the planet. As such the GIS role in cybersecurity is well established and continues to expand as more businesses discover the value of geospatial problem-solving for stopping an evolving array of dangers. Geographic information science offers resources that can help organizations analyze potentially compromised systems and develop stronger defenses.
Systems detect more infections with every passing second around the world. GIS helps us to understand the scale of this problem and detect meaningful trends. Mapping cyberattacks in real time reveals just how common such incidents are and how important it is for organizations to have updated countermeasures in place.
Fortunately, spatial information also helps more directly, allowing security experts to discover unauthorized activity early. To minimize the consequences of a data breach or malware attack, stakeholders need to communicate clearly and coordinate an immediate response. GIS can provide clear visualizations of the systems involved in an incident and promote situational awareness across multiple departments.
An Esri white paper showed how organizations can map out the connections between devices and coordinate their responses to intrusions. In this example, cyberspace is visualized in five layers:
The social/persona layer, including all the employees using a network
The device layer of those individuals’ computers and phones
The logical network layer showing the connections between devices
The physical network layer displaying the underlying infrastructure
The geographic layer revealing the physical locations of all the relevant devices and systems
A detailed perspective on the flow of data through an organization’s network leads to actionable intelligence about any disruptions or device failures that may interfere with operations. Spatial information ties an incident to specific places, allowing experts to judge whether the issue stems from an intentional attempt to compromise the system and assess the effects. Maps can then guide cybersecurity and IT personnel as they set priorities and decisively head off the intrusion.
In our globally connected world, cybersecurity is crucial to keep essential infrastructure functioning properly. For example, a 2018 report from the U.S. Department of Energy noted that even as electrical power systems become more reliant on connections to the Internet, the safeguards at many energy companies have not kept pace with cyber threats. The DOE warned that, without proactive steps to address vulnerabilities in the power grid, compromised systems could prove disastrous for communities.
Cyberattacks on energy providers may take various forms, such as sending inaccurate information about the demand for power in particular areas. Systems responding to these false estimates of electricity use might cause imbalances and power outages. Fortunately, GIS can help to address this vulnerability.
Detection software uses GIS mapping to monitor the distribution of energy, giving energy companies greater visibility into operations throughout the power grid. Meanwhile, security detection algorithms can spot issues in the distribution load that might indicate that operators are receiving deceptive information. If any anomalies show up, energy providers can evaluate whether they are the result of a hack and respond accordingly.
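As a toy illustration of this kind of check, the following Python sketch flags grid locations whose reported demand deviates sharply from recent history; the sites, readings, and threshold are invented.

# Toy sketch: flag grid locations whose reported demand deviates sharply from
# their recent history. Data, locations, and the threshold are invented.
import statistics

history = {                      # past demand readings (MW) per substation
    "substation_A": [102, 98, 101, 99, 100],
    "substation_B": [250, 255, 248, 252, 251],
}
latest = {"substation_A": 100, "substation_B": 410}   # newest reported values

for site, readings in history.items():
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings) or 1.0
    z = (latest[site] - mean) / stdev
    if abs(z) > 3:               # a large deviation is worth a closer look
        print(f"{site}: reported load {latest[site]} MW looks anomalous (z = {z:.1f})")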
A huge wealth of spatial information, like many of the findings gathered by NASA probes, is readily available to the public and researchers. However, some organizations retain spatial data that is proprietary or must be kept confidential due to security or privacy concerns. For example, geographic details may compromise the privacy of individuals who participate in healthcare or social science studies.
In these cases, cybersecurity professionals must implement a layer of security that prevents unauthorized access to geospatial information and metadata. Effective access control mechanisms (a small illustrative check is sketched after this list) may include:
Clearly defined policy specifications for who can use geospatial features;
Identity management systems to check the credentials of users;
Data authenticity verification.
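As a small illustration of the first two mechanisms, the sketch below shows a deny-by-default policy check for a geospatial layer; the layer names, roles, and rules are invented.

# Illustrative sketch only: a policy check deciding whether a user may read a
# geospatial layer at a given spatial resolution. All names and rules are invented.
POLICY = {
    "health_survey_locations": {"allowed_roles": {"epidemiologist"}, "min_cell_size_m": 1000},
    "public_landcover":        {"allowed_roles": {"analyst", "public"}, "min_cell_size_m": 10},
}

def can_access(user_roles, layer, cell_size_m):
    rule = POLICY.get(layer)
    if rule is None:
        return False                               # undefined layer: deny by default
    if not user_roles & rule["allowed_roles"]:
        return False                               # identity/role check failed
    return cell_size_m >= rule["min_cell_size_m"]  # only aggregated (coarse) data allowed

print(can_access({"epidemiologist"}, "health_survey_locations", 1000))  # True
print(can_access({"analyst"}, "health_survey_locations", 1000))         # False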
As our world faces problems from expanding population, loss of nature, and pollution, GIS will play an increasingly important role in how we understand and address these issues and provide a means for communicating solutions using the common language of mapping.
Photoshop is arguably the most widely used, most popular and most powerful photo-editing software in the world and although many of today's Photoshop users probably can't imagine a world without the application, it's important to remember that Photoshop has only been around for 34 years.
Today, Photoshop is an extremely powerful piece of software but it hasn't always been this way. If you rewind 34 years, Photoshop didn't exist at all and even when the application was initially created, it was a far-cry from the hugely powerful application that we know and love today.
From its humble beginnings in 1987 as a small program, written on a Macintosh Plus, for displaying grayscale images on a monochrome screen, to the release of Adobe Photoshop CC in June 2013, Photoshop is now used by amateurs and professionals alike for everything from simple image retouching to website design. The software has truly changed the world of photography and design, but we mustn't forget that it took 34 years of constant improvements to get to this stage.
Adobe Photoshop CC released in 2013 marked a new era for the application as it moved away from the Creative Suite series and began a new journey under the Creative Cloud (CC) name. With previous versions of Photoshop (including the Creative Suite series), users would pay a fixed fee for the software which would then be installed on their local machines. With Photoshop CC, Adobe introduced a subscription-based pricing model, allowing users to access the software without the initial hefty cost. Photoshop CC also allowed users to sync their Photoshop preferences to the cloud which was a first at the time.
The CC era also brought an increase in security vulnerabilities compared with prior years. The total number of vulnerabilities identified and patched from 1999 to 2019 stands at 3313 according to CVE data, and most of them were identified between 2013 and 2019 (2496 of them, to be more precise).
Because of this, in recent years Adobe has made architectural changes to its cloud architecture and software stack to better protect users from this increasing number of vulnerabilities. As per Adobe, the current Cloud stack architecture is built with security considerations at its core and uses industry-standard software security methodologies for both development and management of the Creative Cloud.
Creative Cloud leverages multi-tenant storage in which customer content is processed by an Amazon Elastic Compute Cloud (EC2) instance and stored on a combination of Amazon Simple Storage Service (S3) buckets and through a MongoDB instance on an Amazon Elastic Block Store (EBS).
Creative Cloud is deployed regionally and each region contains two VPC (Virtual Private Cloud) instances, a Creative Cloud VPC and a Shared Cloud VPC. Both VPCs are logically isolated networks within an AWS region. The Creative Cloud VPC hosts the websites and APIs where end-users interact with the solution, and the Shared Cloud VPC hosts the services that perform common tasks across Creative Cloud, such as storage.
In practice, availability zones exist as isolated locations within a region. However, from a network architecture perspective, they reside in a VPC. Physically, each availability zone has multiple different redundant data centers, enabling all data to be replicated across all data centers as well as within multiple servers within each data center. This redundant backup ensures that Creative Cloud customer data is safe from disasters, floods, power failures, etc.
Everything within each VPC is locked down by an AWS Security Group. A security group is another layer of security that allows Adobe to control the inbound and outbound traffic through the VPC, much like a virtual firewall.
The actual code within the VPC is housed in Amazon EC2 instances in specific subnets (or ranges of IP addresses). While public subnets are connected to the internet, private subnets are not and are only accessible through authenticated connections originating from the public subnet. This prevents an unauthorized user from connecting directly to the Creative Cloud storage service, for example, and allows Adobe to make sure that only authorized users can perform certain actions, such as storing UGC.
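As a generic illustration of the security-group idea (not Adobe's actual configuration), the following Python/boto3 sketch creates a group that admits only inbound HTTPS; the VPC id and group name are placeholders, and valid AWS credentials are assumed.

# Generic illustration (not Adobe's configuration): a security group acting like a
# virtual firewall that allows only HTTPS in from the internet. Assumes boto3 and
# valid AWS credentials; the VPC id below is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

sg = ec2.create_security_group(
    GroupName="public-web-sg",                 # hypothetical name
    Description="Allow inbound HTTPS only",
    VpcId="vpc-0123456789abcdef0",             # placeholder VPC id
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
    }],
)
# Everything not explicitly allowed by the group's rules stays blocked.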
UGC is stored in Amazon S3 buckets and the metadata about the content is stored in Amazon EBS via MongoDB. The UGC is then protected by Identity and Access Management (IAM) roles within that AWS region. Implementing per-user content security, IAM roles ensure that any content an end-user uploads to the cloud is considered private and is only accessible by that user, unless they take explicit steps to share it.
Content and assets stored in S3 are encrypted with AES 256-bit symmetric security keys that are unique to each customer and their claimed domain. The dedicated keys are managed by the Amazon Key Management Service (KMS), which provides additional layers of control and security for key management. Adobe automatically rotates the key on an annual basis. If necessary, IT administrators can revoke their key via the Admin Console, which will render all data encrypted with that key inaccessible to end-users.
Metadata and support assets are stored in EBS using AES 256-bit encryption and Federal Information Processing Standards (FIPS) 140-2 approved cryptographic algorithms, both of which are consistent with National Institute of Standards and Technology (NIST) 800-57 recommendations.
An example of the user-generated content data flow in the Adobe Creative Cloud can be seen below, followed by a small code sketch of the encryption steps:
1. End-users store the content they create in a Creative Cloud folder on their system’s hard drive. If the user chooses to upload this content to cloud storage in Creative Cloud, a background process uploads the UGC to the cloud. All UGC is encrypted in-transit using AES 128-bit GCM over TLS 1.2.
2. The Adobe Identity Service validates the user and their entitlements.
3. Adobe Creative Cloud for enterprise scans the content for viruses and sends the content to the AWS Key Management System (KMS) for encryption.
4. AWS KMS encrypts the user’s content with the customer-managed encryption key.
5. Creative Cloud stores the encrypted content in AES 256-bit Amazon S3 storage. In order to update or retrieve the content, the user must use Creative Cloud; there are no external links to the content.
6. Metadata about the content is stored in MongoDB on an Amazon EBS using AES 256-bit encryption.
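One common way to realize steps 4 and 5 is envelope encryption, sketched generically below in Python; this is not Adobe's implementation, a locally generated key stands in for the KMS-managed customer key, and the third-party cryptography package is assumed.

# Generic envelope-encryption sketch (not Adobe's implementation): a per-customer
# "master" key stands in for the KMS-managed key, and content is encrypted with
# AES-256-GCM. Requires the 'cryptography' package.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

customer_master_key = AESGCM.generate_key(bit_length=256)   # managed by KMS in the real flow

def encrypt_content(plaintext: bytes):
    data_key = AESGCM.generate_key(bit_length=256)           # fresh key for this object
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(nonce, plaintext, None)
    # Wrap (encrypt) the data key with the customer's master key before storing it.
    wrap_nonce = os.urandom(12)
    wrapped_key = AESGCM(customer_master_key).encrypt(wrap_nonce, data_key, None)
    return {"ciphertext": ciphertext, "nonce": nonce,
            "wrapped_key": wrapped_key, "wrap_nonce": wrap_nonce}

def decrypt_content(record):
    data_key = AESGCM(customer_master_key).decrypt(record["wrap_nonce"], record["wrapped_key"], None)
    return AESGCM(data_key).decrypt(record["nonce"], record["ciphertext"], None)

stored = encrypt_content(b"user generated content")
print(decrypt_content(stored))    # b'user generated content'
# Revoking the master key (as the Admin Console can) makes every wrapped data key unrecoverable.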
User generated content is redundantly stored in multiple data centers within a region and on multiple devices in each data center. All network traffic undergoes systematic data verification and checksum calculations to prevent corruption and ensure integrity. Finally, stored content is synchronously and automatically replicated to other data center facilities within the customer’s region so that data integrity is maintained even in the event of data loss in two locations.
UGC created using Creative Cloud can be stored in the US (US-East VA), Europe (EMEA-West IE), or Japan (APAC-West JP) regions. An end-user’s regional data store is determined when the user is created in the Adobe Admin Console and remains consistent throughout the user’s lifetime. In other words, content created by a user account in the US will always be stored in the US data center, regardless of where the user is located when they upload the content.
As mentioned above, Adobe encrypts all UGC stored in Creative Cloud at rest. For an additional layer of control and security, IT administrators can enable a dedicated encryption key for some or all the domains in the organization. Content is then encrypted using that dedicated encryption key which, if required, can be revoked from the Admin Console. Revoking the key will render all content encrypted with that key inaccessible to all end-users and will prevent both content upload and download until the encryption key is re-enabled.
The key service employed utilizes FIPS 140-2 validated hardware security modules (HSMs) to protect key integrity and confidentiality. Plain-text keys are never written to disk and are only used in the HSM volatile memory on the server in the regional data store.
In this article, I will talk briefly about how the understanding of language has been transformed by statistical approaches and statistical learning. In particular, the focus will be on the language of translations, without going too deep into the history of this field. It is obvious that translations have been an important part of human interactions ever since the first phrase was uttered, therefore, understanding all the aspects of this process is essential in producing better translations and facilitating communication.
Corpus-based translation studies emerged early in the ’80s when researchers observed that translations have a specific distributional pattern of word occurrences that is significantly different from the one observed in original texts written in the same language. Translations were nicknamed “the third code” - a language variety/lect that is clearly different from the source language and especially different from texts written in original form. The term translationese was coined from translation + -ese (as in Chinese, Portuguese, Legalese) to highlight that a translation is a type of language variety that emerges from the contact of the source and target languages. These differences appear regardless of the proficiency of the translator or the quality of translation, but rather they are visible as a statistical phenomenon in which certain patterns of the source language are transferred into the target texts.
Second-language acquisition research has been well-acquainted with the terms “language transfer” or “interlanguage”, denoting a similar process in which native language features of a speaker are transferred during the language acquisition phase. In 1979, Gideon Toury proposed a theory of translation based exactly on language acquisition principles, adding that some translation phenomena are linguistic universals and appear with a strong tendency regardless of the source and target languages. Unlike language learners, professional translators always translate into their mother tongue, therefore ensuring that the output is as close as possible to the actual target language norms.
Together with the development of computational approaches, more and more research has shifted towards empirical hypothesis testing at the corpus level. More exactly, corpus studies became central to the development of translation studies and linguistic hypotheses, revealing several distributional phenomena that characterize translated texts across corpora, genres, and languages. Among the most important phenomena, we count 1) simplification - the tendency of translators to make do with fewer words or to produce language that is closer to conventional grammaticality, and 2) explicitation - described as a rise in the level of cohesive explicitness in the target-language texts.
Corpus linguistics changed drastically over the years, as John McHardy Sinclair stated in 1991: "Thirty years ago when this research started it was considered impossible to process texts of several million words in length. Twenty years ago it was considered marginally possible but lunatic. Ten years ago it was considered quite possible but still lunatic. Today it is very popular."
The methodology employed to study translation-related phenomena also changed over the years: in the beginning, word counts combined with basic statistical modeling dominated the analysis, and the majority of interpretations were drawn from specific examples and from thorough manual investigation. With the development of statistical learning, new methods such as text classification, regression analysis, neural networks, and generic tools of artificial intelligence have been employed in the analysis of the differences between translations and originals. Processing billion-word corpora is a feasible task nowadays, and so (computational) linguists employ AI tools such as BERT or language models fine-tuned on different language varieties to identify whether the data presents statistically learnable differences. Besides being useful for a handful of researchers who aim to contribute to our understanding of language phenomena, what other use cases may we find for such investigations, you may ask.
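As a toy example of what "statistically learnable differences" means in practice, the sketch below trains a linear classifier to separate translated from original sentences; the sentences and labels are invented placeholders, and a real study would use a large corpus.

# Toy sketch of "statistically learnable differences": train a linear classifier
# to separate translated from original sentences. The sentences and labels here
# are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "It is in fact the case that the decision was made by the committee.",
    "The committee decided.",
    "He said that he would come, and that he would bring the documents with him.",
    "He'll come and bring the documents.",
]
labels = [1, 0, 1, 0]   # 1 = translation, 0 = original (toy labels)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["It is the case that the report was submitted by the author."]))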
Machine translation is one of the most important fields where translationese continues to have a significant impact. At some point, having more data does not make an AI model smarter, but having the right kind of data can make a model learn faster, better, and closer to the processes activated during human-generated translation.
We have reached a point in which linguists may rely on statistical learning to understand whether, why, and how two texts belong to different linguistic categories. This is also one of the reasons why it is of utmost importance to build explainable machine learning in order for the models to justify their decisions and to draw the correct interpretation of the results. The way to build explainable and unbiased AI is still an open problem of high interest, but we will leave that for another time.
It is time for the third foray in our short history of AI, dedicated to presenting the top achievements in the field. It is not easy to make a meaningful selection when we consider what makes an achievement great. In some cases it is the innovative character of the algorithms, in others the direct benefits brought to people are most appreciated, and in others the psychological impact predominates. The following examples are categorized according to these three criteria, which, simplified, are: innovation, benefits, and impact.
1. INNOVATION
The area of innovative contributions is closely linked to the creation of the concept of a neural network. Although some efforts to create mathematical models date back to the 1930s, it was not until 1943 that Warren McCulloch and Walter Pitts created the first recognized computational model for neural networks, namely threshold logic, based on algorithms that mimic the functionality of a biological neuron.
Also in the 1940s, the famous Canadian psychologist Donald Hebb proposed a hypothesis of learning built on the mechanism of neural plasticity, called Hebbian learning, which is widely considered the ancestor of unsupervised learning models.
Another remarkable contribution appeared in 1958, when Frank Rosenblatt from Cornell University created the perceptron, the first algorithm for supervised learning of a linear classifier. By combining a set of weights with the feature vector, the algorithm found immediate applicability in areas such as document classification and, more generally, in problems with a large set of variables.
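A minimal sketch in the spirit of the perceptron learning rule (not Rosenblatt's original implementation) might look like this, learning the logical AND of two inputs:

# A minimal perceptron in the spirit of Rosenblatt's 1958 algorithm: a linear
# classifier whose weights are nudged whenever it misclassifies an example.
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            update = lr * (target - pred)     # zero when the prediction is correct
            w += update * xi
            b += update
    return w, b

# Toy, linearly separable data: learn the logical AND of two inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([(1 if x @ w + b > 0 else 0) for x in X])   # -> [0, 0, 0, 1]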
The earliest functional networks with multiple layers, derived from a family of inductive algorithms called the Group Method of Data Handling, were created by Alexey G. Ivakhnenko from the Glushkov Institute of Cybernetics in 1965. These algorithms proved extremely useful in areas such as data mining, knowledge discovery, prediction, complex systems modeling, optimization, and pattern recognition.
A widely used algorithm for training feedforward neural networks and other artificial neural networks is backpropagation. Its basis came from control theory work by Henry J. Kelley in 1960; it was derived by several other researchers in the early 1960s and implemented to run on computers by Seppo Linnainmaa as the subject of his master's thesis (a general method for automatic differentiation of discrete connected networks of nested differentiable functions) at the University of Helsinki in 1970.
Paul Werbos's contribution to backpropagation in 1975 helped to effectively solve the problem reported by Marvin Minsky and Seymour Papert in 1969: that single-layer networks are incapable of representing the exclusive-or function. They also highlighted the fact that computers lacked the power needed to process large neural networks, a problem mitigated in the 1980s by the development of metal-oxide-semiconductor (MOS) technology and very-large-scale integration (VLSI), in the form of complementary MOS (CMOS).
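To make the idea concrete, here is a tiny, illustrative backpropagation example: a two-layer network trained by gradient descent on the exclusive-or function that a single-layer network cannot represent. The hyperparameters are arbitrary.

# Tiny backpropagation sketch: a two-layer network learning XOR, the very
# function a single-layer perceptron cannot represent.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the error derivative layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent updates
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())   # should approach [0, 1, 1, 0]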
The simulation of massive neural networks received a boost in 1986 with the introduction of parallel distributed processing by American psychologists David Everett Rumelhart and James McClelland. The basis of convolutional neural networks (CNNs) was laid in 1979, with Kunihiko Fukushima's publication of his work on the neocognitron, a type of artificial neural network (ANN).
Mathematical models for deep learning have been developed through joint or independent contributions by scientists such as Geoffrey Everest Hinton, Yoshua Bengio, and Yann LeCun since 1986, and this work continues to this day. Of their most important achievements we mention here only a few:
• Geoffrey Everest Hinton co-invented the Boltzmann machine with David Ackley and Terry Sejnowski, and made contributions to distributed representations, time-delay neural networks, mixtures of experts, the Helmholtz machine, the Product of Experts, and capsule neural networks;
• Yoshua Bengio combined neural networks with probabilistic models of sequences, an idea that was incorporated into a system used by AT&T/NCR for reading handwritten checks in the 1990s; he also introduced high-dimensional word embeddings as a representation of word meaning, with a huge impact on natural language processing tasks including language translation, question answering, and visual question answering;
• In the 1980s, Yann LeCun developed convolutional neural networks, an underlying principle that made deep learning more efficient; in the late 1980s he trained the first CNN system on images containing handwritten digits; LeCun also proposed one of the first implementable versions of the backpropagation algorithm, and is credited with developing a more expansive vision of neural networks as a computational model for a broad spectrum of tasks, introducing in his early work many concepts that are now standard in AI.
2. BENEFITS
Medicine
Regarding the beneficial achievements of mankind, a leading place is occupied by the application of AI in medicine.
Zebra Medical announced a new deep learning algorithm on its medical image analysis platform. The company thus has an algorithm for identifying vertebral fractures as well, in addition to existing algorithms that can detect bone density, fatty liver, and coronary artery calcification.
The platform provides support for diagnosis by analyzing a variety of medical images, allowing radiology services to reduce response times and increase their accuracy. The goal is to detect diseases at the stage when their signs are not yet obvious in imaging. A common example is vertebral compression fractures, of which less than a third are "effectively diagnosed," the company said in a statement. The Zebra VCF algorithm uses deep learning to highlight the difference between vertebral compression fractures and other conditions, such as degeneration of the vertebral plate or bone spurs.
A Swiss company, Sophia Genetics, has created an AI technology for reading and aggregating the genetic code of DNA to help diagnose and predict genetic diseases, such as cancer. Named Sophia, the system uses AI to combine genome data with analysis, medical knowledge databases, and expert suggestions to create the best diagnosis to help healthcare professionals customize the patient treatment. At the moment, the system collects data from 170 major hospitals globally, continuously improving its capacity for early detection and diagnosis of genomic diseases.
Autonomous driving
Another area that could benefit enormously from AI would be transportation, by introducing autonomous driving capabilities.
The Google (now Waymo) self-driving car system has been officially recognized as a "driver" in the US since 2016, making it a pioneer of autonomous driving systems. This recognition could be the lever needed to change legislation around cars that do not require a human driver, so that they can meet the safety standards for driving on public roads without conventional driving mechanisms, namely a steering wheel and pedals. Thus, "the driver, in the context of the vehicle design described by Google, is the automatic driving system itself and not any of the occupants of the vehicle," the government agency explained in a public letter. The agency acknowledged not only this change but also the fact that no human occupant of the Google autonomous vehicle could meet the common definition of "human driver" because of the design of the car, even if they wanted to. The recognition of the Google autonomous computer as a driver could be the legal basis for establishing liability in the event of car accidents, in a context in which the US Department of Transportation has unveiled a plan to reduce accidents on public roads by increasing the number of autonomous vehicles.
Waymo said recently (in 2020) that driverless travel will first be offered to members of Waymo One in Arizona, after which it will gradually expand to users who register via a smartphone app.
Tesla boss Elon Musk, as Waymo's direct rival, responded by saying that while Waymo's autonomous driving technology is "impressive", Tesla's technology has a wider range of applications.
Moreover, Musk also ventured a comparative assessment of the two technologies. While Waymo's technology uses a suite of sensors - including LiDAR - mounted on top of the cars, Tesla's technology uses a sensor system consisting of 8 video cameras, radar, and sonar; Musk stated that "anyone relying on the laser-based sensors is doomed to failure because of their expense and drain on power". According to the estimates of many specialists in the field, the next few years will be decisive for the gradual introduction of autonomous driving services, so we will not have to wait long to find out which technology proves superior.
Agriculture
We consider agriculture a good candidate for benefits, especially when we talk about less developed areas threatened by famine. A team of researchers from Pennsylvania State University and the École Polytechnique Fédérale de Lausanne, Switzerland, uses deep learning algorithms to detect crop diseases before they spread. In poor regions, up to 80% of agricultural production comes from small farmers, and they are the most exposed to the devastating effects of crop diseases, which can lead to famine.
The team has developed a program capable of running efficiently on a smartphone. They trained the algorithm on huge data sets - over 50,000 images - collected using PlantVillage, an online open-access archive dedicated to images of plant diseases. As a result, the algorithm identifies 26 diseases in 14 plant species with an accuracy of 99.35%, and to benefit from this service you only need a smartphone.
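For illustration only (this is not the team's published model), a transfer-learning sketch along these lines could look as follows, assuming PyTorch/torchvision and a local folder of labeled leaf images:

# Illustrative sketch only: fine-tune a small pretrained CNN to classify leaf
# photos into disease classes. Assumes PyTorch and torchvision, and an image
# folder laid out as data/<class_name>/<image>.jpg (hypothetical path).
import torch
from torch import nn
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("data", transform=preprocess)   # hypothetical local path
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))  # e.g. 26 disease classes

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                      # a short demonstration run
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()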
Earthquake prediction
Another area of great interest for the protection of human lives is earthquake prediction, so it was to be expected that AI-based approaches would appear here as well. The team of Phoebe DeVries, a seismologist at Harvard University in Cambridge, Massachusetts, conducted a first experiment with remarkable results in aftershock forecasting, analyzing more than 131,000 mainshock-aftershock pairs, including some of the most terrifying earthquakes in the world, such as the devastating magnitude-9.1 event that hit Japan in March 2011. In general, the magnitude of an aftershock can be deduced fairly well, but no satisfactory results had been obtained for its location.
The researchers used the data to train a neural network that modeled a grid of cells surrounding each mainshock. The data indicated the cell in which the earthquake occurred and how the shock changed the stress at the center of each cell, and the question was: what is the probability that each grid cell will generate one or more aftershocks? The network treated each cell in isolation, in contrast to the traditional method, which propagates stress sequentially through the rocks. The neural network predicted aftershock locations with significantly greater accuracy than the traditional method, and at the same time it highlighted other effects that researchers did not normally take into account.
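A toy sketch of that setup (not the authors' network or data) could look like this: each row is one grid cell described by stress-change features at its center, and the model outputs the probability that the cell hosts one or more aftershocks.

# Toy sketch of the grid-cell setup described above (not the authors' model):
# features and labels are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n_cells = 2000
stress_features = rng.normal(size=(n_cells, 6))   # e.g. components of the stress-change tensor
# Synthetic rule: cells with larger overall stress change are more likely to rupture.
p_true = 1 / (1 + np.exp(-(np.abs(stress_features).sum(axis=1) - 4)))
had_aftershock = rng.random(n_cells) < p_true

model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=500)
model.fit(stress_features, had_aftershock)

new_cells = rng.normal(size=(3, 6))
print(model.predict_proba(new_cells)[:, 1])   # probability of one or more aftershocks per cell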
Cybersecurity
The area of benefits could not leave out cybersecurity, an area with a major impact on modern life in terms of online fraud. Recent research by the Association of Certified Fraud Examiners (ACFE), KPMG, PwC, and others highlights how organized crime is modernizing its attack vectors as well as their magnitude and speed. Sadly, in most cases this modernization involves the use of machine learning to commit fraud that is undetectable by legacy cyber protection systems (systems based on inefficient rules and predictive models). Thus, detecting new generations of online fraud requires the same machine learning mechanisms, applied so that defenders can fight with equal weapons and keep up with the complexity and extent of today's fraud.
The modern strategy to combat online fraud focuses on the following three basic aspects:
• actively use supervised machine learning to train models so they can detect fraud attempts more quickly than legacy systems;
• combine supervised and unsupervised machine learning into a single risk score for fraud prevention, because anomalies are easier to detect in emerging data (a minimal sketch of such a blended score follows this list);
• take advantage of wide-ranging networks of transaction data to tune and scale supervised machine learning algorithms, thus improving risk scores for fraud prevention.
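As an illustration of the second point, here is a minimal sketch that blends a supervised classifier trained on labelled fraud with an unsupervised anomaly detector into one risk score (scikit-learn); the synthetic data and the 0.7/0.3 blend weights are arbitrary assumptions, not a production recipe.

```python
# Minimal sketch: blend a supervised fraud classifier with an unsupervised
# anomaly detector into a single risk score. Synthetic data, arbitrary weights.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))             # transaction features (synthetic)
y = (rng.random(5000) < 0.02).astype(int)   # ~2% of transactions labelled as fraud (synthetic)

supervised = GradientBoostingClassifier().fit(X, y)      # learns known fraud patterns
unsupervised = IsolationForest(random_state=0).fit(X)    # flags unusual transactions

def risk_score(transactions: np.ndarray) -> np.ndarray:
    p_fraud = supervised.predict_proba(transactions)[:, 1]
    anomaly = -unsupervised.score_samples(transactions)            # higher = more anomalous
    anomaly = (anomaly - anomaly.min()) / (np.ptp(anomaly) + 1e-9) # rescale to [0, 1]
    return 0.7 * p_fraud + 0.3 * anomaly                           # single blended score

print(risk_score(X[:5]))
```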
3. IMPACT
Games
In terms of public impact, every selection includes the Deep Blue phenomenon: the first official victory of a "machine" against a reigning world champion under regular tournament conditions. After a first match won comfortably by chess grandmaster Garry Kasparov in 1996 (with a score of 4 - 2), the IBM team returned a year later with an upgraded version of Deep Blue and won the rematch (3½ - 2½). Although many specialists of the time claimed that artificial intelligence had finally caught up with humans, Deep Blue barely met the requirements to be considered an intelligent machine. It used custom VLSI chips to run a brute-force search algorithm (minimax with alpha-beta pruning), which is at its core a game-tree search, not a neural network. Deep Blue's move decisions came down to finding good values for a wide set of evaluation parameters, and for this thousands of games played by professionals (grandmasters) were analyzed - which meant studying thousands of openings and endgames and tens of thousands of positions. Given that computer chess programs were still at an early stage, the match was more of a race to exploit the other side's weaknesses: the machine knew Kasparov's style in depth (the IBM team was also allowed to adjust the parameters between games), while Kasparov relied on Deep Blue's greed for material advantage and set traps accordingly. Long story short, there were enough arguments for those inclined toward conspiracy theories (Kasparov included) to remain suspicious about the fairness of the match, especially since IBM refused a rematch and dismantled the machine.
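For reference, this is the general shape of the alpha-beta pruning search that Deep Blue's approach was built around, reduced to a minimal sketch; `children` and `evaluate` stand in for a real move generator and a hand-tuned evaluation function.

```python
# Minimal sketch of minimax search with alpha-beta pruning. `children` and
# `evaluate` are placeholders for a real move generator and evaluation function.
def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    moves = children(state)
    if depth == 0 or not moves:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for child in moves:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:          # cut-off: the opponent will never allow this line
                break
        return value
    value = float("inf")
    for child in moves:
        value = min(value, alphabeta(child, depth - 1, alpha, beta, True, children, evaluate))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# toy game tree: a state is either a list of child states or a leaf score
toy_tree = [[3, 5], [6, [9, 1]], [1, 2]]
children = lambda s: s if isinstance(s, list) else []
evaluate = lambda s: s                 # leaves are already scores
print(alphabeta(toy_tree, 4, float("-inf"), float("inf"), True, children, evaluate))  # -> 6
```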
A completely different story unfolded in 2016, when no skepticism was possible following the crushing victory of AlphaGo over the Go world champion Lee Sedol. A quite astonishing piece of distributed software, supported by a team of more than 100 Google DeepMind scientists, AlphaGo relies entirely on the full AI stack (neural networks at the bottom, machine learning next, and deep learning on top). Running on 48 TPUs distributed over several machines, AlphaGo's decision-making approach differs greatly from previous efforts, in the sense that its evaluation is not hand-built from a library of thousands of master games but results largely from the experience of playing against an identical instance of AlphaGo. Successive versions optimized both the hardware and the time allowed for play, with AlphaGo continuously raising its level (the current version, AlphaZero, which runs on 4 TPUs on a single machine, is an optimized successor of AlphaGo Zero, itself a successor of AlphaGo Lee, the version that defeated Lee Sedol). This decision-making approach, freed from the need for human hard-coding, was the only one that could pay off, given that Go is far more complex than chess. The number of possible positions in Go exceeds the estimated number of atoms in the universe (after the first two moves of a chess game there are about 400 possible next moves; in Go there are close to 130,000), so it is clear that researchers cannot rely on traditional brute-force search, in which a program maps out the breadth of possible game states in a game tree - there are simply too many possible moves.
Although many experts (including Elon Musk) initially estimated that, due to the complexity of Go, it would take at least 10 years for a machine to beat a world champion, it happened at the first opportunity (unlike with Deep Blue and Garry Kasparov). The fact that AlphaGo is based on pure learning mechanisms rather than on human examples suggests a huge potential for applying such algorithms in other areas important to humanity. Three years after the memorable match, Lee announced his retirement from professional play, arguing that he no longer felt like a top competitor in a Go world so authoritatively dominated by AI (Lee referred to the AlphaGo machine that defeated him as "an entity that cannot be defeated").
Another area of great impact, in our view, is "sentiment analysis" (or Emotion AI).
Sentiment analysis
The first achievement that comes to mind is Microsoft's Emotion API, an emotion-detection-from-photos platform launched as a service in 2016. Part of a larger company project, Project Oxford, it uses "world-class machine learning" to interpret people's feelings as a cognitive service. The recognition engine is trained to detect eight emotions and, for each of them, calculates a score for the analyzed image (the emotions are: Anger, Contempt, Disgust, Fear, Happiness, Neutral, Sadness, and Surprise).
Although the platform isn't available for free, Microsoft declares it to be in an experimental state, and it is expected to gradually add other capabilities such as spell checking and speaker recognition, as well as emotion recognition from video.
VocalisHealth pioneers a complementary approach, aiming to detect vocal biomarkers. A biomarker is an indicator that signals the presence of a disease and, most often, its severity. VocalisHealth has built a significant database of vocal biomarkers for many known diseases (COVID-19 included), in the form of voice samples collected through more than 250,000 recordings from more than 50,000 people. Advanced machine learning and deep learning mechanisms are used to analyze new voice samples, integrated into a customized platform for healthcare screening, triage, and continuous remote monitoring of health.
Astronomy
Another area of impact is astronomy, where a team of astronomers and computer scientists from the University of Warwick recently identified 50 new planets using AI techniques, marking a technological breakthrough in astronomy. To do this, they built a machine learning algorithm to analyze old NASA data containing thousands of potential candidates for planet status. The classic method of searching for exoplanets (planets outside our solar system) is to detect dips in the amount of light coming from a star under observation, a sign that a planet has passed between the telescope and that star. But these dips can also be caused by background interference or even camera errors.
The merit of the deep learning algorithm is that, through training, it manages to accurately separate real planets from false positives, and it does so on old, unconfirmed data, yielding this set of newly confirmed planets. The approach is a first, in the sense that such techniques had previously been used in astronomy only to rank candidate planets, never to validate them probabilistically.
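To illustrate the transit method that the classifier builds on, here is a minimal sketch on synthetic data: a periodic ~1% dip in a star's light curve and a naive threshold detector. Real pipelines extract many more diagnostic features before handing candidates to a machine learning model, so everything here is a simplifying assumption.

```python
# Minimal sketch on synthetic data: a light curve with periodic transit dips
# and a naive threshold detector.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0.0, 90.0, 0.02)                 # 90 days sampled every ~30 minutes
flux = 1.0 + rng.normal(0.0, 0.001, t.size)    # normalized brightness with photometric noise
in_transit = (t % 10.0) < 0.15                 # one transit every 10 days, lasting ~3.6 hours
flux[in_transit] -= 0.01                       # ~1% drop while the planet crosses the star

dips = flux < 1.0 - 0.005                      # naive detector: flag anything well below baseline
print(f"{dips.sum()} samples flagged as in-transit; first candidate at day {t[dips][0]:.2f}")
```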
CONCLUSIONS
We believe that the conclusions to be drawn at the end of this short history of AI are multiple, and we encourage you to find them for yourself and share them with us. Perhaps the most important thing to recognize is that, whether it is welcomed or feared, and whether or not it is considered truly competent and useful, AI is making its presence felt in more and more areas, in smaller or larger steps. Despite many current shortcomings, including poor generalization and the need for huge computing power, there are clear advantages: the growing attention given to research into mathematical models, the existence of large datasets for analysis, and the sympathy AI enjoys among the public. We are therefore entitled to believe that AI will be a partner in our lives for a long time to come. It depends only on our choices whether this partner becomes a guarantor of improved quality of life rather than a threat.
To celebrate the 100th anniversary of magnetic recording, IBM announced, in 1998, the world's highest-capacity hard drive for desktop PCs. The drive came with a breakthrough technology called Giant Magnetoresistive (GMR) heads, which enabled the further miniaturization of disk drives. The 2007 Nobel Prize in Physics was awarded to Albert Fert and Peter Grünberg for the discovery of the GMR effect in 1988.
[separator]
It is a little-known fact that the GMR effect can be successfully used to extract digital data from a hard drive as part of a cybersecurity forensic analysis: the data written on a modern high-density hard disk drive can be recovered via magnetic force microscopy of the disks' surface. To this end, a variety of image processing techniques are used to turn the raw images into a readily usable form, and a simulated read channel is then designed to produce an estimate of the raw data corresponding to the magnetization pattern written on the disk.
Hard disk drives (HDDs) have had a remarkable history of growth and development, starting with the IBM 350 disk storage unit in 1956, which had a capacity of 3.75MB and weighed over a ton, to the latest 4TB 3.5 inch form factor drive as of 2011. Clearly, the technology underlying hard drives has changed dramatically in this time frame, and is expected to continue on this path. Despite all the change, the basic concept of storing data as a magnetization pattern on a physical medium which can be retrieved later by using a device that responds as it flies over the pattern is still the principal idea used today.
The main components of a modern hard drive are the platters, the head stack, and the actuator, which together physically implement the storage and retrieval of data. Data are stored as a magnetization pattern on a given side of a platter, and a drive typically contains several platters. The data are written onto and read from a platter via a head, and each platter requires two heads, one for each side. At this basic level, the disk drive appears to be a fairly simple device, but the details of what magnetization pattern to use to represent the data, and of how to accurately, reliably, and quickly read and write it, have been the fruit of countless engineers' labor over the past several decades.
Early hard drives used a method called longitudinal recording, where a sequence of bits is represented by magnetizing a set of grains in one direction or the other, parallel to the recording surface. By 2005, to allow the continuing push for increasing recording density, a method called perpendicular recording began to be used in commercially available drives. As the name suggests, the data is represented by sets of grains magnetized perpendicular to the recording surface. To allow this form of recording, the recording surface itself has to be designed with a soft underlayer that permits a monopole writing element to magnetize the grains in the top layer in the desired manner. As the push for density continues, newer technologies are being considered, such as shingled magnetic recording (SMR) and bit-patterned magnetic recording (BPMR), which take different approaches to representing data using higher-density magnetization patterns.
Once data are written to the drive, the retrieval of the data requires sensing the magnetic pattern written on the drive, and the read head is responsible for the preliminary task of transducing the magnetization pattern into an electrical signal which can be further processed to recover the written data. Along with the changes in recording methods, the read heads necessarily underwent technological changes as well, from traditional ferrite wire-coil heads to the newer magneto-resistive (MR), giant magneto-resistive (GMR), and tunneling magneto-resistive (TMR) heads. All of these, when paired with additional circuitry, produce a voltage signal in response to flying over the magnetization pattern written on the disk, called the readback or playback signal. It is this signal that contains the user data, but in a highly encoded, distorted, and noisy form, and from which the rest of the system must ultimately estimate the recorded data, hopefully with an extremely low chance of making an error.
At the system level abstraction, the hard disk drive, or any storage device for that matter, can be interpreted as a digital communication system, and in particular one that communicates messages from one point in time to another (unfortunately, only forwards), rather than from one point in space to another. Specifically, the preprocessor of user data which includes several layers of encoders and the write head which transduces this encoded data into a magnetization pattern on the platter compose the transmitter. The response of the read head to the magnetization pattern and the thermal noise resulting from the electronics are modeled as the channel. Finally, the receiver is composed of the blocks necessary to first detect the encoded data, and then to decode this data to return the user data. This level of abstraction makes readily available the theories of communication systems, information, and coding for the design of disk drive systems, and has played a key role in allowing the ever increasing densities while still ensuring the user data is preserved as accurately as possible.
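To make the communication-system view concrete, here is a minimal sketch of maximum-likelihood (Viterbi) detection over a toy partial-response channel; the "dicode" target y[k] = x[k] - x[k-1] and the noise level are simplifying assumptions, not the response of a real drive or its actual detector.

```python
# Minimal sketch of Viterbi detection over a toy dicode channel
# y[k] = x[k] - x[k-1] + noise, with NRZ symbols x[k] in {-1, +1}.
import numpy as np

rng = np.random.default_rng(2)
bits = rng.integers(0, 2, 2000)
x = 2 * bits - 1                                   # NRZ symbols in {-1, +1}
x_prev = np.concatenate(([-1], x[:-1]))            # assume the symbol before the block is -1
y = (x - x_prev) + rng.normal(0.0, 0.4, x.size)    # ISI from the previous symbol + noise

symbols = np.array([-1, 1])
BIG = 1e18
metric = np.array([0.0, BIG])                      # start state: previous symbol known to be -1
back = np.zeros((y.size, 2), dtype=int)            # best predecessor state at each step
for k, yk in enumerate(y):
    new_metric = np.full(2, BIG)
    for s_new, a in enumerate(symbols):            # s_new indexes the symbol written at time k
        for s_old, p in enumerate(symbols):        # s_old indexes the symbol written at time k-1
            m = metric[s_old] + (yk - (a - p)) ** 2
            if m < new_metric[s_new]:
                new_metric[s_new] = m
                back[k, s_new] = s_old
    metric = new_metric

s = int(np.argmin(metric))                         # trace back the most likely symbol sequence
detected = np.zeros(y.size, dtype=int)
for k in range(y.size - 1, -1, -1):
    detected[k] = s                                # state index equals the bit value here
    s = back[k, s]
print("bit error rate:", np.mean(detected != bits))
```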
Hard disk drives are nowadays commonplace devices used to provide bulk storage of the ever increasing amounts of data being produced every moment. Magnetic force microscopy is one form of the class of modern microscopy called scanning probe microscopy, which images the minute details of magnetic field intensities on the surface of a sample.
To use this forensic process to obtain the raw data written to disk, the relationship between the images acquired through magnetic force microscopy and the signals involved in the read channel that hard disk drives themselves employ must first be determined. Once this has been done, a system can be developed that takes advantage of this relationship and applies the design principles of read channel engineering.
Data recovery
Providing the service of data recovery from hard disk drives that cannot be read in the normal fashion is a significant industry in which many companies are involved. Generally, these services fall into two camps: personal data recovery and forensic data recovery, where the goal is to recover legal evidence that may have been intentionally deleted by the perpetrator. Personal recovery services exist because hard drives, like any other complex system, have some probability of failing for one reason or another, and when millions of drives are used in a variety of conditions, some inevitably fail. The modes of failure (or intentional destruction) are varied; some allow recovery with a procedure as simple as swapping the drive's printed circuit board (PCB) with that of another drive of a similar model, while others render any form of recovery completely intractable.
The most common and successful methods of data recovery from a failed drive are to replace selected hardware from the failed drive with the same part from a drive of the same model. Note that all of these methods assume that the data recorded on the magnetic surfaces of the disks are completely intact. Examples include replacing the PCB, re-flashing the firmware, replacing the head stack, and moving the disks to another drive. The latter two need to be performed in a clean-room environment, since the disks must be free of even microscopic particles - the flying heights of the heads are usually on the order of nanometers! As the bit density of hard disk drives keeps increasing, each drive is "hyper-tuned" at the factory, where a myriad of parameters are optimized for the particular head and media characteristics of each individual drive. This decreases the effectiveness of part-replacement techniques when a particular drive fails, as these optimized parameters can vary significantly even among drives of the same model and batch.
A more general approach to hard drive data recovery involves using a spin-stand. The individual platters of the drive are mounted on this device, a giant magnetoresistive (GMR) head is flown above the surface, and the response signal is captured. This allows rapid imaging of large portions of the disk's surface, and the resulting images then have to be processed to recover the data written on the disk. Specifically, the readback signal produced by the GMR head, as the disk is spun and the head is moved across the diameter of the disk, is composed into a contiguous rectangular image covering the entire surface. This is then processed to remove intersymbol interference (ISI), from which the data corresponding to the actual magnetization pattern is detected and further processed to ultimately yield user data. Some of the main challenges of this approach are encountered in the data acquisition stage, where imperfect centering of the disk on the spin-stand yields a sinusoidal distortion of the tracks when imaged. This can be combated using proper centering, or track following, where the head position is continuously adjusted to permit accurate imaging of the disk. The precoding in the detected data is inverted, the ECC and RLL coding is decoded, and descrambling is then performed to give the decoded user data. Finally, user files are reconstructed from the user data in the different sectors, based on knowledge of the file systems used on the drive. This process has been demonstrated to be effective in recovering, with high accuracy, a user JPEG image that was written to a 3 GB commercial hard drive from 1997.
Compared to MFM, the spin-stand approach clearly is better for recovering significant amounts of data, as it allows rapid imaging of the entire surface of a disk. However, it is obvious that the data can only be recovered from a disk in spinnable condition. For example, if the disk is bent (even very slightly) or if only a fragment of the disk is available, this would preclude the use of the spin-stand method. Using MFM to image the surface of the disk would still be possible, even in these extreme situations. Once MFM images are acquired, they must still be processed in a similar manner to that described above, but the nature of MFM imaging provides some different challenges.
Data sanitization
At the other end of the spectrum is data sanitization, where the goal is to prevent the recovery of confidential information stored on a hard drive by any means. This is of primary importance to government agencies, but also to private companies that are responsible for keeping their customers' confidential information secure. It should be of significant concern to personal users as well, since when a user decides to replace a hard drive, not properly sanitizing it could result in personal information, for example medical or financial records, being stolen. Before a hard drive containing sensitive information is disposed of, it is necessary to clear this information to prevent others from acquiring it. Performing a simple operating-system-level file deletion does not actually remove the data from the drive; it merely deletes the pointers to the files from the file system. This allows the retrieval of these "deleted" files with relative ease through the operating system itself, using a variety of available software. A more effective level of cleansing is to actually overwrite the portions of the disk that contained the user's files, once or perhaps multiple times. Yet more effective is to use a degausser, which employs strong magnetic fields to randomize the magnetization of the grains on the magnetic medium of each disk. Most effective is physical destruction of the hard drive, for example by disintegrating, pulverizing, or melting it. Generally, the more effective the sanitization method, the more costly it is in both time and money. Hence, in some situations, it is desirable to use the least expensive method that still guarantees that recovery is infeasible.
As mentioned, since operating system "delete" commands only remove file header information from the file system, as opposed to erasing the data from the disk, manually overwriting is a more effective sanitization procedure. The details of which pattern to overwrite with, and how many times, are a somewhat contentious topic. Various procedures are (or were) described by various organizations and individuals, ranging from overwriting once with all zeros to overwriting 35 times with several rounds of random data followed by a slew of specific patterns. More important is the fact that several blocks on the drive might not be logically accessible through the operating system interface if they have been flagged as defective after the drive has been in use for some time. In modern disk drives, this process of removing tracks that have been deemed defective from the logical address space, known as defect mapping, is performed continuously while the drive is in operation. To resolve this issue, an addition to the advanced technology attachment (ATA) protocol called Secure Erase was developed by researchers at CMRR. This protocol overwrites every possible user data record, including those that might have been mapped out after the drive was used for some period. While overwriting is significantly more secure than just deleting, it is still theoretically possible to recover the original data using microscopy or spin-stand techniques. One reason for this is that when tracks are overwritten, it is unlikely that the head will traverse the exact same path that it did the previous time the data was written, and hence some of the original data could be left behind in the guardbands between tracks. However, with modern high-density drives, the guardbands are usually very small compared to the tracks (or non-existent in the case of shingled magnetic recording), making this ever more difficult. Finally, it should be noted that the drive is left in usable condition by the overwriting method.
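As a small illustration of the overwriting approach, here is a sketch that overwrites a file (or, with care, a block device) with one pass of random data; the path and chunk size are illustrative, and for whole drives the ATA Secure Erase command discussed above remains the proper route because it also reaches sectors the drive has remapped.

```python
# Minimal sketch: one pass of random-data overwriting for a file.
import os

def overwrite_once(path: str, chunk_size: int = 1024 * 1024) -> None:
    size = os.path.getsize(path)              # for a block device, query the device size instead
    with open(path, "r+b", buffering=0) as f:
        written = 0
        while written < size:
            n = min(chunk_size, size - written)
            f.write(os.urandom(n))            # unpredictable bytes, so no fixed pattern remains
            written += n
        os.fsync(f.fileno())                  # force the data out of the OS cache onto the medium

# overwrite_once("/tmp/old-secrets.bin")      # hypothetical path
```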
The next level of sanitization is degaussing, which uses an apparatus known as a degausser to randomize the polarity of the magnetic grains on the magnetic media of the hard drive. There are three main types of degaussers: coil, capacitive, and permanent magnet. The first two use electromagnets to produce either a continuously strong, rapidly varying magnetic field or an instantaneous but extremely strong magnetic field pulse to randomly set the magnetization of individual domains in a hard drive's media. The last uses a permanent magnet that can produce a very strong field, depending on the size of the magnet, but the field is constant rather than time-varying. Depending on the coercivity of the magnetic medium used in the drive, different field strengths may be necessary to fully degauss a drive. If an insufficient field strength is used, some remanent magnetization may be left on the disks, which can be observed using MFM, for example. One important difference between degaussing and overwriting is that the drive is rendered unusable after degaussing, since all the servo regions are also erased. In fact, if the fields used in degaussing are strong enough, the permanent magnets in the drive's motors might be demagnetized, clearly destroying the drive. The most effective sanitization method is of course physical destruction, but degaussing comes close, and it is often performed before additional physical destruction for drives containing highly confidential information.
Forensic recovery
The first step in the forensic recovery process is scanning probe microscopy. Modern microscopy has three main branches: optical (light), electron, and scanning probe. Magnetic force microscopy (MFM) is one example of scanning probe microscopy, in which a probe that is sensitive to magnetic fields is used. Scanning probe microscopy uses a physical probe that interacts with the surface of the sample as it is scanned, and it is this interaction that is measured while moving the probe in a raster scan. This results in a two-dimensional grid of data, which can be visualized on a computer as a gray-scale or false-color image. The choice of probe determines which features of the sample the probe interacts with, for example magnetic forces in MFM. The characteristics of the probe also determine the resolution; specifically, the size of the apex of the probe is approximately the resolution limit. Hence, for atomic-scale resolution, the probe tip must terminate in a single atom! Another important requirement is being able to move the probe precisely and accurately on the scale of nanometers. Piezoelectric actuators are typically employed, which respond very precisely to changes in voltage and are thus used to move the probe across the surface in a tightly controlled manner.
The final step in this forensic process is digital imaging. Digital imaging is a ubiquitous technology these days: it is both the cheapest way of capturing images and a source of high-quality images that can be readily manipulated in software to accomplish a wide array of tasks. In some cases, such as electron and scanning probe microscopy, the resulting images can only be represented digitally - there is no optical image to begin with that could be captured on film. The key motivation for digital imaging, however, is digital image processing. Once an image is represented digitally, it can be processed just like any other set of data on a computer. This permits an endless number of ways to alter images for a variety of purposes. Some of the basic types of image processing tasks include enhancement, where the image is made more visually appealing or more usable for a particular purpose; segmentation, where an image is separated into component parts; and stitching, where a set of related images is combined into a composite. At a higher level, these basic tasks can be used as preprocessing steps for image classification, where the class of the object being imaged is determined, or for pattern recognition, whereby patterns in sets of images are sought that can be used to group subsets together. Fundamental to all of these is the representation of some aspect of the world as a set of numbers - how these numbers are interpreted determines what the image looks like, and also what meaning can be drawn from them.
The system to reconstruct data on a hard disk via MFM imaging is broadly characterized by three steps. The first is to actually acquire the MFM images on a portion of the disk of interest, and to collect them in a manner that readily admits stitching and other future processing. Next is to perform the necessary image processing steps (preprocessing, aligning, stitching, and segmenting) to compose all the images acquired into a single image and separate the different tracks. Finally, a readback signal is acquired from each track and this is then passed through a PRML channel to give an estimate of the raw data corresponding to the magnetization pattern written on the drive. If the user data is to be recovered, additional decoding and de-scrambling is required, and the specific ECC, RLL, and scrambling parameters of the disk drive must be acquired to perform this.
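As an illustration of the stitching step, here is a minimal sketch that registers two overlapping tiles by cross-correlation in the Fourier domain, on synthetic data; real MFM tiles additionally need the preprocessing and distortion corrections mentioned above, so this is a sketch under simplifying assumptions.

```python
# Minimal sketch: estimate the offset between two overlapping scan tiles via
# cross-correlation computed in the Fourier domain (synthetic data).
import numpy as np

rng = np.random.default_rng(3)
surface = rng.normal(size=(256, 600))      # stands in for a scanned magnetic surface map
tile_a = surface[:, 0:300]                 # two tiles that overlap by 80 columns
tile_b = surface[:, 220:520]

h, w = 256, 640                            # zero-pad both tiles onto a common canvas
A = np.fft.fft2(tile_a, s=(h, w))
B = np.fft.fft2(tile_b, s=(h, w))
corr = np.fft.ifft2(A * np.conj(B))        # cross-correlation peaks at the relative shift
dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
print("estimated offset of tile_b inside tile_a:", (dy, dx))   # expect (0, 220)
```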
This whole process sounds fairly complicated, but it has been demonstrated to work, and in some extreme cases it has been used to recover sensitive data from hard drives. So dispose of your hard drives accordingly, using a readily available data sanitization technique.
The plan for Multics was presented to the 1965 Fall Joint Computer Conference in a series of six papers. It was a joint project of M.I.T., General Electric, and Bell Labs. Bell Labs dropped out in 1969, and in 1970 GE's computer business, including Multics, was taken over by Honeywell (now Bull).
[separator]
MIT's Multics research began in 1964, led by Professor Fernando J. Corbató at MIT Project MAC, which later became the MIT Laboratory for Computer Science (LCS) and then Computer Science And Artificial Intelligence Laboratory (CSAIL).
Starting in 1969, Multics was provided as a campus-wide information service by the MIT Information Processing Services organization, serving thousands of academic and administrative users.
It was conceived as a general-purpose time-sharing utility and was a commercial product for GE, which sold time-sharing services; it became a GE and then a Honeywell product. Only about 85 sites ran Multics. However, it had a powerful impact on the computer field, due to its many novel and valuable ideas.
Since it was designed to be a utility, like electricity and telephone services, a number of Multics' features followed from that goal, including the modular structure of the hardware (with multiple CPUs and main memory banks, fully interconnected, and with the ability to take individual units out of service for maintenance, or simply to add units as demand increased over time), extremely robust security (so that individual users in a facility open to all comers would be protected from each other), etc.
In addition to the modular hardware and robust security, Multics had a number of other major technical features, some commonplace now (and some still not too common – alas!), but major advances when it was first designed, in 1967. They include:
• A single-level store
• Dynamic linking for libraries, etc
• A command processor implemented entirely in user code
• A hierarchical file system
• Separate access control lists for each 'file'
The single-level store architecture of Multics was particularly significant: it discarded the clear distinction between files (called segments in Multics) and process memory. The memory of a process consisted solely of segments that were mapped into its address space. To read or write on them, the process simply used normal instructions; the operating system took care of making sure that all the modifications were saved to secondary storage (disk).
In modern UNIX terminology, it was as if every file were 'mmap()'ed; however, in Multics there was no concept of process memory separate from the memory used to hold mapped-in files. All memory in the system was part of some segment, which appeared in the file system, and this included the temporary scratch memory of the process, such as its kernel stack.
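The analogy can be made concrete with Python's mmap module (the file name below is illustrative): the file's bytes appear as ordinary memory, reads and writes use normal indexing, and the OS persists the changes - roughly what Multics did, by default, for every segment.

```python
# Minimal sketch of the mmap() analogy: a file mapped into memory and modified
# with ordinary indexing rather than explicit write() calls.
import mmap

path = "segment.dat"
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)                 # create a 4 KiB "segment"

with open(path, "r+b") as f:
    seg = mmap.mmap(f.fileno(), 0)          # map the whole file into the address space
    seg[0:5] = b"hello"                     # a plain memory write, no write() call
    seg.flush()                             # ask the OS to push the change to disk
    seg.close()

with open(path, "rb") as f:
    print(f.read(5))                        # b'hello'
```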
Multics also implemented virtual memory, which was very new at that time (only a handful of other systems implemented it at that point); but this was not a new idea with Multics.
The segmentation and paging in Multics are often discussed together, but it is important to realize that they were not fundamentally connected. One could theoretically have a single-level store system that did not page; paging was added for practical reasons.
Multics also popularized the now-common technique of having separate per-process stacks in the kernel; this was apparently first seen in the Burroughs B5000, but it was not well known.
This is an important kernel structuring advance since it greatly simplifies code. If a process discovers, somewhere deep inside a subroutine call stack that it needs to wait for an event, it can simply do so right there, instead of having to unwind its way out, and then return later when the waited-for event has happened.
The system was written almost entirely in a higher-level language (PL/I), which was quite rare at the time. The Burroughs B5000 had an OS written in ALGOL, but it was the only previous system to do so.
Multics ran only on special hardware, which provided hardware support for its single-level store architecture.
It initially ran on the GE 645, a modified version of the GE 635. After GE's computer business was taken over by Honeywell, a number of models in the Honeywell 6000 series were produced to run Multics.
Although Multics introduced many innovations, it also had many problems, and at the end of the 1960s Bell Labs, frustrated by the slow progress and difficulties, pulled out of the project. A young engineer at AT&T Bell Labs, Kenneth (Ken) Thompson, with the help of his colleagues Dennis Ritchie, Douglas McIlroy, and Joe Ossanna, then decided to experiment with some Multics concepts and redo them on a much smaller scale. Thus, in 1969, the idea of the now-ubiquitous Unix was born.
While Ken Thompson still had access to the Multics environment, he wrote simulations of the new file and paging system on it. Later, the group continued the work on blackboards and scribbled notes.
Also in 1969, Thompson developed a very attractive game, Space Travel, first written on Multics, then transliterated into Fortran for GECOS, and finally ported to a little-used PDP-7 at Bell Labs - the same PDP-7 he then decided to use for the implementation of the first Unix. On this PDP-7, and using its assembly language, the team of researchers led by Thompson and Ritchie (initially without financial support from Bell Labs) developed a hierarchical file system, the concepts of computer processes and device files, a command-line interpreter, and some small utility programs.
The name Unics was coined in 1970 by team member Brian Kernighan as a pun on the Multics name. Unics (Uniplexed Information and Computing System) could eventually support multiple simultaneous users, and the name was later shortened to Unix.
Structurally, the file system of PDP-7 Unix was nearly identical to today's; for example, it already had the following (a small sketch of these structures follows the list):
• An i-list: a linear array of i-nodes each describing a file. An i-node contained less than it does now, but the essential information was the same: the protection mode of the file, its type and size, and the list of physical blocks holding the contents.
• Directories: a special kind of file containing a sequence of names and the associated i-number.
• Special files describing devices. The device specification was not contained explicitly in the i-node, but was instead encoded in the number: specific i-numbers corresponded to specific files.
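A toy model of these structures, just to make the relationships explicit; the field names are illustrative, not the historical on-disk layout.

```python
# Toy model of the early-Unix file system structures: an i-list of i-nodes and
# directories that simply map names to i-numbers.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Inode:
    mode: int                 # protection bits and file type (plain, directory, special)
    size: int                 # file size in bytes
    blocks: List[int]         # physical block numbers holding the contents

@dataclass
class Directory:
    entries: Dict[str, int] = field(default_factory=dict)   # file name -> i-number

ilist: List[Inode] = []       # the i-list: the i-number is simply the index

def create_file(directory: Directory, name: str, mode: int) -> int:
    ilist.append(Inode(mode=mode, size=0, blocks=[]))
    inumber = len(ilist) - 1
    directory.entries[name] = inumber        # a directory is just names paired with i-numbers
    return inumber

root = Directory()
create_file(root, "readme", mode=0o644)
print(root.entries, ilist[root.entries["readme"]])
```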
In 1970, Thompson and Ritchie wanted to run Unix on a much larger machine than the PDP-7, and traded the promise of adding text processing capabilities to Unix for some financial support from Bell, porting the code to a PDP-11/20 machine. Thus, for the first time, in 1970, the Unix operating system was officially named and ran on the PDP-11/20. It gained a text formatting program called roff and a text editor; all three were written in PDP-11/20 assembly language. Bell Labs used this initial "text processing system", made up of Unix, roff, and the editor, for the text processing of patent applications. Roff soon evolved into troff, the first electronic publishing program with full typesetting capability.
In 1972, Unix was rewritten in the C programming language, contrary to the general notion at the time "that something as complex as an operating system, which must deal with time-critical events, had to be written exclusively in assembly language" (Unix was not the first OS written in a high-level language, though - that was the Burroughs B5000's OS, from 1961). The C language was created by Ritchie as an improved version of the B language, which Thompson had created as a reworking of Martin Richards' BCPL. The migration from assembly language to the higher-level language C resulted in much more portable software, requiring only a relatively small amount of machine-dependent code to be replaced when porting Unix to other computing platforms.
AT&T made Unix available to universities and commercial firms, as well as the United States government, under licenses. The licenses included all the source code, including the machine-dependent parts of the kernel, which were written in PDP-11 assembly code. Copies of the annotated Unix kernel sources circulated widely in the late 1970s in the form of a much-copied book, which led to considerable use of Unix as an educational example. At some point, ARPA (the Advanced Research Projects Agency) adopted Unix as a standard operating system for the Arpanet (the predecessor of the Internet) community.
During the late 1970s and early 1980s, the influence of Unix in academic circles led to its large-scale adoption (particularly of the BSD variant, originating from the University of California, Berkeley) by many commercial vendors, giving rise, for example, to Solaris, HP-UX, and AIX. Today, in addition to certified Unix systems such as those already mentioned, Unix-like operating systems such as Linux and the BSD descendants (FreeBSD, NetBSD, and OpenBSD) are commonly encountered.
Stuxnet is an extremely sophisticated computer worm that exploits multiple previously unknown Windows zero-day vulnerabilities to infect computers and spread. Its purpose was not just to infect PCs but to cause real-world physical effects. Specifically, it targets centrifuges used to produce the enriched uranium that powers nuclear weapons and reactors.
[separator]
Stuxnet was first identified by the infosec community in 2010, but development on it probably began in 2005. Despite its unparalleled ability to spread and its widespread infection rate, Stuxnet does little or no harm to computers not involved in uranium enrichment. When it infects a computer, it checks to see if that computer is connected to specific models of programmable logic controllers (PLCs) manufactured by Siemens. PLCs are how computers interact with and control industrial machinery like uranium centrifuges.
Deconstructed, a PLC is simply the control element of a control system. If you are building a motion-activated lighting system, you need three parts: a sensor, a controller, and an actuator. In such a lighting system, the sensor would be a thermal sensor that detects human presence or movement; the controller would be a circuit, or something more complex, in which the logic of the system is built; and the actuator would be the lights. The end result is the controller sensing the presence of a human through the sensor and flipping the switch to turn on the lights. This is a very simple control system, and it gives you the ability to reprogram the controller without changing the circuitry or the electrical system attached to it.
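The same three-part loop can be written down in a few lines; the sketch below uses stand-in functions instead of real sensor and relay I/O, so everything hardware-related is an assumption for illustration.

```python
# Minimal sketch of the sensor -> controller -> actuator loop, with stand-in
# functions instead of real hardware I/O.
import random
import time

def read_motion_sensor() -> bool:
    """Sensor: stands in for a thermal/PIR motion detector."""
    return random.random() < 0.3

def set_lights(on: bool) -> None:
    """Actuator: stands in for the relay driving the lights."""
    print("lights", "ON" if on else "OFF")

def controller_scan_loop(hold_seconds: float = 5.0, scans: int = 20) -> None:
    """Controller: keep the lights on while motion was seen within the hold time."""
    last_motion = None
    for _ in range(scans):                 # a real PLC scans its program forever
        now = time.monotonic()
        if read_motion_sensor():
            last_motion = now
        set_lights(last_motion is not None and now - last_motion < hold_seconds)
        time.sleep(0.1)

controller_scan_loop()
```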
Modern PLCs are programmed using the proprietary OEM software that comes along with the system. This software incorporates graphical programming interfaces such as ladder programming that enable automation engineers with limited programming knowledge to program the PLCs that will automate the connected hardware. In a factory setting, combinations of PLCs are connected using SCADA (Supervisory Control and Data Acquisition) systems that are also programmed using OEM software provided by the system manufacturers creating an ecosystem of Operational Technology software.
The biggest jaw-drop comes when we analyze the security of this software, which is developed for engineers by OT software developers. Some have branded it as insecure by design, especially when looking at its access privileges and protocol vulnerabilities. The significant number of vulnerabilities reported against leading OEM software vendors calls into question the very competence of the hardware giants to develop secure OT software. It is these vulnerabilities, combined with OS vulnerabilities, that Stuxnet exploited to cause massive damage to selected critical infrastructure.
It's now widely accepted that Stuxnet was created by the intelligence agencies of the United States and Israel. The classified program to develop the worm was given the code name "Operation Olympic Games"; it was begun under President George W. Bush and continued under President Obama. While neither government has ever officially acknowledged developing Stuxnet, a 2011 video created to celebrate the retirement of Israeli Defense Forces head Gabi Ashkenazi listed Stuxnet as one of the successes under his watch.
While the individual engineers behind Stuxnet haven't been identified, we know that they were very skilled, and that there were a lot of them. Kaspersky Lab's Roel Schouwenberg estimated that it took a team of ten coders two to three years to create the worm in its final form.
The U.S. and Israeli governments intended Stuxnet as a tool to derail, or at least delay, the Iranian program to develop nuclear weapons. The Bush and Obama administrations believed that if Iran was on the verge of developing atomic weapons, Israel would launch airstrikes against Iranian nuclear facilities in a move that could have set off a regional war. Operation Olympic Games was seen as a nonviolent alternative. Although it wasn't clear that such a cyberattack on physical infrastructure was even possible, there was a dramatic meeting in the White House Situation Room late in the Bush presidency during which pieces of a destroyed test centrifuge were spread out on a conference table. It was at that point that the U.S. gave the go-ahead to unleash the malware.
Stuxnet was developed as malware that only attacked SCADA systems made by Siemens, the German industrial equipment giant. It was designed to exploit zero-day vulnerabilities in the Microsoft Windows operating system and in Siemens' software, SIMATIC STEP 7 and SIMATIC WinCC; in the case of Windows, the creators of the worm exploited four zero-day vulnerabilities to spread. The main objective of Stuxnet was to manipulate the speed of the Iranian nuclear centrifuges at Natanz until they destroyed themselves, thus damaging the nuclear infrastructure.
It is important to note that most operational technology systems of modern critical infrastructure are built with direct cyber attacks in mind, and are therefore air gapped in most cases. This means that the local networks of SCADA systems are not connected to unsecured networks such as the Internet, which makes a direct remote cyber attack impossible without the involvement of a physical agent, thus reducing the vulnerability of the system. This was taken into account by the developers of Stuxnet.
Stuxnet mainly had three components that worked in sync: a worm to deliver the payload, a link file to replicate the worm, and a rootkit to hide all the malicious code. The malware famously exploited the Windows shortcut vulnerability, through which it spread to removable devices such as flash drives.
The sophistication in Stuxnet’s design makes it interesting to study how it affected Natanz nuclear centrifuges. A rough idea of what happened is as follows:
1. Stuxnet spreads to millions of devices through the Internet, infecting computers and copying itself to removable devices such as USB flash drives.
2. The Stuxnet malware infects the computer of a maintenance engineer through a USB flash drive. Since an air gap blocks direct cyber attacks from external networks against the internal network of the Natanz facility, this was the only way such an infection was possible.
3. The malware executes on the local host computer without any visible indication and replicates rapidly within the local network, exploiting a Windows network vulnerability.
4. The malware finds the control computer running Siemens software and infects its configuration files. There are varying reports of this software being SIMATIC STEP 7 - the Siemens PLC software - or SIMATIC WinCC - the Siemens SCADA software. The infection results in malicious lines of code being executed by the system.
5. The code changes the programming to increase the rotation speed of the Natanz centrifuges, thus controlling the hardware. These lines of code are said to have been executed only once every 27 days to remain undetectable.
6. The code changes the output of the system to hide the increased speeds. For example, if the speed is increased from 10,000rpm to 15,000rpm over a period of 3 months, the output from the SCADA system would still display 10,000rpm as the current speed. This increases the damage to the infrastructure by delaying the date of discovery.
The complexity of Stuxnet led to it being named the world's first digital weapon.
Despite how well Stuxnet was designed, its payload is simply a logic bomb: malware that executes only when a condition is met - in this case, finding a control computer attached to a Siemens S7-400 PLC and running the SIMATIC WinCC and SIMATIC STEP 7 software. This was the configuration at the Natanz nuclear centrifuges, but not only there. Stuxnet was never intended to spread beyond the Iranian nuclear facility at Natanz; however, the malware did end up on Internet-connected computers and began to spread in the wild due to its extremely sophisticated and aggressive nature, though, as noted, it did little damage to the outside computers it infected. Many in the U.S. believed the spread was the result of code modifications made by the Israelis.
The malware ultimately reached 115 countries, affecting thousands of pieces of industrial equipment running machines with that configuration.
Symantec, the first to unravel Stuxnet, said that it was "by far, the most complex piece of code that we've looked at - in a completely different league from anything we'd ever seen before". And while you can find lots of websites that claim to have the Stuxnet code available for download, you shouldn't believe them: the original source code for the worm, as written by coders working for U.S. and Israeli intelligence, hasn't been released or leaked and can't be extracted from the binaries that are loose in the wild. (The code for one driver, a very small part of the overall package, has been reconstructed via reverse engineering, but that's not the same as having the original code.)
Since then several other worms with infection capabilities similar to Stuxnet, including those dubbed Duqu and Flame, have been identified in the wild, although their purposes are quite different than Stuxnet's. Their similarity to Stuxnet leads experts to believe that they are products of the same development shop, which is apparently still active.
Since its inception in 2004, Ubuntu has been built on a foundation of enterprise-grade, industry leading security practices. From the toolchain to the software suite and from the update process to the industry standard certifications, Canonical never stopped working to keep Ubuntu at the forefront of safety and reliability.
[separator]
In 2014, the UK government security arm CESG published a report of its assessment of the security of 'End User Device' operating systems.
Its assessment compared 11 desktop and mobile operating systems across 12 categories including: VPN, disk encryption and authentication. These criteria are roughly equivalent to a standard set of enterprise security best practices, and Ubuntu 12.04 LTS came out on top – the only operating system that passed nine requirements without any “Significant Risks”.
The security assessment included the following categories:
• VPN
• Disk Encryption
• Authentication
• Secure Boot
• Platform Integrity and Application Sandboxing
• Application Whitelisting
• Malicious Code Detection and Prevention
• Security Policy Enforcement
• External Interface Protection
• Device Update Policy
• Event Collection for Enterprise Analysis
• Incident Response
At that time no operating system met all of those requirements. Ubuntu, however, scored the highest in a direct comparison.
Only 3 sections from the security assessment had comments: VPN, Disk Encryption and Secure Boot.
VPN
The comments made by CESG were that “The built-in VPN has not been independently assured to Foundation Grade.” This means that the software does meet all the technical requirements of security to pass the assessment, but that the software itself has not been independently assessed to make sure that it hasn’t been tampered with during the development process.
Disk Encryption
Disk encryption is a similar case to the VPN assessment. For Ubuntu 12.04, CESG states:
“LUKS and dm-crypt have not been independently assured to Foundation Grade.”
LUKS and dm-crypt are used on Ubuntu to encrypt the data on the hard disk and to decrypt the data when starting up, by requesting a password from the user. Without the password, the computer cannot start the operating system or access any of the data.
Secure Boot
Secure boot is a Microsoft technology invented in cooperation with OEMs to ensure that software cannot be tampered with after the hardware has been shipped from the factory. It has provoked much debate in security circles, as the ability to install any software which you can control is desirable from a security perspective. The German government recently criticised secure boot as preventing installation of specialised secure operating systems after sale of hardware.
Ubuntu's response, from Ubuntu 12.10 onwards, was to adopt Grub2 as the default bootloader, with support for Secure Boot but with the ability to turn Secure Boot off in order to modify the OS, if required.
Since then Ubuntu has followed a steady release schedule, with each new version introducing new security features and improving on the existing ones.
In 2020 Canonical delivered Ubuntu 20.04, which makes available a wide range of cybersecurity capabilities, including the open-source virtual private network (VPN) tunnel WireGuard, which provides better performance than the IPsec and OpenVPN tunneling protocols because it runs in the Linux kernel.
Ubuntu 20.04 Long Term Support (LTS) also adds Kernel Self Protection measures, assures control flow integrity and includes stack-clash protection, a Secure Boot utility, the ability to isolate and confine applications built using Snap containers, and support for Fast ID Online (FIDO) multi-factor authentication that eliminates the need for passwords.
This release also adds native support for AMD Secure Encrypted Virtualization with accelerated memory encryption.
These advances will help make IT environments more secure by adding capabilities into the base operating system that are readily accessible. Naturally, as more applications start taking advantage of the security capabilities embedded in Ubuntu 20.04 LTS, the overall state of DevSecOps should improve. In general, DevSecOps is a powerful idea that is still in its infancy and as more security capabilities are embedded into the operating system, the easier it will become for organizations to incorporate cybersecurity functions into the application development and deployment process.
The two primary benefits of embedding more security capabilities into the operating system are, of course, reduced costs and increased performance. The closer security functions run to the kernel, the less overhead that gets generated, which makes more processing power available to applications.
The move to embed more security capabilities into the base Ubuntu operating system also comes at a time when IT organizations are under increased pressure to reduce costs in the wake of the economic downturn brought on by the COVID-19 pandemic.
Less clear right now is the degree to which organizations are choosing to standardize on an operating system because of the degree of cybersecurity enabled. However, with developers exercising more influence over the entire IT stack these days, many of them are acutely aware of any performance trade-offs that historically have been made to ensure application security. As such, many developers have a vested interest in cybersecurity functions that can be programmatically invoked at the kernel level.
Of course, cybersecurity teams are not always aware of what security functions are embedded at the operating system level. That may change, however, as more organizations embrace DevSecOps, which shifts much of the responsibility for security onto the shoulders of developers. That so-called shift to the left gives developers more incentive to address a wide range of cybersecurity issues much earlier in the application development process.
Longer-term, it remains to be seen how the relationship between cybersecurity teams and developers will evolve. As more cybersecurity capabilities are embedded into operating systems and the IT infrastructure they are deployed on, the overall IT environment will, in time, become much more secure than it is today.
There may never be such a thing as perfect security. However, many of the low-level security issues that routinely plague IT may soon no longer be as big a problem as they are today.
Everybody in the cyber community should know what an APT is: an Advanced Persistent Threat.
[separator]
It is a threat. It is advanced. And it is persistent. All threats are supposed to be persistent. So what makes an APT so special?
First, an APT is actually a stealthy threat actor, typically a nation state or state-sponsored group, which gains unauthorized access to a network and remains undetected for a long period of time. Recently, the term may also refer to non-state sponsored groups conducting large-scale targeted intrusions for specific goals, not necessarily government oriented.
It is advanced because its operators and creators have a plethora of ideas and concepts in their arsenal. They also have at their disposal a myriad of intelligence-gathering capabilities.
It is persistent because it targets specific intelligence.
It is a threat because the elements involved are organized, motivated and most importantly skilled.
The discussion in this article revolves around the tools that APTs use.
Naturally, as a first entry point in our brief analysis one might state that zero days are used.
However, this is not always the case, as I can confirm from an offensive security standpoint.
There might be zero days available which cannot be utilized to achieve the goal of the mission.
The question is:
What is there to be done?
First of all, let’s define the steps usually taken in an APT operation:
• Initial compromise – performed through SE (social engineering) and SP (spear phishing)
• Establish foothold – plant a foothold in the victim's network (a RAT – remote administration tool), and create network backdoors and tunnels allowing stealthy access to its infrastructure
• Escalate privileges – use whatever means necessary to become root or Domain Admin
• Internal reconnaissance – gather as much information on the infrastructure as possible, mostly OT (operational technology), to the point where its workflows can be mimicked effortlessly
• Move laterally – once a sound knowledge of the environment is obtained, compromise everything that could offer further information
• Maintain presence – ensure continued control over the access channels and credentials acquired in previous steps
• Complete mission – exfiltrate the stolen data from the victim's network
All these steps are extremely important and one cannot make the statement that one step is of more importance than another.
Today, we shall focus on what is probably an exception (to an extent) to the above statement: the INITIAL COMPROMISE.
Let’s suppose our entry point is via a network user with the role of head of compliance in a company.
We shall not delve at this point into the social engineering details which make the operation successful, but rather into the technical aspects concerning his or her workstation.
What EDR (endpoint detection and response) is in use?
What telemetry is collected and taken where?
First, we would have to build a tool that, for all practical purposes, looks legitimate to an AV (anti-virus) and simultaneously collects telemetry about which AV is used and where the data is sent.
How is such a tool built? By using ingenious methods in which practically all the elements involved are native OS mechanisms. Living off the land is always the preferred approach. If the advanced operator can also add a behavioural dimension to the initial data-gathering operation, then BINGO!
Now what? We have the data collected in stage one stored somewhere. We know they use antivirus X. What is there to be done?
Create a ZERODAY against X. How fast can that be done?
Once, Abraham Lincoln was asked:
“How long need a man’s legs be?”
He answered:
“Long enough to reach the ground.”
Once X is known, it is a question of at most 50 hours before an exploit is created and tested in all the appropriate environments.
If X ever finds out about it (post-operation), then it won't be a zero day anymore.
If X never finds out about it then it stays a zeroday, though these zerodays being so common they should be classified into a zeroday category of their own.
This describes how the Initial Compromise is performed. However, the game has just started.
Stay tuned for the other parts to follow.
For an unidentified group, the hacker collective known as Anonymous has made the news quite a few times since its inception, for both good and ill. Some say they might just be the most powerful non-government hacking group in the world. They are also widely considered to be the most famous one. So, exactly how did Anonymous start, where do they come from, and what are they trying to do?
[separator]
The group, which is composed of a loosely organized international network of hacktivists, has its roots in the online image-based bulletin board 4chan, which was publicly launched in October 2003. The site was inspired by 2channel, a massive Internet forum with seemingly random content that is especially popular in Japan. 2channel was launched in 1999. It has over 600 boards covering wide-ranging subject matter, such as cooking, social news, and computers. Visitors to 2channel usually post anonymously, and most of the content on the site is in Japanese. In the spirit of 2channel, 4chan allows people to post anonymously as well. Unlike 2channel, the vast majority of 4chan is in English. Any poster who doesn’t put text in the name field automatically gets credited as "Anonymous".
The majority of the forums on 4chan are based on Japanese pop culture, but its most popular forum is /b/. /b/ has a fascinating culture unto itself. A lot of the user-created graphical memes you may see circulating around the Internet, like LOLcats, "All your base are belong to us", and Pedobear, originated in the /b/ forum. As it is an image board, its content is mostly made up of user-generated graphics. Usually, they're intended to amuse, offend or do both at the same time. The majority of postings have no named author ("Anonymous"), and the "Anonymous" name was inspired by the perceived anonymity under which users posted on 4chan.
The group's two symbols - the Guy Fawkes mask that they wear in public and the "man without a head" image - both underscore the group's inscrutability and lack of any formal leadership. Members of the group call themselves "hacktivists", a word coined from the combination of hacker and activist. When people have technical skills, have access to the Internet and understand how network infrastructure and servers work, it can be tempting to put that knowledge toward having some effect on the world. The "activist" part of "hacktivist" means that they don't do their hacking and cracking without a cause. The various people behind Anonymous worldwide are united in a belief that corporations and organizations they consider to be corrupt should be attacked.
Not all of Anonymous’ activities involve attacking networks or websites. Anonymous has also been active in initiating public protests. But the web and IRC channels are the lifeblood of the group. If it weren’t for the Internet, Anonymous would’ve never existed.
The hacker collective's first cause to make headlines was a 2008 effort called "Project Chanology". In January 2008, a video from the Church of Scientology was leaked onto YouTube. It was a propaganda video featuring Tom Cruise laughing hysterically. As the clip is arguably unflattering to Scientology, the cult tried to get YouTube to remove the video due to "copyright infringement". In response, a video credited to Anonymous and titled "Message to Scientology" was posted on YouTube. Thus began Project Chanology.
A press release was written explaining the intentions behind Anonymous’ Project Chanology. The release covers why Scientology is a dangerous organization and how the cult’s attempt to have the Tom Cruise video removed from YouTube violated the freedom of speech.
Scientology has a reputation for financially exploiting its members, engaging in threatening blackmail against people who try to leave the cult and various other abuses. “Call to Action”, also credited to Anonymous, was posted on YouTube calling for protests outside of Church of Scientology centers around the world. At some point in January, a DDoS attack was also launched on the cult’s website.
During the various Anonymous protests against Scientology that year, many protestors wore Guy Fawkes masks, in the spirit of the popular film “V for Vendetta”, and also to protect their identities from the cult, which is known for attacking dissenters that Scientology calls “Suppressive Persons”.
Between marches outside of Scientology churches and the videos the group posted, they managed to establish their power and resolve in this first project.
In February 2010, the Australian government was in the process of passing legislation that would make certain online content illegal. In response, Anonymous engaged in Operation Titstorm using DDoS attacks to bring down various Australian government websites.
In June 2009, President Mahmoud Ahmadinejad was re-elected in Iran, which triggered protests across the country. In response, Anonymous Iran was formed, an online project between Anonymous and The Pirate Bay, a popular but persecuted torrent search engine site. Anonymous Iran offered Iranians a forum to the world which was kept safe amidst the Iranian government's crackdowns on online news about the riots. Project Skynet was launched by Anonymous the same month to fight Internet censorship worldwide.
Operation Didgeridie started in September 2009. The Australian government had plans to censor the Internet at the ISP level. Anonymous initiated a DDoS attack on Prime Minister Kevin Rudd's website and brought it down for about an hour.
Operation Payback also commenced in September 2010. The MPAA (Motion Picture Association of America) and the RIAA (Recording Industry Association of America) hired the Indian software firm AIPLEX to launch DDoS attacks on The Pirate Bay and other websites related to file sharing. Anonymous executed DDoS attacks of their own, targeting websites linked to all three organizations: the MPAA, the RIAA and AIPLEX.
Operation Payback continued in December, but this time the targets were Mastercard, Visa, PayPal, Bank of America and Amazon. Those corporations were targeted for blocking donations to WikiLeaks.org, a website where whistleblowers post insider information about corrupt government activities around the world.
In December 2010, it was reported that Grace Mugabe, the wife of Zimbabwean dictator Robert Mugabe, had profited from illegal diamond mining. The information was revealed via a cable leaked to WikiLeaks. Anonymous brought down Zimbabwean websites with DDoS attacks in response to Zimbabwean government corruption.
Starting in January 2011, websites for the Tunisian Stock Exchange and the Tunisian Ministry of Industry were brought down by more Anonymous DDoS attacks. It was a reaction to Tunisian government censorship. The Tunisian government had tried to restrict the Internet access of its citizens and had arrested many bloggers and cyberactivists who criticized the government.
Also in January 2011, the Egyptian government became the next target. Efforts started with the intention of removing Egyptian President Hosni Mubarak from office. Once the government blocked citizens' access to Twitter, Anonymous brought down Egyptian government websites with DDoS attacks.
In February 2011, Aaron Barr of the security firm HBGary Federal claimed to have infiltrated Anonymous and said he would release information at a press conference. HBGary's website was powered by a CMS (content management system) that had several security loopholes. Because of those loopholes, Anonymous was able to access the site's databases via SQL injection. Usernames, e-mail addresses and password hashes were retrieved. The MD5 hashes were cracked with rainbow tables, so eventually the entire database became accessible.
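Rainbow tables work because a fast, unsalted hash such as MD5 always maps the same input to the same digest, so digests for likely passwords can be computed once and reused against any leaked hash. A minimal Python sketch of that underlying idea; the wordlist and the "leaked" hash are made up purely for illustration:

```python
import hashlib

# Because unsalted MD5 always maps the same input to the same digest, digests for
# likely passwords can be computed once and reused against any leaked hash.
wordlist = ["password", "letmein", "qwerty", "123456"]        # made-up candidates
table = {hashlib.md5(w.encode()).hexdigest(): w for w in wordlist}

leaked_hash = hashlib.md5("letmein".encode()).hexdigest()     # stands in for a stolen hash
print(table.get(leaked_hash, "not in table"))                 # -> letmein
```

Salting each password with a random value before hashing defeats this kind of precomputation, which is why modern password storage schemes require it.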
In April 2011, Sony became the next Anonymous target. Sony's PlayStation Network banned user GeoHot for jailbreaking and modifying his PS3 console. GeoHot attracted Sony's attention by posting information about how to mod PS3s to the Internet. Throughout April, the PlayStation Network and various Sony websites were brought down via organized DDoS attacks. This was Anonymous' way of coming to GeoHot's defense. It took a number of weeks until the PlayStation Network was operating normally.
In mid-July 2011, people from Adbusters, the anti-consumerism magazine, started discussing what could be done in response to corporate corruption on Wall Street. The "Occupy Wall Street" movement was planned from there, with mass protests on Wall Street starting in September. In August 2011, Anonymous expressed its support with a video posted on YouTube, rallying many thousands of people to get involved in the protest. The ubiquitous, now Anonymous-related Guy Fawkes masks can often be seen on protestors.
These are just a few prominent examples from their early years of “hacktivity” but, since then, the hacker collective has been involved in everything from “Occupy Wall Street” to the recent violent protests in Minneapolis over the death of George Floyd.
While Anonymous initially was lambasted in the media for cyberattacks on the government and businesses, the group’s reputation has shifted recently. There are reports that the group is now even being praised for its work, particularly its mission to combat cyber jihadists. Some even went so far as to call the collective “the digilantes” for their efforts to retaliate against acts of injustice.
“Hacktivism” is now a major phenomenon, and Anonymous is far from the only “hacktivist” group. Networks, servers and databases that may become targets must be audited for security. Harden networks against DDoS attacks, use virtualization and proxy servers when possible, and ensure that passwords and hashes are difficult to crack. Special care must be taken with servers that contain encryption keys.
In the meantime, whoever they are, wherever they are, with their philosophy of activism, hopefully Anonymous continues to use their powers for good, rather than evil.
Cybersecurity is a major issue for every business with any kind of internet presence, and that's pretty much every single one. Cybersecurity can affect everything from compliance and data safety to staffing budgets and much more.
[separator]
Today, cybersecurity is top of mind for just about everyone. But when the internet’s first draft appeared a half-century ago, security wasn’t in the outline. The technical focus was how to make this new packet-based networking scheme work. Security did not occur to the close-knit crew of academic researchers who trusted each other; it was impossible at the time for anyone else to access the fledgling network.
With today’s pervasive use of the internet, a modern surge in cyberattacks and the benefit of hindsight, it’s easy to see how ignoring security was a massive flaw.
Looking back at security events, the relatively short history of cybersecurity reveals important milestones and lessons on where the industry is heading.
1971: The first computer virus is discovered
You might assume that a real computer virus had to exist before anyone could describe the concept, but in a certain sense it was the other way around. It was mathematician John von Neumann who first conceptualized the idea in a paper released in 1949, in which he suggested the concept of a self-replicating automatic entity working within a computer.
It wasn't until 1971 that the world would see a real computer virus. DEC PDP-10 computers running the TENEX operating system started displaying the message "I'm the creeper, catch me if you can!". At the time, users had no idea who or what it could be. Creeper, created at Bolt, Beranek and Newman (BBN), was a worm, a type of computer virus that replicates itself and spreads to other systems. While it was designed only to see if the concept was possible, it laid the groundwork for the viruses to come.
A man named Ray Tomlinson (the same guy who invented email) saw this idea and liked it. He tinkered with the program and made it self-replicating - the first computer worm. Then he wrote another program - Reaper, the first antivirus software which would chase Creeper and delete it.
1983: The first patent for cybersecurity in the US
As computers and systems became more advanced, it was not long until technology experts around the world were looking for ways to patent aspects of computer systems. And it was in 1983 that the first patent related to cybersecurity was granted.
In September of that year, the Massachusetts Institute of Technology (MIT) was granted U.S. patent 4,405,829 for a cryptographic communications system. The patent introduced the RSA (Rivest-Shamir-Adleman) algorithm, which was one of the first public key cryptosystems. Interestingly, given that this was the very first patent, it is actually still quite relevant today, as cryptography forms a major part of cybersecurity strategies.
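To make the idea behind the patented cryptosystem concrete, here is a textbook-sized RSA round trip in Python (3.8+ for the modular inverse). The tiny primes are for readability only; this is a sketch of the math, not a usable implementation, and real deployments use keys of 2048 bits or more together with padding schemes:

```python
# Textbook-sized RSA for illustration only; real keys use 2048-bit primes and padding.
p, q = 61, 53                  # two small primes
n = p * q                      # modulus, shared by both keys (3233)
phi = (p - 1) * (q - 1)        # Euler's totient of n (3120)
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent: modular inverse of e (2753), Python 3.8+

message = 65                   # a message encoded as a number smaller than n
ciphertext = pow(message, e, n)          # encrypt with the public key (e, n)
assert pow(ciphertext, d, n) == message  # decrypt with the private key (d, n)
```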
1993: The first DEF CON conference runs
This conference is well-known as the major cybersecurity technical conference, and a fixture in the calendar of professionals, ethical hackers, technology journalists, IT experts, and many more.
The conference first ran in June 1993. It was organized by Jeff Moss and attended by around 100 people. However, it wouldn't stay that small for very long. Today, the conference is attended by over 20,000 cybersecurity professionals from around the world every year.
1995: SSL is created
There is a security protocol that we are often guilty of taking for granted. The Secure Sockets Layer (SSL) is an internet protocol that makes it safe and possible to do things that we think of as commonplace, such as buying items online securely.
After the first-ever web browser was released, the company Netscape began working on the SSL protocol. In February 1995, Netscape launched SSL 2.0, the protocol that would underpin secure use of the internet through the Hyper Text Transfer Protocol Secure (HTTPS). Today, when you see “HTTPS” in a website address, you know its communications with your browser are encrypted. This was perhaps the most important cybersecurity measure for many years.
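To see this protection at work, a short Python sketch using only the standard library can open a secure connection and report what was negotiated. The hostname is just an example, the snippet needs network access, and modern servers will negotiate TLS, SSL's successor:

```python
import socket, ssl

# Open a TLS connection (SSL's modern successor) and inspect what was negotiated.
hostname = "example.com"                        # any HTTPS site; an assumption for the demo
context = ssl.create_default_context()          # verifies the certificate chain and hostname
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())                    # e.g. 'TLSv1.3'; SSL 2.0 itself is long gone
        cert = tls.getpeercert()
        print(cert["subject"], cert["notAfter"])  # certificate owner and expiry date
```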
2003: Anonymous is created
Perhaps the most famous hacking group in the world, Anonymous made a name for themselves by committing cyberattacks against targets that were generally considered to be bad actors. The group has no specific leader and is in fact a collection of a large number of users, who may contribute in big or small ways. Together, they exist as an anarchic, digitized global brain.
The group came to prominence in 2003 and has carried out many successful hacking attempts against organizations such as the Church of Scientology. Anonymous hackers are characterized by their wearing of Guy Fawkes masks, and the group continues to be linked to numerous high-profile incidents. Its main cause is protecting citizens’ privacy.
2010: Hacking uncovered at a national level
Google surprised the world in 2010 when it disclosed a security breach of its infrastructure in China - an attack that came to be known as "Operation Aurora". Before 2010, it had been very unusual for organizations to announce data breaches.
Google's initial belief was that the attackers were attempting to gain access to the Gmail accounts of Chinese human rights activists. However, analysts discovered the true intent was identifying Chinese intelligence operatives in the U.S. who may have been on watch lists for American law enforcement agencies. The attacks also hit more than 50 companies in the internet, finance, technology, media and chemical sectors.
Today: Cybersecurity is more important than ever
It has never been more important for businesses to take cybersecurity seriously. It has the power now to affect just about everything from search engine optimization (SEO) to overall company budgets and spending needs.
Organizations must learn from the fast growth in the history of cybersecurity in order to make smart decisions for the future.
In recent years, massive breaches have hit name brands like Target, Anthem, Home Depot, Equifax, Yahoo, Marriott and more, compromising data for the companies and billions of consumers. In reaction, stringent regulations to protect citizen privacy like the EU General Data Protection Regulation (GDPR) and the new California Consumer Privacy Act are raising the bar for compliance. And cyberspace has become a digital battleground for nation-states and hacktivists. To keep up, the cybersecurity industry is constantly innovating and using advanced machine learning and AI-driven approaches, for example, to analyze network behavior and prevent adversaries from winning. It’s an exciting time for the market, and looking back only helps us predict where it’s going.
The list of high-tech tools in continuous use since the early 1950s isn't very long: the Fender Telecaster, the B-52 and Fortran.
[separator]
Fortran (which started life as FORTRAN, or FORmula TRANslator) was created at IBM by John Backus, whose team began work on it in 1953. By the time John F. Kennedy was inaugurated, FORTRAN III had been released and FORTRAN had the features with which it would become the predominant programming language for scientific and engineering applications. To a nontrivial extent, it still is.
Whereas COBOL was created to be a general-purpose language that worked well for creating applications for business and government purposes in which reports and human-readable output were key, FORTRAN was all about manipulating numbers and numeric data structures.
Its numeric capabilities meant that Fortran was the language of choice for the first generation of high-performance computers and remained the primary development tool for supercomputers: Platform-specific versions of the language power applications on supercomputers from Burroughs, Cray, IBM, and other vendors.
Of course, if the strength of Fortran was in the power of its mathematical processing, its weakness was actually getting data into and out of the program. Many Fortran programmers have horror stories to tell, most centering upon the "FORMAT" statement that serves as the basis of input and output.
While many scientific applications have begun to move to C++, Java, and other modern languages because of the wide availability of both function libraries and programming talent, Fortran remains an active part of the engineering and scientific software development world.
If you're looking for a programming language in use on everything from $25 computers that fit in the palm of your hand to the largest computers on earth you only have a couple of choices. If you want that programming language to be the same one your grandparents might have used when they were beginning their careers, then there's only one option. But that option is not necessarily the safest one.
Some professionals argue that legacy systems significantly increase security incidents in organizations. Others disagree and argue that legacy systems are “secure by antiquity”: because adequate documentation is lacking, it is very difficult and costly for potential attackers to discover and exploit security vulnerabilities in these systems.
New research is turning on its head the idea that legacy systems such as Cobol and Fortran are more secure because hackers are unfamiliar with the technology.
Current studies found that these outdated systems, which may not be encrypted or even documented, were more susceptible to threats.
By analyzing publicly available federal spending and security breach data, the researchers found that a 1% increase in the share of new IT development spending is associated with a 5% decrease in security breaches.
In other words, federal agencies that spend more on maintaining legacy systems experience more frequent security incidents, a result that contradicts the widespread notion that legacy systems are more secure. That’s because the integration of legacy systems makes the whole enterprise architecture too complex and too messy.
A significant share of public IT budgets is spent maintaining legacy systems, even though these systems often pose significant security risks: they are frequently unable to use current security best practices, such as data encryption and multi-factor authentication, which makes them particularly vulnerable to malicious cyber activity.
There is no simple solution for addressing these legacy systems, but one option could be moving them to the cloud. Migrating legacy systems to the cloud offers some security advantages over running them on premises, because cloud vendors have more resources and capabilities to build effective guardianship of valuable information than their clients do. Cloud vendors use common IT platforms to achieve economies of scale and scope in the production and delivery of IT services to a large number of client organizations.
Thanks to economies of scale and scope, it is more feasible for the vendors to use dedicated information security teams to protect the clients’ systems over the common IT platforms. By comparison, a client organization is unlikely to have adequate resources to afford even a fraction of the dedicated information security team of the vendors. In addition, the cloud vendors are better able to attract, motivate, promote, and retain the top security talent, which is necessary as the security threat landscape dynamically evolves. On the other hand, the legacy system environment of a client organization is unlikely to offer attractive and sustainable career paths for security professionals who look for opportunities to continuously develop and advance their professional skills and knowledge. In the legacy environments, IT professionals spend most of their careers in maintaining and operating specific legacy systems and have fewer opportunities to learn about emerging new technologies.
Migration of legacy systems to the cloud requires standardization of IT interfaces in the client organization, which can in turn make it easier for the cloud vendors to effectively guard information flows at the access and interaction points around the enterprise architectures. To be able to connect to the cloud and make use of its common standardized IT services and interfaces, a client organization needs to adhere to the standards mandated by the vendors. Thus, migrating legacy systems to the cloud often requires the standardization of the IT interfaces in the client’s enterprise architectures. The highly standardized interfaces with the client make it easier and less costly for the cloud vendor to apply common security governance and control mechanisms to guard the sensitive information exchanged through those interfaces.
Bank of America in the early 1950s decided to automate their rapidly expanding check handling business. Can you imagine a time when you could take a piece of paper of any practical size and color, hand write your bank name, the payee, the amount, and add your signature (maybe legible) and use that as your bank draft or your check?
[separator]
Eventually this "document" would arrive at your bank for someone to process it. Bank of America had set up a chain of banks in California, and their first problem was to determine, from your signature, to which branch the document should be forwarded. (It was impractical to send the account summaries of each branch to all other branches on a daily basis.)
In any case, the system was large, shaky, error prone, tardy and labor intensive. A number of bank employees figured there had to be a better way, and their ideas were effective and deemed worthy of further study.
The Bank of America was good at banking, had deep enough pockets, but did not claim automation expertise. They hired Stanford Research Institute of Menlo Park, CA to design a system for them. (Stanford Research Institute was "requested" by Stanford University to not use their name, so the name SRI was chosen and is still used).
Among other problems that SRI addressed was the fact that there was no effective machine (computer) method of reading documents (OCR is still not reliable enough for financial transactions). Ken Eldredge of SRI invented the MICR method of encoding and reading data from documents. This method prevailed over other competing methods and the American Banking Association finally adopted it.
At the same time, transistors became generally available for practical computer use, and SRI proposed a system using these new transistors instead of the vacuum tubes of the era. General Electric prevailed in suggesting general purpose computers instead of hard-wired special purpose logic, which it designed, built and programmed.
Machines for encoding documents with MICR, as well as machines for reading/sorting documents had to be developed. SRI made a suitable prototype that was promising enough for the Bank of America to want up to 36 commercial versions.
SRI did not want to get into the manufacturing business, so Bank of America requested major computer manufacturers to bid on making 30 banking systems for them based on SRI's ideas and prototype.
To everyone's surprise, the General Electric Computer Department (a department that was non-existent at General Electric at that time) won the $31,000,000 Bank of America ERMA contract. General Electric corporate headquarters didn't know of the bid and didn't know of this new "department". The same day the contract was signed, the bid team received a stern letter from G.E. president Ralph Cordiner stating that "under no circumstances will the General Electric Company go into the business machine business."
The General Electric Computer Department chose Phoenix as headquarters, had a manufacturing establishment built, refined the prototype, built and/or OEMed the system elements, delivered the first system and passed acceptance tests on December 31, 1958. Some "tightening up" of the equipment and operating procedures was necessary to reach the design goal of 55,000 accounts/day.
Bank of America "encouraged" its clients and others to choose preprinted checks using the new MICR along the bottom edge. By March 1959, the machines were processing 50,000 accounts/day, and on September 14, 1959, the Bank of America and General Electric presented 4 of the proposed 30 systems running in a transcontinental closed-circuit TV press conference. These 4 systems were capable of processing over 220,000 customer accounts in the Los Angeles area. The machines used the newly developed standard E13B magnetic ink font, which GE had designed to be more human-readable. This E13B font is used on the bottom line of your checks today.
This is the E13B font, the banking standard, first used on the ERMA.
The ink used for the MICR characters can be magnetized as part of the reading process to create machine-readable information. The alphas "A" to "D" mark the beginning of various fields, such as issuing bank number, customer number and dollar amount. Most fields are preprinted, but the dollar amount is printed after the customer writes it.
The ERMA system served the Bank of America well for 8 years (a long time for a commercial data processing system). Unfortunately, when the time for replacement came, General Electric was no longer providing banking-oriented systems or peripherals. The now obsolete GE-225 series had been popular with banks. The GE 4xx series was not suitable (could not respond to interrupts fast enough to handle the document handlers) and the GE 6xx series was too large and expensive for handling documents. General Electric took themselves right out of the banking business.
In any case, Bank of America went with IBM. Bank of America ordered the IBM 360/65 and took delivery in July 1966, with conversion scheduled to be completed in December 1966. But conversion was deferred as a result of IBM's continued delays in providing a multi-tasking operating system and severe tape drive problems. The situation had not improved by the spring of '67. A ray of hope came in late May '67 with the successful "start" of the demand deposit conversion, the bank's largest ERMA application. But the damage had been done - the delays had a direct economic impact on the bank's profit amounting to $1,471,000. The total impact was estimated to be on the order of millions of dollars, offset partially by IBM's contribution in the form of paying all equipment costs, providing professional help valued at $2,700,160, and by the fact that IBM maintained an account balance of more than $14,000,000 at the bank for this entire period.
During the conversion, IBM invested 66 man-years of field engineers' time and 10 man-years of tape specialists' time to make the tape system operable. After the conversion, IBM accepted the GE equipment as a trade-in, allowing credit for the remaining book value of the ERMAs. A first for IBM, the allowance was kept confidential to avoid starting a trend. The IBM contract to replace the ERMA systems had a delivery penalty: IBM was to pay the ERMA maintenance until their system was up and running.
Despite the initial high-cost and technological set-backs, MICR was so successful in its design that it was adopted as the industry standard by the American Banking Association (ABA) in 1956. Bank of America made MICR technology available to all banks and printers without royalty charges. In 1984, American Banker stated that “the development of the MICR line, which enabled checks to be sorted and processed at high speeds, has been recognized as one of the great breakthroughs in banking”.
Today, the MICR methodology remains the standard around the world.
Developing a solid backup plan requires an investment of time and money, but the cost is far less than the burdensome task of recreating data for which no backup exists.
[separator]
With rising malware attacks and the escalating cost of a data breach – pegged at an average of $3.92 million - cybersecurity has emerged as a top business priority. However, even with tightened security measures, breaches have increased by 67% over the past 5 years. As a result, the need to have a solid backup strategy in place has become more important than ever. To be truly protected, organizations must form a well-defined plan that can aid in the quick and seamless recovery of lost data and guarantee business continuity when all preventive measures fail.
A comprehensive backup strategy is an essential part of an organization’s cyber safety net. Ensuring critical organizational data is backed up and available for restore in the case of a data loss event can be considered an administrator’s prime concern. A backup strategy, along with a disaster recovery plan, constitute the all-encompassing business continuity plan which is the blueprint for an organization to withstand a cyberattack and recover with zero-to-minimal damage to the business, reputation, and data.
What are the typical threats?
Typical data-threatening situations are accidental deletions, hard disk failures, computer viruses, theft, fire and flood. Data storage equipment has become more reliable over time, but the annual hard drive failure rate is still around 4.2-4.8%. The risk of a fire accident is about 0.32% annually. Expressed in percentages, they do not seem like huge risks taken individually, but to estimate your total exposure you need to combine them.
As technological risks, like hardware failure, may be quite well-defined constants, other risks may vary quite a lot by different factors. For example, the risk of flooding in your house is quite serious if you are living at the seaside or on the banks of a bigger river. What people often forget is that there can also be smaller man-made "flooding", which may not be so dramatic but happen even more often. Some examples are accidents with water pipes, forgetting a laptop in the rain, spilling coffee all over the computer or dropping a laptop into a swimming pool. You might want to establish some common-sense rules for eliminating some of those risks, like not drinking coffee near your laptop, but some unforeseeable risks still remain.
If you combine all possible risks (and there are many of them), you may face as much as a 25% probability of losing some of your data during the next year.
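To make that combination concrete, here is a small Python sketch. The individual probabilities are illustrative assumptions, and the events are treated as independent; for small percentages the result is close to a plain sum, but combining complements keeps the math honest as the numbers grow:

```python
from math import prod

# Hypothetical annual probabilities for a single machine; the figures are assumptions.
risks = {"drive failure": 0.045, "fire": 0.0032, "theft": 0.05,
         "accidental deletion": 0.10, "malware": 0.08}

# Assuming the events are independent, the chance that at least one of them
# hits you this year is 1 minus the chance that none of them do.
p_at_least_one = 1 - prod(1 - p for p in risks.values())
print(f"{p_at_least_one:.1%}")   # roughly 25% with these figures
```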
Here we’ll detail the steps to develop a dependable backup strategy:
1. Determine what data has to be backed up
“Everything” would probably be your answer. However, the level of data protection would vary based on how critical it is to restore that particular dataset. Your organization’s Recovery Time Objective (RTO), which is the maximum acceptable length of time required for an organization to recover lost data and get back up and running, would be a reliable benchmark when forming your backup strategy.
Assess and group your applications and data into the following:
• Existentially-critical for the business to survive
• Mission-critical for the organization to operate
• Optimal-for-performance for the organization to thrive
Once all pertinent data is identified, layer the level of protection accordingly.
Of course, you should back up the data on all of the desktops, laptops, and servers in your office. But what about data stored on staff members' home computers? Or on mobile devices? Is your website backed up? What kind of data is your organization storing in the cloud? How is your email backed up?
It's not usually necessary to back up the complete contents of each individual computer's hard drive — most of that space is taken up by the operating system and program files, which you can easily reload from a CD if necessary.
Also consider data you currently store only in hard copy, as this kind of data is not easily reproducible. For example: Financial information, HR information, Contracts, Leases, etc.
This type of information should be stored in a waterproof safe deposit box or file cabinet as well as backed up electronically (either scanned or computer-generated). Give highest priority to crucial data.
2. Determine how often data has to be backed up
The frequency with which you back up your data should be aligned with your organization’s Recovery Point Objective (RPO), which is defined as the maximum allowable period between the time of data loss and the last useful backup of a known good state. Thus, the more often your data is backed up, the more likely you are to comply with your stated RPO. As a good rule of thumb, backups should be performed at least once every 24 hours to meet acceptable standards of most organizations.
Each organization needs to decide how much work it is willing to risk losing and set its backup schedule accordingly. Database and accounting files are your most critical data assets. They should be backed up before and after any significant use. For most organizations, this means backing up these files daily. Nonprofits that do a lot of data entry should consider backing up their databases after each major data-entry session. Core files like documents (such as your Documents folders) and email files should be backed up at least once a week, or even once a day.
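One simple way to keep yourself honest about backup frequency is to check the age of the newest backup against your stated RPO. A minimal Python sketch; the backup directory and the 24-hour RPO are assumptions for illustration:

```python
import time
from pathlib import Path

# The backup location and the 24-hour RPO are assumptions for illustration.
BACKUP_DIR = Path("/backups/daily")
RPO_HOURS = 24

newest = max(BACKUP_DIR.glob("*"), key=lambda p: p.stat().st_mtime, default=None)
if newest is None:
    print("No backups found - the RPO cannot be met.")
else:
    age_hours = (time.time() - newest.stat().st_mtime) / 3600
    status = "OK" if age_hours <= RPO_HOURS else "RPO VIOLATED"
    print(f"Latest backup: {newest.name}, {age_hours:.1f} hours old - {status}")
```

Run from a scheduled job, a check like this turns a silent backup failure into a visible alert before it matters.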
3. Identify and implement a suitable backup and recovery solution
Based on your organization’s requirements, you need to identify a suitable backup solution as part of your backup strategy.
Some aspects to consider
There are two broadly defined approaches to backup: on-premises backup and remote backup. Either route (or both) may be appropriate for your nonprofit.
In an on-premises setup, you can copy your data to a second hard drive, other media, or a shared drive, either manually or at specified intervals.
With this setup, all the data is within your reach — and therein lies both its value and its risk. You can always access your information when necessary, but that information is vulnerable to loss, whether through theft (someone breaking in and stealing equipment) or damage (such as a leaky water pipe or a natural disaster).
In remote backup, your computer automatically sends your data to a remote center at specified intervals. To perform a backup, you simply install the software on every computer containing data you want to back up, set up a backup schedule, and identify the files and folders to be copied. The software then takes care of backing up the data for you.
With remote backup solutions, you don't incur the expense of purchasing backup equipment, and in the event of a disaster you can still recover critical data. This makes remote backup ideal for small nonprofits (say, 2 to 10 people) that need to back up critical information such as donor lists, fundraising campaign documents, and financial data, but lack the equipment, expertise, or inclination to set up dedicated on-site storage.
Automation is another key benefit to remote backup. A software program won't forget to make an extra copy of a critical folder; a harried employee at the end of a busy week might. By taking the backup task out of your users' hands you avoid the "I forgot" problem.
The main downside to remote backup solutions is that Internet access is required to fully restore your backed-up data. If your Internet connection goes down (as may happen in a disaster scenario), you won't be able to restore from your backups until your Internet connection is restored.
Another potential downside is that you have to entrust critical data to a third party. So, make sure you choose a provider that is reliable, stable, and secure. You can also help secure your data by encrypting it before it is transmitted to the remote backup center.
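One way to do that is client-side encryption, so that only ciphertext ever leaves your machine. Below is a brief sketch using the third-party Python "cryptography" package; the file names are hypothetical, and losing the key means losing the backups, so the key must be stored separately and safely:

```python
from pathlib import Path
from cryptography.fernet import Fernet  # third-party "cryptography" package

# Generate a key once and keep it away from the backups themselves;
# without it, the encrypted copies cannot be restored.
key = Fernet.generate_key()
Path("backup.key").write_bytes(key)

cipher = Fernet(key)
plaintext = Path("donors.db").read_bytes()                  # hypothetical file worth protecting
Path("donors.db.enc").write_bytes(cipher.encrypt(plaintext))
# Upload only donors.db.enc; the remote provider never sees readable data.
```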
With most backup solutions you can choose to back up all of your data (a full backup) or just parts of your data (an incremental or differential backup).
A full backup is the most complete type of backup. It is more time-consuming and requires more storage space than other backup options.
An incremental backup only backs up files that have been changed or newly created since the last incremental backup. This is faster than a full backup and requires less storage space. However, in order to completely restore all your files, you'll need to have all incremental backups available. And in order to find a specific file, you may need to search through several incremental backups.
A differential backup also backs up a subset of your data, like an incremental backup. But a differential backup only backs up the files that have been changed or newly created since the last full backup.
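The practical difference between the three schemes comes down to which timestamp each file is compared against. A simplified Python sketch of that decision; the paths and scheduling are assumptions, and real backup tools also handle deletions, permissions and open files:

```python
import shutil, time
from pathlib import Path

SOURCE = Path("data")                                    # hypothetical folder to protect
DEST = Path("backups") / time.strftime("%Y%m%d-%H%M%S")  # one folder per backup run

def backup(kind, last_full_time, last_backup_time):
    """Copy files according to the chosen scheme:
    full         -> everything
    differential -> files changed since the last FULL backup
    incremental  -> files changed since the last backup of ANY kind
    """
    cutoff = {"full": 0,
              "differential": last_full_time,
              "incremental": last_backup_time}[kind]
    for src in SOURCE.rglob("*"):
        if src.is_file() and src.stat().st_mtime > cutoff:
            target = DEST / src.relative_to(SOURCE)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, target)                    # copy2 preserves timestamps

# Example: a nightly job might call backup("incremental", last_full, last_any),
# where the two timestamps are read from a small log of previous runs.
```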
Features your organization requires
Below are several essential aspects of a comprehensive and dependable backup and restore solution to consider:
• Ease of Backup: Automated and/or on-demand options
• Restore Flexibility: Cross-user, search-based, point-in-time
• Scalability: License and user management
• Ease of Use: Intuitive user interface and self-service recovery
• Post-purchase Experience: Free support and unlimited storage
• Strong Credentials: Superior customer ratings, security & compliance certifications
All backup routines must balance expense and effort against risk. Few backup methods are 100-percent airtight — and those that are may be more trouble to implement than they're worth. That said, here are some rules of thumb to guide you in developing a solid backup strategy:
Develop a written backup plan that tells you:
• What's being backed up
• Where it's being backed up
• How often backups will occur
• Who's in charge of performing backups
• Who's in charge of monitoring the success of these backups
Think beyond just your office and its computers.
For on-premises backup solutions, we recommend rotating a set of backups off-site once a week. Ideally, you should store your backups in a secure location, such as a safe deposit box. Another method is to follow the "2x2x2" rule: two sets of backups held by two people at two different locations.
Especially if your area is susceptible to natural disasters, think about going a step further. You need to make sure your local and remote backup solutions won't be hit by the same disaster that damages your office.
Although it may sound overly cautious, you will be glad to have a system like this in place should disaster strike.
Consider what data would be most essential to have at your fingertips in an unexpected scenario. If you lose Internet connectivity, online services will be unavailable. What information or files would be key as you wait to regain Internet connectivity (which will enable you to restore from an offsite backup)? Where will you store those files?
4. Test and Monitor your backup system
Once your backup system is in place, test it, both to check that the backup is successful and that the restore is smooth and accurate. Verify the backup and restore with regards to various types of artifacts – accounts, emails, documents, sites, etc. If the backup solution supports end-user backup – inform and educate your users about using it. Finally, remember to monitor your backup performance and regularly check the logs for data lapses.
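A simple restore test is to compare checksums of the original files against the copies you have just restored. A short Python sketch with hypothetical paths:

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in chunks so large files need not fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical paths: the live file and the copy just restored from backup.
original = Path("data/donors.db")
restored = Path("restore-test/donors.db")
print("restore OK" if sha256(original) == sha256(restored) else "MISMATCH - investigate")
```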
These days, bugs are far more complex than a moth stuck between relay contacts in a computer. In fact, in the past 2-3 years, a new class of bugs (which we now call vulnerabilities) has been found directly in Intel processor chips, making them especially hard to detect and get rid of. If exploited, they can be used to steal sensitive information directly from the processor.
[separator]
The bugs are reminiscent of Meltdown and Spectre from 2018, which exploited a weakness in speculative execution, an important part of how modern processors work. Speculative execution helps processors predict, to a certain degree, what an application or operating system might need next, making the app run faster and more efficiently. The processor executes its predictions if they're needed, or discards them if they're not.
Both Meltdown and Spectre bugs leaked sensitive data stored briefly in the processor, including secrets such as passwords, secret keys and account tokens, and private messages.
Now some of the same researchers are back with an entirely new round of data-leaking bugs. "ZombieLoad", as it's called, is a side-channel attack targeting Intel chips that allows hackers to exploit design flaws rather than inject malicious code. Intel said ZombieLoad is made up of four bugs, which the researchers reported to the chip maker in April 2019.
Almost every computer with an Intel chip dating back to 2011 is affected by the vulnerabilities.
ZombieLoad takes its name from a “zombie load”, an amount of data that the processor can’t understand or properly process, forcing the processor to ask for help from the processor’s microcode to prevent a crash. Apps are usually only able to see their own data, but this bug allows that data to bleed across those boundary walls. ZombieLoad will leak any data currently loaded by the processor’s core, the researchers said. Intel said patches to the microcode will help clear the processor’s buffers, preventing data from being read.
In practical terms, the researchers showed in a proof-of-concept video that the flaws could be exploited to see which websites a person is visiting in real time, but the technique could easily be repurposed to grab passwords or access tokens used to log into a victim's online accounts.
Like Meltdown and Spectre, it’s not just PCs and laptops that are affected by ZombieLoad - the cloud is also vulnerable. ZombieLoad can be triggered in virtual machines, which are meant to be isolated from other virtual systems and their host device.
Although no attacks have been publicly reported, the researchers couldn’t rule them out nor would any attack necessarily leave a trace, they said.
What does this mean for the average user? There’s no need to panic, for one. These are far from drive-by exploits where an attacker can take over your computer in an instant. Researchers said it was “easier than Spectre” but “more difficult than Meltdown” to exploit and both required a specific set of skills and effort to use in an attack.
There are far easier ways to hack into a computer and steal data. But research into speculative execution and side-channel attacks is still in its infancy. As more findings come to light, these data-stealing attacks have the potential to become easier to exploit and more streamlined.
Intel has released microcode to patch vulnerable processors, including Intel Xeon, Intel Broadwell, Sandy Bridge, Skylake and Haswell chips. Intel Kaby Lake, Coffee Lake, Whiskey Lake and Cascade Lake chips are also affected, as well as all Atom and Knights processors.
But other tech giants, like consumer PC and device manufacturers, are also issuing patches as a first line of defense against possible attacks. Computer and operating system makers Apple and Microsoft and browser maker Google have released patches, with other companies expected to follow.
Intel said the latest microcode updates, like previous patches, would have an impact on processor performance. Most patched consumer devices could take a 3 percent performance hit at worst, and as much as 9 percent in a datacenter environment.
But with patches rolling out for the past few months, there’s no reason to pass on a chance to prevent such an attack.
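If you run Linux, recent kernels publish their own verdict on this family of flaws, so you can quickly confirm whether your microcode and operating system patches are in effect; other operating systems rely on vendor tools instead. A small Python sketch:

```python
from pathlib import Path

# Recent Linux kernels expose their assessment of CPU side-channel flaws under sysfs;
# the "mds" entry covers the ZombieLoad family. This is a Linux-only sketch.
for entry in sorted(Path("/sys/devices/system/cpu/vulnerabilities").glob("*")):
    print(f"{entry.name:20s} {entry.read_text().strip()}")
# A patched system typically reports something like "Mitigation: Clear CPU buffers"
# for mds, while "Vulnerable" means the microcode or OS updates are missing.
```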
STRETCH was the most complex electronic system yet designed and, in fact, it was the first one with a design based on an earlier computer (the IBM 704). Unfortunately, it failed its primary goal, that of being 200 or even 100 times faster than the competition, since it was only about 25-50 times faster. Only seven other Stretch machines were built after the one that went to Los Alamos, all for government agencies (like the Weather Service for charting the path of storms) or government contractors (like MITRE).
[separator]
In April 1955, IBM had lost a major bid to build a computer for the U.S. Atomic Energy Commission's Livermore Laboratory to the UNIVAC division of Remington Rand. UNIVAC had promised up to five times the processing power called for in the Government's bid request, so IBM decided it should play that game too the next time it had an opportunity.
Supercomputers – the pioneers
When Los Alamos Scientific Laboratory was next to publish a bid request, IBM promised that a system operating at 100 times present speeds would be ready for delivery at the turn of the decade. Here is where the categorical split happened between "conventional computers" and supercomputers: IBM committed itself to producing a whole new kind of computing mechanism, one entirely transistorized for the first time. There had always been a race to build the fastest and most capable machine, but the market had not yet begun its path to maturity until that first cell split, when it was determined that atomic physics research represented a different customer profile compared to business accounting, and needed a different class of machine.
Stephen W. Dunwell was Stretch's lead engineer and project manager. In a 1989 oral history interview for the University of Minnesota's Charles Babbage Institute, he recalled the all-hands meeting he attended, along with legendary IBM engineer Gene Amdahl and several others. There, the engineers and their managers came to the collective realization that there needed to be a class of computers above and beyond the common computing machine, if IBM was to regain a competitive edge against competitors such as Sperry Rand.
Gordon Bell, the brilliant engineer who developed the VAX series for DEC, would later recall that engineers of his ilk began using the term "supercomputer" when referring to machines in this upper class, as early as 1957, while the 7030 project was underway.
The architectural gap between the previous IBM 701 design and that of the new IBM 7030 was so great that engineers dubbed the new system "Stretch". It introduced the notion of instruction "look-ahead" and made heavy use of index registers, both of which are principal components of modern x86 processor design. Though it used 64-bit "words" internally, Stretch employed the first random-access memory mechanism based on magnetic disk, breaking those words down into 8-bit alphanumeric segments that engineers dubbed "bytes".
Though IBM successfully built and delivered eight 7030 models between 1961 and 1963, keeping a ninth for itself, Dunwell's superiors declared it a failure for only being 30 times faster than 1955 benchmarks, instead of 100. Declaring something you built yourself a failure typically prompts others to agree with you, often for no other viable reason. When competitor Control Data set about building a system a mere three times faster than the IBM 7030, and then in 1964 met that goal with the CDC 6600 (principally designed by Seymour Cray), the "supercomputer" moniker stuck to it like glue. (Even before Control Data ceased to exist, the term attached itself to Cray.) Indeed, the CDC 6600 pioneered parallelism with its multiple independent functional units, a forerunner of the vector processing found in later machines. But no computer today, not even your smartphone, is without parallel processing, nor is it without index registers, look-ahead instruction pre-fetching or bytes.
The giants of supercomputing
According to Top500.org, IBM nowadays sits in the second spot of the supercomputer race.
The TOP500 project ranks and details the 500 most powerful non-distributed computer systems in the world. The project was started in 1993 and publishes an updated list of the supercomputers twice a year. The first of these updates always coincides with the International Supercomputing Conference in June, and the second is presented at the ACM/IEEE Supercomputing Conference in November. The project aims to provide a reliable basis for tracking and detecting trends in high-performance computing and bases rankings on HPL, a portable implementation of the high-performance LINPACK benchmark written in Fortran for distributed-memory computers.
The 55th edition of the TOP500 saw some significant additions to the list, spearheaded by a new number one system from Japan. The latest rankings also reflect a steady growth in aggregate performance and power efficiency.
The new top system, Fugaku, turned in a High Performance Linpack (HPL) result of 415.5 petaflops, besting the now second-place Summit system by a factor of 2.8x. Fugaku is powered by Fujitsu’s 48-core A64FX SoC, making it the first number one system on the list to be powered by ARM processors. In single or further reduced precision, which are often used in machine learning and AI applications, Fugaku’s peak performance is over 1,000 petaflops (1 exaflops). The new system is installed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan.
Number two on the list is Summit, an IBM-built supercomputer that delivers 148.8 petaflops on HPL. The system has 4,356 nodes, each equipped with two 22-core Power9 CPUs, and six NVIDIA Tesla V100 GPUs. The nodes are connected with a Mellanox dual-rail EDR InfiniBand network. Summit is running at Oak Ridge National Laboratory (ORNL) in Tennessee and remains the fastest supercomputer in the US.
At number three is Sierra, a system at the Lawrence Livermore National Laboratory (LLNL) in California achieving 94.6 petaflops on HPL. Its architecture is very similar to Summit, equipped with two Power9 CPUs and four NVIDIA Tesla V100 GPUs in each of its 4,320 nodes. Sierra employs the same Mellanox EDR InfiniBand as the system interconnect.
Sunway TaihuLight, a system developed by China’s National Research Center of Parallel Computer Engineering & Technology (NRCPC) drops to number four on the list. The system is powered entirely by Sunway 260-core SW26010 processors. Its HPL mark of 93 petaflops has remained unchanged since it was installed at the National Supercomputing Center in Wuxi, China in June 2016.
At number five is Tianhe-2A (Milky Way-2A), a system developed by China’s National University of Defense Technology (NUDT). Its HPL performance of 61.4 petaflops is the result of a hybrid architecture employing Intel Xeon CPUs and custom-built Matrix-2000 coprocessors. It is deployed at the National Supercomputer Center in Guangzhou, China.
A new system on the list, HPC5, captured the number six spot, turning in an HPL performance of 35.5 petaflops. HPC5 is a PowerEdge system built by Dell and installed by the Italian energy firm Eni S.p.A, making it the fastest supercomputer in Europe. It is powered by Intel Xeon Gold processors and NVIDIA Tesla V100 GPUs and uses Mellanox HDR InfiniBand as the system network.
Another new system, Selene, is in the number seven spot with an HPL mark of 27.58 petaflops. It is a DGX SuperPOD, powered by NVIDIA’s new “Ampere” A100 GPUs and AMD’s EPYC “Rome” CPUs. Selene is installed at NVIDIA in the US. It too uses Mellanox HDR InfiniBand as the system network.
Frontera, a Dell C6420 system installed at the Texas Advanced Computing Center (TACC) in the US is ranked eighth on the list. Its 23.5 HPL petaflops is achieved with 448,448 Intel Xeon cores.
The second Italian system in the top 10 is Marconi-100, which is installed at the CINECA research center. It is powered by IBM Power9 processors and NVIDIA V100 GPUs, employing dual-rail Mellanox EDR InfiniBand as the system network. Marconi-100’s 21.6 petaflops earned it the number nine spot on the list.
Rounding out the top 10 is Piz Daint at 21.2 petaflops, a Cray XC50 system installed at the Swiss National Supercomputing Centre (CSCS) in Lugano, Switzerland. It is equipped with Intel Xeon processors and NVIDIA P100 GPUs.
Interesting facts revealed by Top500:
China continues to dominate the TOP500 when it comes to system count, claiming 226 supercomputers on the list. The US is number two with 114 systems; Japan is third with 30; France has 18; and Germany claims 16. Despite coming in second on system count, the US continues to edge out China in aggregate list performance with 644 petaflops to China’s 565 petaflops. Japan, with its significantly smaller system count, delivers 530 petaflops.
Also, Chinese manufacturers dominate the list in the number of installations with Lenovo (180), Sugon (68) and Inspur (64) accounting for 312 of the 500 systems. HPE claims 37 systems, while Cray/HPE has 35 systems. Fujitsu is represented by just 13 systems, but thanks to its number one Fugaku supercomputer, the company leads the list in aggregate performance with 478 petaflops. Lenovo, with 180 systems, comes in second in performance with 355 petaflops.
Regardless of the manufacturer, as a technology trend, a total of 144 systems on the list are using accelerators or coprocessors, which is nearly the same as the 145 reported six months ago. As has been the case in the past, the majority of the systems equipped with accelerator/coprocessors (135) are using NVIDIA GPUs.
x86 continues to be the dominant processor architecture, present in 481 of the 500 systems. Intel claims 469 of these, with AMD installed in 11 and Hygon in the remaining one. Arm processors are present in just four TOP500 systems, three of which employ the new Fujitsu A64FX processor, with the fourth powered by Marvell’s ThunderX2 processor.
The breakdown of system interconnect share is largely unchanged from six months ago. Ethernet is used in 263 systems, InfiniBand is used in 150, and the remainder employ custom or proprietary networks. Despite Ethernet’s dominance in sheer numbers, those systems account for 471 petaflops, while InfiniBand-based systems provide 803 petaflops. Due to their use in some of the list’s most powerful supercomputers, systems with custom and proprietary interconnects together represent 790 petaflops.
The most energy-efficient system on the Green500 is the MN-3, based on a new server from Preferred Networks. It achieved a record 21.1 gigaflops/watt during its 1.62 petaflops performance run. The system derives its superior power efficiency from the MN-Core chip, an accelerator optimized for matrix arithmetic. It is ranked number 395 in the TOP500 list.
In second position is the new NVIDIA Selene supercomputer, a DGX A100 SuperPOD powered by the new A100 GPUs. It occupies position seven on the TOP500.
In third position is the NA-1 system, a PEZY Computing/Exascaler system installed at NA Simulation in Japan. It achieved 18.4 gigaflops/watt and is at position 470 on the TOP500.
The number nine system on the Green500 is the top-performing Fugaku supercomputer, which delivered 14.67 gigaflops per watt. It is just behind Summit in power efficiency, which achieved 14.72 gigaflops/watt.
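As a quick sanity check on these efficiency figures, dividing sustained performance by efficiency gives the implied power draw. Using the MN-3 numbers quoted above, a tiny Python calculation:

```python
# Performance divided by efficiency gives the implied power draw for the benchmark run.
performance_gflops = 1.62e6           # MN-3's 1.62 petaflops, expressed in gigaflops
efficiency_gflops_per_watt = 21.1     # MN-3's Green500 result
power_watts = performance_gflops / efficiency_gflops_per_watt
print(f"{power_watts / 1000:.0f} kW") # roughly 77 kW during the HPL run
```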
The TOP500 list has incorporated the High-Performance Conjugate Gradient (HPCG) benchmark results, which provide an alternative metric for assessing supercomputer performance and are meant to complement the HPL measurement.
The number one TOP500 supercomputer, Fugaku, is still the leader on the HPCG benchmark, with a record 13.4 HPCG-petaflops. The two US Department of Energy systems, Summit at ORNL and Sierra at LLNL, are now second and third, respectively, on the HPCG benchmark. Summit achieved 2.93 HPCG-petaflops and Sierra 1.80 HPCG-petaflops. All the remaining systems achieved less than one HPCG-petaflops.
August 31, 1994 is the day Aldus Corp. and Adobe Systems Inc. finalized their merger. The two companies hoped to combine forces in creating powerful desktop publishing software, building on the field Aldus founder, Paul Brainerd, had created in 1985 with his PageMaker software. PageMaker was one of three components to the desktop publishing revolution. The other two were the invention of Postscript by Adobe and the LaserWriter laser printer from Apple. All three were necessary to create a desktop publishing environment.
[separator]
With the advent of desktop publishing environments, the passage “Lorem Ipsum...” became the popular dummy text of electronic publishing as well. Lorem Ipsum had already been the printing and typesetting industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularized in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages and became widely used within every desktop and online publishing environment.
Search the Internet for the phrase “lorem ipsum”, and the results reveal why this strange phrase has such a core connection to the lexicon of the Web. Its origins are murky, but according to multiple sites that have attempted to chronicle the history of this word pair, “lorem ipsum” was taken from a scrambled and altered section of “De finibus bonorum et malorum”, (translated: “Of Good and Evil,”) a 1st-Century B.C. Latin text by the great orator Cicero.
According to Cecil Adams, curator of the Internet trivia site The Straight Dope, the text from that work of Cicero was available for many years on adhesive sheets in different sizes and typefaces from a company called Letraset.
“In pre-desktop-publishing days, a designer would cut the stuff out with an X-acto knife and stick it on the page”, Adams wrote. “When computers came along, Aldus included lorem ipsum in its PageMaker publishing software, and you now see it wherever designers are at work, including all over the Web.”
This pair of words is so common that many Web content management systems deploy it as default text. Things get really interesting when you realize that “lorem ipsum” could be transformed into so many apparently geopolitical and startlingly modern phrases when translated from Latin to English using Google Translate.
Even though the algorithm has since been changed, a while back users could notice a bizarre pattern in Google Translate: when one typed “lorem ipsum” into Google Translate, the default results (with the system auto-detecting Latin as the language) returned a single word: “China.”
Capitalizing the first letter of each word changed the output to “NATO” — the acronym for the North Atlantic Treaty Organization. Reversing the words in both lower and uppercase produced “The Internet” and “The Company” (the “Company” with a capital “C” has long been a code word for the U.S. Central Intelligence Agency). Repeating and rearranging the word pair with a mix of capitalization generated even stranger results. For example, “lorem ipsum ipsum ipsum Lorem” generated the phrase “China is very very sexy.”
Below you will see some of these translation results:
Security researchers wondered what was going on. Had someone outside of Google figured out how to map certain words to different meanings in Google Translate? Was it a secret or covert communications channel? Perhaps a form of communication meant to bypass the censorship erected by the Chinese government with the Great Firewall of China? Or was this all just some coincidental glitch in the Matrix? :)
One thing was for sure: the results were subtly changing from day to day, and it wasn’t clear how long these two common, but obscure words would continue to produce the same results.
Things began to get even more interesting when the researchers started adding other words from the Cicero text out of which the “lorem ipsum” bit was taken, including: “Neque porro quisquam est qui dolorem ipsum quia dolor sit amet, consectetur, adipisci velit . . .” (“There is no one who loves pain itself, who seeks after it and wants to have it, simply because it is pain …”).
Adding “dolor” and “sit” and “consectetur,” for example, produced even more bizarre results. Translating “consectetur Sit Sit Dolor” from Latin to English produces “Russia May Be Suffering.” “sit sit dolor dolor” translates to “He is a smart consumer.” An example of these sample translations is below:
Latin is often dismissed as a “dead” language, and whether or not that is fair or true, it seems pretty clear that there should not be Latin words for “cell phone,” “Internet” and other mainstays of modern life in the 21st Century. However, this incongruity helps to shed light on one possible explanation for such odd translations: Google Translate simply doesn’t have enough Latin texts available to have thoroughly learned the language.
In an introductory video titled “Inside Google Translate”, Google explains how the translation engine works, what are the sources of the engine’s intelligence and what are its limitations. According to Google, its Translate service works “by analyzing millions and millions of documents that have already been translated by human translators...These translated texts come from books, organizations like the United Nations and Web sites from all around the world. Our computers scan these texts looking for statistically significant patterns. That is to say, patterns between the translation and the original text that are unlikely to occur by chance. Once the computer finds a pattern, you can use this pattern to translate similar texts in the future. When you repeat this process billions of times, you end up with billions of patterns, and one very smart computer program. For some languages, however, we have fewer translated documents available and, therefore, fewer patterns that our software has detected. This is why our translation quality will vary by language and language pair.”
Still, this doesn’t quite explain why Google Translate would include so many specific references to China, the Internet, telecommunications, companies, departments and other odd couplings in translating Latin to English.
Apparently, Google took notice and something important changed in Google’s translation system that currently makes the described examples impossible to reproduce :)
Google Translate abruptly stopped translating the word “lorem” into anything but “lorem” from Latin to English. Google Translate still produces amusing and peculiar results when translating Latin to English in general.
A spokesman for Google said the change was made to fix a bug with the Translate algorithm (aligning ‘lorem ipsum’ Latin boilerplate with unrelated English text) rather than a security vulnerability.
Security researchers said that they are convinced that the lorem ipsum phenomenon is not an accident or chance occurrence.
In this age, e-mail services have become an intrinsic part of our lives, but we give them much more credit than we should. Take a look at e-mail's history throughout time and how it influences its nature today.
[separator]
Nowadays we are using e-mail services from different cloud service providers and we cannot envision a world without them. As with any mass scale service, the threats are numerous and the fact that email is not always safe by nature doesn’t help in securing this type of service. Email was never meant to be secure. The way email is used today - and its security needs - differs greatly from what its inventors intended.
Email security uses AI and other filtering techniques to stop malware, phishing scams and business email compromise (BEC). As malicious actors turn to cloud environments to exploit G Suite and attack Office 365, email security is a vast undertaking with no one-size-fits-all approach. Good email security begins with a comprehensive understanding of the threat and willingness to evolve as email usage continues to rise and change.
Email’s evolution
Relative to modern computer technologies, email evolved slowly. It originated in MIT’s Compatible Time-Sharing System in 1965, which stored files and messages on a central disk, with users logging in from remote computers. In 1971, the @ symbol was introduced to help users target specific recipients. In 1977, the “To” and “From” fields and message forwarding were created within DARPA’s ARPANET, constituting email’s first standard. These advances created the conditions for spam prototypes and, in 1978, the first mass email was sent to 397 ARPANET users. It was so unpopular that no one would try it again for a decade. Email security became necessary in the late 1980s, when spam proliferated as a prank among gamers, and quickly gained prominence as a criminal activity.
Thirty years later, email is vastly more powerful and sophisticated, with the cloud connecting users and syncing files in real-time. These factors have incentivized malicious actors to send nearly 4.7 billion phishing emails every day. Phishing is one of the most prominent forms of cybercrime today. Hackers use social engineering techniques to fool even the most attentive employees into opening a malicious attachment, clicking on a malicious link or disclosing credentials. 92% of malware is delivered via a successful phishing attack over email, enabling hackers to access corporate data infrastructures and steal millions of dollars or personally identifiable information (PII).
In reality, email security will never be 100% safe. Knowing this, hackers will never stop leveraging email as an attack vector — especially not when there are close to 200 Million Office 365 users and 1.5 Billion G Suite users sharing confidential information and documents to do their jobs. The email providers protecting those users are responsible for the security of the cloud, but you're responsible for the security of your data in the cloud. Email is the double-edged sword of the business world: it’s the enterprise’s communicative lifeblood, but that makes it the primary point of entry for hackers. All a hacker needs is one successful phishing email to open up opportunities for malware, ransomware, BEC, and other attack methods to obtain credentials and hold the organization hostage.
According to the Internet Crime Complaint Center (IC3), both the number of complaints about cyberattacks and the financial losses of the attacked businesses have steadily increased over the past few years. In 2018, the IC3 received over 350,000 complaints (50,000 more than the year before) and financial losses nearly doubled, from $1.4 billion to $2.7 billion. Although these statistics can be attributed to more reporting from increased user awareness, they also do not account for how many attacks go unreported — simply because victims don’t know about them until it’s too late. That’s one of the most insidious parts of email attacks: they allow the hacker to lurk in networks, observing systems and processes, waiting for the right moment to strike — and implicating potentially anyone and anything ever associated with the compromised account. Even business partners and clients are vulnerable. Given the stakes of poor email security, it’s jarring to see how many businesses around the world are unprepared for email attacks.
Today’s cloud environment means email security must go beyond the capabilities of most Secure Email Gateways, which were originally designed to protect on-premises email. In a cloud environment, email security must prioritize anti-phishing, anti-malware and anti-spam capabilities. With email integrated into applications and file sharing, business collaboration suites like Office 365 and G Suite present hackers with multiple entry points and exploits once inside the system. This also means that email security must include mid-attack measures, like compromised account detection and access management tools. Email security must also offer full-suite protection. It should connect to the native API of cloud email providers and associated SaaS/productivity applications, like OneDrive and SharePoint.
Key capabilities of the anti-phishing, anti-malware and anti-spam email security market start with content inspection tools, like a network sandbox: an isolated environment that mimics end-user operations and detonates suspicious files. The network sandbox allows for the proactive identification of malicious content, which administrators can disarm and reconstruct. In the live environment, URL rewriting detects malicious links (sometimes rebranding the links, so that end users can see that security has done its job) and performs time-of-click analysis. Every email security solution should scan message attachments, and they must be scanned before being passed through to their recipients. It’s far too easy, for instance, for hackers to embed malware in an attachment. Hackers also use attachments to execute various Business Email Compromise schemes, like false invoices. In that case, the hacker might have breached the company accountant and pretended to be them while emailing an unsuspecting employee, asking them to sign off on a doctored invoice. If the forged email is reasonably good, and if the recipient of the email is working quickly and expecting the invoice, there may be no way to stop the scam — unless the email security system detects a blemish in the attachment.
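As a rough illustration of the attachment-scanning step described above, here is a minimal Python sketch; the SHA-256 blocklist and the file name are assumptions, and a real product would also detonate files in a sandbox and pull in vendor threat intelligence rather than rely on hashes alone:

# Minimal sketch of pre-delivery attachment scanning (illustrative only):
# flag attachments whose SHA-256 hash appears on a local blocklist.
import hashlib
from email import policy
from email.parser import BytesParser

KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # placeholder entry
}

with open("incoming.eml", "rb") as f:           # made-up file name
    msg = BytesParser(policy=policy.default).parse(f)

for part in msg.iter_attachments():
    payload = part.get_payload(decode=True) or b""
    digest = hashlib.sha256(payload).hexdigest()
    if digest in KNOWN_BAD_HASHES:
        print(f"Quarantine candidate: {part.get_filename()} ({digest})")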
Web isolation services prevent malware and phishing threats while allowing broad web access, by isolating potentially risky traffic. But phishing attacks and BEC need to be stopped with more targeted security measures, like display-name spoof detection, domain-based message authentication, lookalike domain detection and anomaly detection. Together, these features identify compromised accounts. The most adapted email security connects to the cloud via APIs and uses artificial intelligence (AI) to detect communication patterns and relationships between employees and customers. With this data, the solution uses a threat detection algorithm and machine learning to prevent hackers from weaponizing the email suite in an account takeover scenario. This ability to gather real-time and historical data on every user, file, event and policy — not only of internal accounts, but of everyone who has access — allows for a seamless threat protection protocol. Solutions that adapt to each specific business environment are preferable to one-size-fits-all vendors whose product works the same for every customer.
Email security
The first step to securing email is writing solid email security policies. This begins with developing a complete understanding of incoming email to the company. What are routine communications between clients, partners and the organization? To define this, the email security solution should learn the environment rapidly, using protection for links, attachments and suspicious subject-lines, sender behavior, and language within the email. Smart email security policies should then clearly define what happens next in a workflow, but create enough flexibility for the policies to meet the organization’s needs. For instance, should all suspicious content be sent to the spam folder or quarantine folder for review? Should it be separated from the message, which can otherwise continue through? Suspicious content needs to be sent to a secure location for detailed analysis. If a threat is detected, the policies should state an action item for investigating the scope of that specific threat, and for determining if it has affected other parts of the cloud infrastructure.
Finally, once the entirety of the malicious activity is uncovered, there need to be policies around reinforcing encryption and safeguarding from future attacks. But what decisions guide changing existing policies and creating new ones? A good first step to email security policies, after a breach, is to analyze the originally breached email with full headers and original attachments, so that you can examine IP addresses. It’s equally important to examine click patterns, both as recorded by systems and as practiced by the user. What was the user thinking when they encountered the phishing email? Did they notice any suspicious activity around that time? Once you’ve gained a thorough understanding of the incident, seize the opportunity to take smart account cleanup measures. Changing passwords is a must. Keep track of the active session for the affected users, to ensure that the hacker isn’t still able to access the network through a legitimate channel like a VPN. Check mailbox configurations to see if the hacker changed them during the compromise. Finally, naturally integrate more targeted security tools into email and associated applications that end-users rely on every day, so that business can keep on humming in an even more secure state than before.
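As a small illustration of the header analysis mentioned above, the sketch below pulls the Received chain and any IPv4 addresses out of a saved copy of the breached message; the file name and the simple regex are illustrative assumptions, and a real investigation would cross-check these hops against mail-flow logs:

# Rough sketch: list the "Received" hops and the IPv4 addresses they contain,
# as a starting point for post-incident header analysis.
import re
from email import policy
from email.parser import BytesParser

IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

with open("breached_message.eml", "rb") as f:   # made-up file name
    msg = BytesParser(policy=policy.default).parse(f)

# Received headers are prepended by each hop, so the last one listed is
# usually the closest to the original sender.
for hop, received in enumerate(msg.get_all("Received", []), start=1):
    ips = IPV4.findall(received)
    print(f"Hop {hop}: {ips or 'no IP found'}")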
The best email security practices blend seamless protection for users and the reinforcement that protection is there. All users should know a few simple tactics for securing their accounts from the onset — the equivalents to not leaving the front door to their home unlocked. But they should also know that their organization has installed email security defense. A strong password is the absolute bare minimum of email security — yet analysis of breached accounts shows that millions of users still choose bad passwords like “123456,” “qwerty,” “password”, or their first name. In today’s world, when the average business user needs 191 passwords, password managers are a savior. Password managers like LastPass generate passwords for you, and store them in secure environments, reinforcing that the best password is the one you don’t know. 2-Factor Authentication (2FA) involves account log-in confirmation, like when a user receives a text or email asking if they’re trying to log into their account. These are the well-known - but least secure - forms of 2FA. It is a part of multi-factor authentication (MFA) and, although MFA is not foolproof, it’s another baseline email security measure, one that can catch the fallout from a weak password.
An understandably common-sense anti-phishing solution is to raise awareness among employees. If an end-user knows of the dangers of phishing, why would they click that unfamiliar link? However, research shows the limitations of that defense. One study showed that although 78% of 1700 participants knew the risk of unknown links infecting their computers with viruses, up to 56% of email users clicked a malicious link. Why? They were curious. Anti-phishing employee training can’t prevent phishing attacks. But a more specific type of anti-phishing behavioral conditioning can be taught, particularly in the platforms pioneered by KnowBe4. Employees can be trained to spot suspicious email activity and be equipped with user-friendly tools for reporting. These reports can be valuable to a security operations team tasked with monitoring threats, containing them if initiated, and analyzing them for future preventative measures. With email always evolving, the types of email security must always evolve. Legacy security solutions need to be updated for new environments, and new solutions must prove their viability. In general, all the types of email security fall under the two main stages: pre-delivery and post-delivery.
Pre-delivery Protection
A Secure Email Gateway is a longtime staple of email security. Because they were built for on-premises email environments, they were designed to be a firewall for email and remain that way today. With this approach, a Secure Email Gateway rejects spam, prevents data loss, inspects content, encrypts messages and more. Secure Email Gateways protect inbound and outbound messages - but email today does far more than send and receive messages. By connecting to file sharing suites and essential workplace applications, email links every facet of a user’s online identity. Without an add-on at an extra cost, a Secure Email Gateway cannot see these essential elements of daily use, so it cannot secure them.
Another problem with Secure Email Gateways is that they are lighthouses for hackers. To reroute email through a Secure Email Gateway, an organization must change its MX records to those of the gateway. Hackers know this and have found a massive loophole in this deployment mode to send malicious content directly to employees. Using publicly available tools such as MXToolbox, hackers can identify which vendor an organization uses to secure its environment, identify the root domain and bypass the scan of that specific Secure Email Gateway.
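This loophole exists because MX records are public by design. The short sketch below shows how trivially they can be read; it assumes the third-party dnspython package (pip install dnspython) and uses example.com as a stand-in domain:

# How publicly visible MX records can reveal the email gateway in front of a domain.
import dns.resolver

def mx_hosts(domain):
    """Return the MX hostnames published for a domain, lowest preference first."""
    answers = dns.resolver.resolve(domain, "MX")
    return [str(rec.exchange).rstrip(".") for rec in sorted(answers, key=lambda r: r.preference)]

for host in mx_hosts("example.com"):
    # Hostnames pointing at a well-known filtering service immediately
    # identify the gateway an attacker would try to sidestep.
    print(host)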
Post-delivery Protection
What kind of organization needs this new type of email security that scans inside the perimeter? One that:
• requires on-demand scanning of mailboxes, generally as a secondary scan at low-use times;
• wants to quickly manage outbreaks that spread through email;
• demands detection methods that use historical communication patterns (for example, to build social graphs in defense against phishing);
• has substantial intra-domain email traffic without routing through an SEG;
• uses applications that have programmatic access to the mail server;
• has users who regularly post messages in public folders.
These solutions integrate well in modern email environments.
Cloud Email Security Supplement (CESS) is a term coined by Gartner analysts to describe new measures needed in the emerging continuous adaptive risk and trust assessment (CARTA) approach to cyber security. The fact that this subset of API-based email security uses intelligence from existing security gives it a leg up on gateways - which require you to deactivate built-in security. But all of these solutions need emails to arrive in the inbox before security scans can begin. This delay in scanning means that business end-users have a small window in which to click on a phishing email. As their name suggests, CESS solutions may for now be supplemental ways of protecting the entire Microsoft Office 365 suite, for instance. But once one is in place, the organization can affirm that the CESS satisfies all required security protection and risk avoidance, which can lead to email security consolidation — and substantial cost savings.
Another important Gartner term in email security is Security Orchestration, Automation and Response (SOAR). This refers to a solution stack that can be applied to compatible products and services, which helps define, prioritize, standardize and automate incident response functions. Recently, specialists in endpoint, malware and email/collaboration security introduced a new term, M-SOAR (Mail-focused SOAR), as a way of focusing exclusively on email threats, as opposed to general security orchestration. M-SOAR is a capability at the intersection of email gateways, awareness/training and collaboration suite security software - an email security capability that few organizations currently have.
Cybersecurity is a subject which gained a lot of popularity with the broad public starting around 2007-2008. Why is that? Let’s analyze the history of some events and see whether we can come up with an explanation.
[separator]
The hacker culture originated at MIT in the 1960s. Back then, computer systems were pretty fragile in terms of architecture, and a lot was permitted to the astute architect or programmer - we are referring here to a certain lack of robustness. One of the first hacks ever recorded targeted an MIT computer system, Multics (a distant ancestor of Unix), which had a password check procedure that verified the length of a specific cryptographic string. The 20 years that followed led to the development of several hacking cultures throughout the world: some good hackers were in Australia, some in the US and some in Germany. As computers gained popularity, scenes in Russia and Kazakhstan also gained traction.
The NSF in the U.S. had control of pretty much all the nodes, and DARPA was slowly declassifying ARPANET and making the networks accessible to the public at large (the creation of the internet as we know it).
First, there was the “word”… the sound
2600. The frequency, in hertz, of the tone that controlled the trunks of the analog telephone network. By understanding simple principles of physics, one was able to hack just by modulating the right signals.
One of the first hackers to exploit this was “The Mentor”, a hacker who had control over the Australian IRS (Internal Revenue Service).
In the first half of the 1990s, Kevin Mitnick heavily hacked a lot of environments by combining technical skills and social engineering.
Afterwards, the world went through a seemingly quiet period when it comes to computer hacking. The question that naturally presents itself is: “What happened, did hackers disappear?” No, they didn’t!
From 1999 to around 2007, United States intelligence agencies started looking closer at the phenomenon. The boom of online marketplaces such as eBay and Craigslist gave birth to script kiddies. How did this work?
The real hackers used to sell scripts to script kiddies who would in turn use these scripts for financial gains.
It is very much true that during those years some Nessus and Nmap scans used to get you “root”, but the fact is that the world evolved, so the attention of the intelligence agencies shifted to a better understanding of the phenomenon. Then, starting in 2007, another transformation took place: the whole world moved to web applications.
Obviously, these were taken by storm by the serious hackers, who exploited the apps with techniques such as SQL injection (officially discovered in 1997, actually practiced in the 1980s at some branches of the DIA), cross-site scripting and, last but not least, remote command execution. A vast majority of these actions went under the radar because they were performed by real hackers, not by script kiddies. At some point the real hackers created tools to exploit web applications, and these tools were used by script kiddies. Granted, they were used for financial gain, and this attracted the attention of the FBI and some other law enforcement agencies. At this point in time, intelligence agencies started recruiting real hackers in order to understand the scenarios. Law enforcement was just coming onto the scene.
The expansion of online business attracted more online fraud. Hence, the need for cyber protection was born, in a somewhat forced fashion. Let’s analyze this claim for a second.
Why are we saying “forced fashion”?
Because major technology vendors felt overwhelmed with the pressure of coping with these attacks. So, who did they hire? They hired people with backgrounds in computer engineering. Obviously, it was a start, but not the best approach. Why?
I remember, back in 2008, I had an interlocutor who held various certifications, such as MCSE, CCNA, CCNE and a bunch of other fancy credentials that were enough to put one in the 300K-plus per year income bracket in the US.
I took out a USB stick with something called a BIOS-level rootkit. This type of rootkit (advanced remote-control software, usually clandestine) is not detected by anything in the world. No software. Why? Because there is no antivirus or any sort of protection at the BIOS level. By plugging in this USB stick, I had access to absolutely everything on the victim computer. My interlocutor, who was a SAC (Special Agent in Charge at an FBI cybercrime office), looked astonished and humbly stated that, when presented with such threats, his understanding of offensive maneuvers was the equivalent of that of a 10-year-old.
The conclusion is inevitable as the famous saying goes:
“You reap what you sow!”
The law enforcement agencies created a culture that confronted the problem by employing people with the wrong mindset. It was a start. We do not blame them, but we do applaud those who departed from the norm.
Nowadays we have a rich market which offers us solutions for every single problem:
1. We have WAF (Web Application Firewalls);
2. We have smart switches;
3. We have AI (Artificial Intelligence systems) that analyze network traffic;
4. We have EDR (Endpoint Detection and Response), otherwise known as antivirus;
5. We have DLP (Data Loss Prevention).
But do these solutions suffice when dealing with a skilled offensive actor? Absolutely not!
They are there to prevent 95% of the known threats. And we say “known threats” because there are also vulnerabilities called “N-days”.
If N=0 then we have a 0 day. A vulnerability which is not yet known by anybody except the one who discovered it, and the one who discovers a zero day may choose not to disclose the discovery.
Then we have “N days” exploits where N is bigger than 0. That is why vendors have patches.
However, to the professional hacker these security mechanisms do not matter. Let me offer a quick example which can illustrate the statement.
On the second Tuesday of every month, Microsoft patches its systems. If a hacker monitors the patching process, binary diffing can be performed (a technique that shows clearly which DLLs or executables within the OS were changed). All the hacker has to do is write exploit code against those changed executables. In theory, this is not a zero day, because Microsoft is patching it, so it is an “N day”, where N equals the number of days it takes to exploit it. However, please take into consideration: in a corporate environment consisting of thousands of machines, how often are systems patched? This sounds like a problem.
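As a simplified illustration of the first step of that workflow, the sketch below only flags which binaries changed between two snapshots by comparing their hashes; real patch diffing would then disassemble and compare the changed functions with dedicated tools such as BinDiff or Diaphora, and the directory names here are made up:

# First step of patch diffing: find which binaries actually changed.
import hashlib
from pathlib import Path

def hash_tree(root):
    """Map relative DLL path -> SHA-256 of its contents."""
    return {
        str(path.relative_to(root)): hashlib.sha256(path.read_bytes()).hexdigest()
        for path in Path(root).rglob("*.dll")
    }

before = hash_tree("snapshot_pre_patch/System32")    # illustrative paths
after = hash_tree("snapshot_post_patch/System32")

changed = [name for name in before if name in after and before[name] != after[name]]
for name in sorted(changed):
    print("Changed by the patch:", name)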
We need to ask ourselves: if we purchase a respectable security solution, how effective is this solution, when faced with a skilled attacker?
Unless this solution is administered by people who have the appropriate skills in the offensive game, it will be no match for an attacker at that level.
There is also something else worth mentioning. We are being bombarded with news about security incidents every day. However, none of these are the actual real threats. The real threats are not made public, nor do they reveal themselves to people coming from a culture where one nowadays acquires a couple of security certifications and is deemed an expert.
Lately, there is no talk about APTs (Advanced Persistent Threats) because they are not discovered with the existing skill set, not because they do not exist.
The message of this article is clear: only skilled attackers playing defenders are able to protect systems from other skilled attackers. If it were otherwise, you wouldn’t hear of security breaches every day. Most of the organizations that are attacked already possess protection technology.
Unless that technology is deployed correctly by somebody who understands the playground, it is ineffective and we shall keep on hearing about security incidents.
The demand for cloud-based solutions is increasing all around the world and data is moving to the cloud at a record pace. This includes everything, from secure data storage to entire business processes. Cloud-based internet security is an outsourced solution for storing data: instead of saving data onto local hard drives, users store it on Internet-connected servers. Data centers manage these servers to keep the data safe and secure to access.
[separator]
Enterprises turn to cloud storage solutions to solve a variety of problems: small businesses use the cloud to cut costs, while IT specialists see it as the best way to store sensitive data. Any time you access files stored remotely, you are accessing a cloud.
Email is a prime example. Most email users don’t bother saving emails to their devices because those devices are connected to the Internet.
There are three types of cloud solutions and each of these offers a unique combination of advantages and drawbacks:
• Public Cloud: These services offer accessibility and security. This security is best suited for unstructured data, like files in folders. Most users don’t get a great deal of customized attention from public cloud providers. This option is affordable.
• Private Cloud: Private cloud hosting services are on-premises solutions. Users retain full control over the system. Private cloud storage is more expensive, because the owner manages and maintains the physical hardware.
• Hybrid Cloud: Many companies choose to keep high-volume files on the public cloud and sensitive data on a private cloud. This hybrid approach strikes a balance between affordability and customization.
All files stored on secure cloud servers benefit from an enhanced level of security.
The security credential most users are familiar with is the password. Cloud storage security vendors secure data using other means as well.
Some of these include:
• Advanced Firewalls: All Firewall types inspect traveling data packets. Simple ones only examine the source and destination data. Advanced ones verify packet content integrity. These programs then map packet contents to known security threats.
• Intrusion Detection: Online secure storage can serve many users at the same time. Successful cloud security systems rely on identifying when someone tries to break into the system. Multiple levels of detection ensure cloud vendors can even stop intruders who break past the network’s initial defenses.
• Event Logging: Event logs help security analysts understand threats. These logs record network actions. Analysts use this data to build a narrative concerning network events. This helps them predict and prevent security breaches.
• Internal Firewalls: Not all accounts should have complete access to data stored in the cloud. Limiting secure cloud access through internal firewalls boosts security. This ensures that even a compromised account cannot gain full access.
• Encryption: Encryption keeps data safe from unauthorized users. If an attacker steals an encrypted file, access is denied without finding a secret key. The data is worthless to anyone who does not have the key (see the short sketch after this list).
• Physical Security: Cloud data centers are highly secure. Certified data centers have 24-hour monitoring, fingerprint locks, and armed guards. These places are more secure than almost all on-site data centers.
Different cloud vendors use different approaches to each of these factors. For instance, some cloud storage systems keep user encryption keys from their users, while others give the encryption keys to their users.
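To make the encryption point above concrete, here is a minimal sketch of client-side encryption before upload; it assumes the third-party Python "cryptography" package (pip install cryptography) and made-up file names, and a production setup would keep the key in a proper key manager:

# Illustrative client-side encryption: without the key, a stolen ciphertext is useless.
from cryptography.fernet import Fernet

key = Fernet.generate_key()             # keep this secret, e.g. in a key manager
cipher = Fernet(key)

with open("report.pdf", "rb") as f:     # made-up file name
    plaintext = f.read()

ciphertext = cipher.encrypt(plaintext)  # this is what would be uploaded to the cloud

# Only someone holding the key can reverse the operation:
assert cipher.decrypt(ciphertext) == plaintext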
Best-in-class cloud infrastructure relies on giving users the ideal balance between access and security. If you trust users with their own keys, users may accidentally give the keys to an unauthorized person.
There are many different ways to structure a cloud security framework. The user must follow security guidelines when using the cloud.
For a security system to be complete, users must adhere to a security awareness training program. Even the most advanced security system cannot compensate for negligent users.
Security breaches are rarely caused by poor cloud data protection. More than 40% of data security breaches occur due to employee error. Improve user security to make cloud storage more secure.
Many factors contribute to user security in the cloud storage system. Many of these focus on employee training:
• Authentication: Weak passwords are the most common enterprise security vulnerability. Many employees write their passwords down on paper, which defeats the purpose. Multi-factor authentication can solve this problem (see the sketch after this list).
• Awareness: In the modern office, every job is a cybersecurity job. Employees must know why security is so important and be trained in security awareness. Users must know how criminals break into enterprise systems. Users must prepare responses to the most common attack vectors.
• Phishing Protection: Phishing scams remain the most common cyber-attack vector. These attacks attempt to compromise user emails and passwords. Then, attackers can move through business systems to obtain access to more sensitive files.
• Breach Drills: Simulating data breaches can help employees identify and prevent phishing attacks. Users can also improve response times when real breaches occur. This establishes protocols for handling suspicious activity and gives feedback to users.
• Measurement: The results of data breach drills must influence future performance. Practice only makes perfect if analysts measure the results and find ways to improve upon them. Quantify the results of simulation drills and employee training to maximize the security of cloud storage.
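As an illustration of the second factor mentioned above, here is a minimal sketch of a time-based one-time password (TOTP), the mechanism behind most authenticator apps; it assumes the third-party pyotp package (pip install pyotp) and is only a fragment of a full MFA flow:

# Minimal TOTP sketch: a short-lived code derived from a shared secret.
import pyotp

# Generated once at enrollment and shared with the user's authenticator app
# (usually via a QR code); stored server-side alongside the account.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()                 # what the user's app displays right now
print("Current one-time code:", code)

# At login, the server checks the submitted code against the same secret.
# Even a stolen password is useless without a valid, short-lived code.
print("Accepted:", totp.verify(code))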
Employee education helps enterprises successfully protect cloud data. Employee users often do not know how cloud computing works. Explain cloud storage security to your employees by answering the following questions:
• Where Is the Cloud Located?
Cloud storage data is located in remote data centers. These can be anywhere on the planet. Cloud vendors often store the same data in multiple places. This is called redundancy.
• How is Cloud Storage Different from Local Storage?
Cloud vendors use the Internet to transfer data from a secure data center to employee devices. Cloud storage data is available everywhere.
• How Much Data Can the Cloud Store?
Storage in the cloud is virtually unlimited. Local drive space is limited. Bandwidth – the amount of data a network can transmit per second – is usually the limiting factor. A high-volume, low-bandwidth cloud service will run too slowly for meaningful work.
• Does the Cloud Save Money?
Most companies invest in cloud storage to save money compared to on-site storage. Improved connectivity cuts costs. Cloud services can also save money in disaster recovery situations.
• Is the Cloud Secure and Private?
Professional cloud storage comes with state-of-the-art security. Users must follow the vendor’s security guidelines. Negligent use can compromise even the best protection.
• What are the Cloud Storage Security Best Practices?
Cloud storage providers store files redundantly. This means copying files to different physical servers. Cloud vendors place these servers far away from one another. A natural disaster could destroy one data center without affecting another one hundreds of miles away.
Consider a fire breaking out in an office building. If the structure contains paper files, those files will be the first to burn. If the office’s electronic equipment melts, then the file backups will be gone, too.
However, if the office saves its documents in the cloud, this is not a problem. Copies of every file exist in multiple data centers located throughout the region. The office can move into a building with Internet access and continue working.
Redundancy makes cloud storage security platforms highly resilient to failure. On-site data storage is far riskier. Large cloud vendors use economies of scale to keep user data intact. These vendors measure hard drive failure rates and compensate for them through redundancy.
Even without redundant files, only a small percentage of cloud vendor hard drives fail. These companies rely on storage for their entire income. They take every precaution to ensure users’ data remains safe.
Cloud vendors invest in new technology. Advances improve security measures in cloud computing, and new equipment improves results.
This makes cloud storage an excellent option for securing data against cybercrime. With a properly configured cloud solution in place, even ransomware poses little real threat: you can wipe the affected computers, restore from the cloud and start fresh. Disaster recovery planning is a critical aspect of cloud storage security.
Netscape Communications Corp created a subsidiary, Navio Communications Inc., to develop internet software for the consumer market - anything from cars to games consoles - aimed at non-PC users, but based on stripped-down versions of its Navigator web browser software. “The aim is to go where the PC can’t and is not likely to go”, said Netscape at the time. And where the PC couldn’t and wouldn’t go, Netscape obviously hoped Microsoft Corp. couldn’t either and wouldn’t try to follow.
[separator]
Navio signed agreements with IBM Corp., Oracle Corp., Nintendo Co. Ltd., Sony Corp., Sega Enterprises Ltd. and NEC Corp. The last four were clearly masters of the consumer marketplace, while IBM and Oracle were not such obvious participants. As to the type of products, details were sketchy, with Navio insisting it was just a company announcement. None of the partners was even present.
Navio’s chief executive, Wei Yen, identified three areas in which the products – due sometime in 1997 - would be used. The first was television-centric environments, such as game consoles, set-top boxes, and Digital Video Disk (DVD) systems. The second was communications devices, including Personal Digital Assistants (PDAs), cellular and other telephones. Yen said this category might merge into one device before long (you have to give credit to that vision, as it was fulfilled 11 years later, in 2007, by Apple with the first iPhone). And lastly, the information terminal, by which Yen meant network computers, kiosks and other home appliances. He said the first batch of all three categories of products was likely to be released around the same time in 1997.
The Navio software was based on Navigator technology and ran on devices with embedded, real-time operating systems or no operating system at all, supporting all the standards that Navigator supported. Navio software was modular and dynamically downloadable. Netscape was readying a modular version of Galileo, the next version of Navigator; the full version was due at the end of the year, with the modular version early the next year. The Navio modular software was at least connected with the modular Navigator work, according to the company - in other words, if a specific Navigator module was already there, it wouldn't be re-written for Navio. The plan was for the Navio browsers to reformat input for televisions and for devices such as phones that only have space for a few lines of text, with the Navigator team providing the knowledge as far as Java, security and objects were concerned. The whole software stack was to be extensible via plug-ins.
Marc Andreessen, Netscape’s co-founder and chief technology officer, reckoned the market for the Navio software was at least 500 million users in five years’ time. If all the PCs – about 240 million in 1996 - phones, consoles, pagers, cars, televisions and practically everything else that moved and everything that didn’t were included, then that number was clearly conservative, and pretty meaningless. But Netscape was fast out of the blocks in signing up all the games console companies that mattered, together with IBM and Oracle, as well as some others that it declined to talk about, even though the deals were not thought to be exclusive. Yen claimed, at the time, that the internet would be as important as electricity to consumer devices in the next century, and Andreessen predicted an internet device on every desk and in every backpack, eventually (again, credit to that foresight). Andreessen said that, because of the extra advertising opportunities, the potential for giving consumer internet devices away for free was even greater than with cellular phones, which were already given away in many markets and were also ideal internet devices. He wondered whether some sort of consumer internet access device might be bundled on the front of a magazine, or even come with a pizza box.
Oracle bought a majority stake in Navio in 1997. The company was assimilated into the huge Oracle machine, and its dream of developing a pervasive ecosystem of inexpensive internet-connected devices based on the Navio browser/OS never took flight. It never released a product - device, browser, operating system or otherwise - and it never published a roadmap for its supposed products and third-party integrations.
Nowadays, their unfulfilled dream is called IoT :)
As we anticipated in the first episode of this small foray into the history of AI, in this second part we will try to present some essential theoretical achievements in the field. We consider the most appropriate way to do this is to walk through two algorithms representative of their time in computational and cognitive science. And who could be more appropriate to start with than John McCarthy, the man who introduced the term "Artificial Intelligence" (at the famous Dartmouth conference in the summer of 1956, which also marked the beginning of AI as a field)?
[separator]
One of the most influential American researchers of the time, McCarthy contributed massively to related fields such as mathematics, logic, information technology, cognitive science and artificial intelligence. As it is very difficult to mention all his major contributions here, we limit ourselves to listing some of them, such as:
• the creation in the 1950s of the LISP language which, based on lambda calculus, became the preferred language of the AI community, and the invention of the concept of the "garbage collector" in 1959 for LISP
• participation in the committee that gave birth to the ALGOL60 language, for which he proposed in 1959 the use of recursion and conditional expressions
• significant contribution in defining three of the very earliest time-sharing systems (Compatible Time-Sharing System, BBN Time-Sharing System, and Dartmouth Time-Sharing System)
• being the first promoter of the idea of a computer utility, a concept prevalent from the 60s to the 90s which has returned to a new youth today in various forms: cloud computing, application service providers, etc.
But his genius (for which McCarthy is referred to as “Father of AI”) came to light even better through papers such as "Artificial Intelligence, Logic and Formalizing Common Sense", "Making Conscious Robots of their Mental States", "The Little Thoughts of Thinking Machines", "Epistemological Problems of Artificial Intelligence", "On the Model Theory of Knowledge", "Creative Solutions to Problems", or "Appearance and Reality: A challenge to machine learning" - papers that we hope will arouse the curiosity of many of our readers.
From the article "Free Will - Even for the Robots," we present below a sample of his deterministic approach to free will. The aim was to propose a theory of Simple Deterministic Free Will (SDFW) in a deterministic world. The theory splits the mechanism that determines action in two: it first considers the possible actions and their consequences, and only then decides which action is preferred. AI requires the formal expression of such phenomena, here through the mathematical logic of the situation calculus. The equation:
s’ = Result(e, s)
asserts that s’ is the situation that results when event e occurs in the situation s. Since there may be many different events that can occur in s, and the theory of the function Result does not say which occurs, the theory is nondeterministic. Having some preconditions for the event to occur, we will get to the formula:
Precond(e, s) → s’ = Result(e, s).
McCarthy added a formula Occurs(e, s) to the language that can be used to assert that the event e occurs in situation s. We have:
Occurs(e, s) → (Next(s) = Result(e, s)).
Adding occurrence axioms makes a theory more deterministic by specifying that certain events occur in situations satisfying designated conditions. The theory still remains partly non-deterministic, but if there are occurrence axioms specifying what events occur in all the possible situations, then the theory becomes deterministic (i.e. has linear time).
We can now give a situation calculus theory for SDFW illustrating the role of a non-deterministic theory in determining what will deterministically happen, i.e. by saying what choice a person or machine will make.
In the following formulas, lower-case terms represent variables and capitalized terms represent constants. Let us assume that an actor has a choice of just two actions, a1 and a2, that may be performed in situation s. This means that either the event Does(actor, a1) or Does(actor, a2) occurs in situation s, according to which of Result(Does(actor, a1), s) or Result(Does(actor, a2), s) the actor prefers.
The formulas that declare that an actor will do the preferred action are
(1)
Prefers(actor, s1, s2) means that the actor prefers s1 to s2 (and has therefore made a choice), and this is what makes the theory deterministic.
Now let us take a non-deterministic theory of “greedy John”:
It is obvious that greedy John prefers a situation in which he has greater wealth, obtained by taking the right action from situation S0 to situation S1. From equations (1)-(3) it can be inferred
(4)
Occurs(Does(John, A1), S0)
Prefers(actor, s1, s2) means that the actor prefers s1 to s2, and only two actions were used in order to keep the formula for Choose as short as possible. This briefly illustrates the role of the non-deterministic theory of Result within a deterministic theory of what occurs. Equation (1) represents the non-deterministic part, the theory of Result used to assess which action leads to the better situation. Equation (2) represents the deterministic part, which indicates which action occurs.
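Based on the description above, and reusing the situation-calculus notation already introduced, formulas (1)-(3) can plausibly be rendered roughly as follows; this is an illustrative reconstruction rather than McCarthy's exact wording, and the Wealth function and the if-then-else form of Choose are our own assumptions:

Choose(actor, a1, a2, s) = if Prefers(actor, Result(Does(actor, a1), s), Result(Does(actor, a2), s)) then a1 else a2   (1)
Occurs(Does(actor, Choose(actor, a1, a2, s)), s)   (2)
Wealth(s1) > Wealth(s2) → Prefers(John, s1, s2)   (3)

from which (4), Occurs(Does(John, A1), S0), follows when A1 leaves John wealthier than A2 would.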
McCarthy makes four conclusions:
• 1. “Effective AI systems, e.g. robots, will require identifying and reasoning about their choices once they get beyond what can be achieved with situation-action rules (i.e. chess programs always have).
• 2. The above theory captures the most basic feature of human free will.
• 3. Result(a1, s) and Result(a2, s), as they are computed by the agent, are not full states of the world but elements of some theoretical space of approximate situations the agent uses in making its decisions. Part of the problem of building human level AI lies in inventing what kind of entity Result(a, s) shall be taken to be.
• 4. Whether a human or an animal uses simple free will in a type of situation is subject to experimental investigation.”
We can consider that formulas (1) and (2) illustrate a person making a choice. They say nothing about the person knowing that it has choices, or about preferring situations in which more choices are available; for situations where these phenomena need to be taken into consideration, SDFW, being only a partial theory, has to be extended. The importance of this theory is enormous, both in terms of the interest it raised in understanding human cognitive processes and as the aggregated result of some of the essential minds of the time who supported McCarthy in its development.
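A toy Python sketch of the same idea - a non-deterministic Result describing what each action would lead to, and a deterministic choice of the preferred outcome for "greedy John" - might look like this (all names and numbers are illustrative, not McCarthy's own formalism):

# Toy illustration of SDFW: Result says what each action would lead to,
# Prefers compares outcomes, and Choose deterministically picks the action.
def result(action, situation):
    """What the world would look like if 'action' were performed."""
    new_situation = dict(situation)
    if action == "A1":
        new_situation["wealth"] += 10   # A1 makes John richer
    elif action == "A2":
        new_situation["wealth"] -= 5    # A2 makes him poorer
    return new_situation

def prefers(s1, s2):
    """Greedy John prefers whichever situation leaves him wealthier."""
    return s1["wealth"] > s2["wealth"]

def choose(a1, a2, situation):
    """The deterministic part: the preferred action is the one that occurs."""
    return a1 if prefers(result(a1, situation), result(a2, situation)) else a2

S0 = {"wealth": 100}
print("Occurs:", choose("A1", "A2", S0))   # -> Occurs: A1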
The second algorithm proposal comes from the well-known Alan Turing. One of the pioneers and most prominent promoters of theoretical computer science, Alan Turing was a British mathematician, logician, cryptanalyst, philosopher and theoretical biologist. Perhaps his best-known contribution to the field is the Turing Machine - a mathematical model that has received numerous theoretical variants and alternatives over time, as well as practical implementations. To fully understand the context of its creation, it must first be mentioned that in the 1930s there were no computers, but this did not prevent the scientists of the time from proposing extremely bold theoretical problems, such as the Halting Problem.
The Turing Machine has the following parts:
1. an infinite roll of tape over which the symbols can be written, deleted and rewritten
2. the head that moves left and right on the tape as the symbols are written, rewritten or deleted (similar to the read/write head of a hard disk drive)
3. the state register that represents a memory area which stores the state of the machine
The machine can read the symbol on the tape at the current position, then write a symbol and then reposition the head to the left or right. Although it implements only these simple operations, we will show in what follows that this model – and therefore the Turing Machine – provides the theoretical basis for implementing any algorithm in any known language. The machine's instruction table is presented below.
Current state | Current symbol | Action     | Move  | Next state
S0            | “0”            | Write “1”  | Right | S1
S0            | “1”            | Write “0”  | Right | S1
S1            | “0”            | Write “0”  | Right | S0
S1            | “1”            | Write “1”  | Right | S0
The first two columns determine the input combinations that the machine can receive, consisting of the state of the machine and the symbol read. The next three columns determine the action performed by the machine, consisting of the symbol to be written, the direction of movement of the head and the future state of the machine. For example, the first data row in the table above tells us that, being in state S0 with the head positioned on the symbol “0”, the machine will write the symbol “1” in that position, after which it will move to the right, transitioning to state S1.
The analysis of the table with instructions shows the following:
• from state S0, the symbols 0 and 1 are interchanged
• from state S1, the symbols 0 and 1 remain the same
• based on the two points above, we deduce that a string such as "111111" will be processed into the string "010101".
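A few lines of Python are enough to simulate the machine in the first table and confirm this deduction (the dictionary encoding of the rules and the use of +1 for a move to the right are our own conventions):

# Small simulator for the first instruction table: S0 flips the symbol it
# reads, S1 copies it, and both move the head to the right.
RULES = {
    # (state, symbol): (write, move, next_state)
    ("S0", "0"): ("1", +1, "S1"),
    ("S0", "1"): ("0", +1, "S1"),
    ("S1", "0"): ("0", +1, "S0"),
    ("S1", "1"): ("1", +1, "S0"),
}

def run(tape, state="S0"):
    tape = list(tape)
    head = 0
    while head < len(tape) and (state, tape[head]) in RULES:
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape)

print(run("111111"))   # prints 010101, as deduced above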
Let's take a more complex example, which allows us to perform unary additions like "000+00=00000", the equivalent of "3+2=5". For this we will consider the following instruction table - a variant of the instruction table above.
Current state | Current symbol | Action         | Move  | Next state
S0            | “0”            | Write “Blank”  | Right | S1
S0            | “+”            | Write “Blank”  | Right | S5
S1            | “0”            | Write “0”      | Right | S1
S1            | “+”            | Write “+”      | Right | S2
S2            | “0”            | Write “0”      | Right | S2
S2            | “Blank”        | Write “0”      | Left  | S3
S3            | “0”            | Write “0”      | Left  | S3
S3            | “+”            | Write “+”      | Left  | S4
S4            | “0”            | Write “0”      | Left  | S4
S4            | “Blank”        | Write “Blank”  | Right | S0
Applying the calculation method from the previous example, we can see that the machine does the following major steps:
• STEP 1: replaces the first “0” (in the group of three “0”s) with a blank space
• STEP 2: moves to the end of the string (past the group of two “0”s)
• STEP 3: adds a “0” at the end of the string (after the last “0”)
• STEP 4: returns to the beginning of the string and resumes STEP 1
• STEP 5: if the first symbol is "+" it is removed and the algorithm ends successfully.
We start with a step-by-step addition. The initial input (written on the tape) is "000+00" and, once the machine is started, the head is positioned on the first "0", in state S0. S0 has two transitions, one for “0” and another for “+”: the machine reads the first "0", replaces it with a blank space and then moves the head one position to the right, into S1. From S1 the machine can again take two transitions. The first of these is a loop: every "0" is rewritten as "0" while the head keeps moving to the right, keeping the machine in S1. The transition to S2 is made once the "+" is passed, after which the head keeps moving to the right over the remaining "0"s while staying in S2. Once the blank space at the end of the string is found, the head replaces it with a “0” and moves to the left, transitioning to S3. From S3 the machine jumps over all the “0”s again, but this time moving to the left, until the head reaches a "+". Once "+" is reached, the machine moves one more space to the left and transitions to S4. From S4 the machine jumps over all the “0”s moving left and, when it reaches the blank space at the beginning of the row, it moves to the right and returns to S0 - that is, the entire loop is repeated. In fact, the machine replaces a “0” in front of the "+" with a blank space, moves its head to the end of the string and adds a "0" there. Then it goes back to the first “0” on the left and repeats. It keeps doing this until all the "0" characters to the left of "+" have been replaced with blank spaces.
• Loop 1: "000+00"
• Loop 2: “00+000”
• Loop 3: “0+0000”
• Loop 4: “+00000”
• Loop 5: “00000”.
After four loops, the machine is back in S0, but this time the head reads a "+". In the fifth loop the machine replaces the "+" with a blank space and moves to S5, the final state. The conclusion is that, without a real computer, through a simple set of tools and rules, we can build a machine that can calculate! And it will work for strings of "0"s of any length.
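The same simulation idea, applied to the addition table, reproduces this result; in the sketch below "_" stands in for a blank cell, S5 is treated as the halting state, and the tape grows on demand:

# Unary addition on a simulated Turing machine: "000+00" becomes "00000".
RULES = {
    ("S0", "0"): ("_", +1, "S1"),
    ("S0", "+"): ("_", +1, "S5"),   # S5 is the halting state
    ("S1", "0"): ("0", +1, "S1"),
    ("S1", "+"): ("+", +1, "S2"),
    ("S2", "0"): ("0", +1, "S2"),
    ("S2", "_"): ("0", -1, "S3"),
    ("S3", "0"): ("0", -1, "S3"),
    ("S3", "+"): ("+", -1, "S4"),
    ("S4", "0"): ("0", -1, "S4"),
    ("S4", "_"): ("_", +1, "S0"),
}

def add(expr):
    tape = list(expr)
    head, state = 0, "S0"
    while state != "S5":
        if head == len(tape):          # grow the "infinite" tape on demand
            tape.append("_")
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).strip("_")

print(add("000+00"))   # prints 00000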
The algorithms presented above were in a simplified form, and of course, perhaps more examples would have been needed for a thorough understanding of them. We hope, however, that the chosen examples will arouse your curiosity to read more about them and their authors. We will return with a third and last part of this short history of AI with some examples of the most representative achievements in the field, real turning points in the human-machine relationship.
The Enigma machine was the creation of Dr. Arthur Scherbius. The device was capable of encrypting messages for secure communications. In 1923 he set up his Chiffriermaschinen Aktiengesellschaft (Cipher Machines Corporation) in Berlin to manufacture this product.
[separator]
The German military, however, was producing its own versions. The German navy introduced its version in 1926, followed by the army in 1928 and the air force in 1933.
The military Enigma version allowed an operator to type in a message, then scramble it by means of three to five notched wheels, or rotors, which displayed different letters of the alphabet. The receiver needed to know the exact settings of these rotors in order to reconstitute the coded text. The Poles managed to crack the commercial Enigma versions by reproducing the internal parts of the machine, but that was not useful for decoding the military versions.
During World War II, the military versions of Enigma were heavily used by the Germans, who were convinced that they could not be decoded. The Allies established a special division at Bletchley Park, Buckinghamshire, whose task was to decode German communications. The best mathematicians were recruited there and, with the intelligence received from the Poles, they built early computing machines tasked with working out the vast number of permutations in the Enigma settings. In the meantime, the Germans kept upgrading their machine, improving the hardware used for setting the code in each device. The use of daily codes for the machine also made the Allies’ job a lot harder.
One of the brilliant mathematicians involved in decoding Enigma was Alan Turing.
Born in 1912 in London, he studied at the universities of Cambridge and Princeton. Turing played a key role in inventing, along with fellow code-breaker Gordon Welchman, a machine known as the „Bombe”. This device helped to significantly reduce the work of the code-breakers.
From mid-1940, German Air Force signals were being read at Bletchley, and the intelligence gained from them proved quite helpful. From 1941, messages sent using the army's Enigma were also being read. The one used by the German navy, on the other hand, was not that easy to crack.
Capturing Enigma machines and code books from various German units helped decipher communications, but with a considerable delay. To compensate for this, the Allies started hunting for ships and planes that carried Enigma codes, in order to decode communications faster.
In July 1942, Turing developed a complex code-breaking technique he named „Turingery”. This method helped the team at Bletchley understand another device that enciphered German strategic messages of high importance - the „Lorenz” cipher machine. Bletchley division’s ability to read these messages contributed greatly to the Allied war effort.
Alan Turing’s legacy came to light long after his death. His impact on computer science was widely acknowledged: the annual „Turing Award” has been the highest accolade in that industry since 1966. But the work done at Bletchley Park – and Turing’s role there in cracking the Enigma code – was kept secret until the 1970s. Actually, the full story was not known until the 1990s.
It has been estimated that the efforts of Turing and his fellow code-breakers shortened the war by several years. What is certain is that they saved countless lives and helped determine the course and outcome of the conflict.
Computer hacking – a fascinating subject populated with tales from the scholars of trivia, who often heard about hacking on TV, saw it in a movie or acquired a couple of certifications which they believe allow them to call themselves hackers.
[separator]
We give you hacking insights based on experience, not on hypothetical scenarios created in labs. How can a hacker exploit corona? In the times of the COVID-19 crisis, forecasts estimated that cyber-crime would increase by 400%. Even those estimations turned out to be low: attacks actually increased far more than that.
Let's delve into the subject. Social engineering is probably the most potent way of delivering attack payloads to corporate environments whose users' only training consists of less than mentally challenging security mantras (change your password, don't click on these links, click on these other links, etc.). Furthermore, the psychological nature of a crisis such as the one we are facing now excites, at the very least, a basic human trait: curiosity. Throw in curiosity and a cunning manner of delivering a message, and the result is called “victims”. Let's analyze the following examples, which we introduce in a somewhat random fashion, but which will make sense in the end.
The crisis pushed companies to adopt working from home as the way to move forward. How this move by itself can be exploited by the clever hacker is self-evident. Hackers identify the first element that creates an exploit: confusion. A study indicates that oral communication, when passed along a chain of more than 5 people, dilutes itself to 20% or less. It is quite easy to imagine an IT department training: “Guys, do not click on phishing links. No spamming links. We may update our VPN to incorporate multi-factor authentication.” Most people are unable to identify phishing links. It can be quite hard sometimes, as some of these links are actually legitimate, but their purpose is to lead to spear-phishing. Please consult https://www.phishtank.com/ and test your phishing “street smarts”. Then, people are told that they may update their VPN. Well, that right there can let all hell break loose. If users receive an email from their IT department asking them to download a new VPN client, 95% of them will attempt to do it, while only 30% of that 95% will succeed in installing the malicious package (for lack of computer literacy when it comes to installing programs).
Imagine the next scenario: a hacker wants to break into a bank, but its security is quite strong, and he may not want to build mathematical models of deception for its network-analysis software. What can he do? Quite simple. All the employees' profiles are listed on LinkedIn. Great. What's next? Gathering social media information on these people, he can somehow obtain a score of who is prone to a degree of hypochondria. Then he emails them, posing as the hospital, and tells them that, according to their records, there is a high probability that they are infected with COVID-19 and that they may want to register for a free COVID test at its website, https://ExampleHospital.com, where they will be asked to fill in their address, DOB, phone number and email, and eventually fax in or upload a copy of their NI document. The skilled operator (hacker) will now go and brute-force the Wi-Fi password of their house. Or he might get more creative and eventually offer some chat or support software which enables the victim to talk to others in their category or consult with a live doctor. Of course, the “get-you-well” software is nothing more than a trojan, a RAT (remote administration tool).
This is just a casual example of what a hacker might do. But let’s consider the following scenario:
The employees of company X receive an email from the IT department stating that their picture has to be uploaded to the new SharePoint directory, for the creation of a work-from-home directory and the distribution of COVID-19 testing toolkits. The attachment that supposedly contains the instructions might carry ransomware, adware or some other malware. Usually, the common criminal will send ransomware, the average criminal will send some malware/adware, and the smart criminal will send an APT whose purpose is to lie dormant and probably redirect terabytes of Google traffic to link-shortening services for their benefit, a situation that can go on for years.
As we can see, the COVID-19 crisis, if played on the right soft psychological side of people, can have devastating effects on a company's security systems. As always, knowledge is power. At Metaminds, we pay close attention to every requirement our clients express and make sure we address their concerns with flawless, custom-designed solutions that ensure the safety of their operations.
Trans-Atlantic television and other communications became a reality as the Telstar communications satellite was launched. A product of AT&T Bell Laboratories, the satellite was the first orbiting international communications satellite that sent information to tracking stations on both sides of the Atlantic Ocean. Initial enthusiasm for making phone calls via the satellite waned after users realized there was a half-second delay as a result of the 25,000-mile transmission path.
[separator]
Even if nowadays a phone call seems like a regular thing, IT professionals had to overcome many difficulties in the past to make fixed-line phone calls a reality. Today we are concerned with making our conversations safer by addressing the various security breaches we are confronted with, but back then people had other issues.
Quick recap for the millennials: long before everyone had a smartphone or two, telephones were implemented quite differently than they are today. Most telephones had real, physical buttons. Even more bizarrely, these phones were connected to other phones through physical wires. Weird, right? These were called “landlines”, a technology that is still employed in many households around the world.
It gets even more bizarre. Some phones were wireless (almost like your smartphone), but they couldn't get a signal more than a few hundred feet away from your house. These were “cordless telephones”. Hackers have been working on deconstructing the security behind these cordless phones for a few years now, and they have found that cordless phones aren't secure at all.
While nothing is 100% secure, many people thought that DECT and 5.8 GHz phones were safe, at least more so than the cordless phones from the '80s and '90s. While DECT has been broken for a long time, 5.8 GHz phones were considered safer than 900 MHz phones, as scanners are harder to come by in the microwave bands, because very few people have a duplex microwave transceiver sitting around. But everything is bound to happen eventually.
With the advent of cheap SDR, hackers demonstrated that listening to and intercepting any such phone call is actually possible. Using a duplex microwave transceiver (quite cheap, at about $300 for the intended purpose), they freely explored the radio system inside these cordless phones. After pointing the transceiver at a cordless phone, hackers found that the phone technically didn't operate in the 5.8 GHz band: control signals, such as pairing a handset to a base station, happened at 900 MHz. There, a simple replay attack was enough to get the handset to ring. It gets worse: simply by looking at the 5.8 GHz band with the transceiver, they found an FM-modulated voice channel whenever the handset was on. That's right: the phone transmits the voice signal without any encryption whatsoever.
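To give a sense of how little stands between an unencrypted analog FM channel and an eavesdropper, the hedged sketch below performs classic quadrature FM demodulation on complex IQ samples, the kind of data an SDR hands over. A synthetic FM-modulated tone stands in for a real capture, and the sample rate and deviation are assumptions made up for the demo.

```python
# Quadrature FM demodulation of IQ samples, the core step in recovering an
# unencrypted analog voice channel from an SDR capture. The "voice" here is
# a synthetic 440 Hz tone so the sketch is self-contained and runnable.
import numpy as np

fs = 240_000                           # baseband sample rate (assumed)
t = np.arange(0, 0.5, 1 / fs)          # half a second of signal
voice = np.sin(2 * np.pi * 440 * t)    # stand-in for the handset audio
deviation = 5_000                      # FM frequency deviation in Hz (assumed)

# Build the FM signal as an SDR would deliver it: a complex baseband carrier
# whose instantaneous frequency follows the audio.
phase = 2 * np.pi * deviation * np.cumsum(voice) / fs
iq = np.exp(1j * phase)

# Classic FM discriminator: the phase difference between consecutive samples
# is proportional to the instantaneous frequency, i.e. the modulating audio.
recovered = np.angle(iq[1:] * np.conj(iq[:-1])) * fs / (2 * np.pi * deviation)

print("correlation with the original audio:",
      round(float(np.corrcoef(recovered, voice[1:])[0, 1]), 4))
```

On a real capture, the same few lines applied to the samples around the handset's channel would hand the conversation back as audio, which is exactly the point the researchers were making.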
This isn’t the first time hackers found a complete lack of security in cordless phones. A while ago, they explored the DECT 6.0 standard, a European cordless phone standard for PBX and VOIP. There was no security there, either.
It would be chilling if landlines were as widespread today as they were some 20 years ago, because the tools needed to perform a landline hack are freely available and thoroughly documented.
Few people realize that making movies like Tron: Legacy is also a huge data project. Doing a movie with that much computer-generated content creates an enormous amount of data, an amount that is now measured in petabytes. Also, because the computer-generated content is integrated into the filmed content, the CGI companies involved usually work at some point with a more or less finished version of the movie. That makes them a prime target for hacking attempts.
[separator]
The HBO hack of 2017, when Game of Thrones scripts and episodes of Curb Your Enthusiasm and Ballers were released online before their air dates, caused chaos for the premium cable network. The hackers were motivated by greed. The organization that went by the name Mr. Smith was seeking a ransom in the range of $6 million to prevent the release of this highly sensitive information. And this data breach is far from the first example the entertainment industry has faced.
The Sony hack of 2014, in which thousands of confidential company documents and emails were released, had a long-lasting impact on the company. It resulted in the ouster of Amy Pascal, head of Sony Pictures Entertainment, turned „The Interview” into a box-office bomb, resulted in a slew of lawsuits and, in general, caused a lot of pain and embarrassment to a lot of people.
And then there's the release of Quentin Tarantino's „The Hateful Eight” script. The Oscar-winning director closely guards his material. When it turned out that someone had leaked an early draft of the Western whodunit, Tarantino actually considered shelving the project altogether. Even though he went on with making the movie after all, the incident underscores an issue that many in Hollywood face, whether working in production or at a studio: how to ensure the security of information and intellectual property?
A movie or TV production can employ hundreds of people. And with each production there are countless documents and files – scripts, budgets, payroll documents and video – that could be very detrimental to the production and its staff if leaked out. Knowing hackers are looking for high-value targets, having a strong data security system in place is of the utmost importance. Unfortunately, most in the entertainment industry – be they productions or studios – aren’t using the enterprise-grade protection they need to keep their information safe. Especially when it comes to productions, they’re simply using the most rudimentary of storage and security services.
To secure such a great amount of movie data against hacking and premature leaking, Hollywood had to embrace digital security.
As many other industries before it, Hollywood turned to a new class of technology companies that, for the last few years, have been offering ways to manage the data slipping into employees' personal smartphones and Internet storage services. They wrap individual files in encryption, passwords and monitoring systems that can track who is doing what with sensitive files.
The most sensitive Hollywood scripts were — and, in many cases, still are — etched with watermarks, or printed on colored and even mirrored paper to thwart photocopying.
Letter spacing and minor character names were switched from script to script to pinpoint leakers. Plot endings were left out entirely. The most-coveted scripts are still locked in briefcases and accompanied by bodyguards whose sole job is to ensure they don’t end up in the wrong hands.
But over the last decade, such measures have begun to feel quaint. Watermarks can be lifted. Color copiers don’t care what color a script is. Even scripts with bodyguards linger on a computer server somewhere.
And once crew members started using their personal smartphones on set, people started leaving with everything they had created for the movie production.
So the movie studios had to employ security solutions that give file creators the ability to manage who can view, edit, share, scan and print a file, and for how long. If hackers steal the file off someone’s computer, all they will see is a bunch of encrypted characters.
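A minimal sketch of that idea is shown below, assuming a generic authenticated-encryption library rather than any studio's actual product: a file only ever circulates in encrypted form, and only whoever is handed the key (issued or revoked by some policy service) can turn it back into a readable script. The file name and contents are made up for the demo.

```python
# Sketch: files circulate only in encrypted form; access is controlled by who
# is given the key. Uses Fernet (authenticated encryption) from the widely
# available "cryptography" package.
from cryptography.fernet import Fernet

def protect_file(path: str, key: bytes) -> None:
    with open(path, "rb") as f:
        plaintext = f.read()
    token = Fernet(key).encrypt(plaintext)          # encrypt-and-authenticate
    with open(path + ".locked", "wb") as f:
        f.write(token)

def open_file(locked_path: str, key: bytes) -> bytes:
    with open(locked_path, "rb") as f:
        token = f.read()
    # Without the key (withheld, revoked or expired by the policy service),
    # the file is just "a bunch of encrypted characters", as described above.
    return Fernet(key).decrypt(token)

# Create a stand-in script file, lock it, then open it with the issued key.
with open("script_draft.txt", "wb") as f:
    f.write(b"INT. SOUNDSTAGE - NIGHT. The ending nobody is supposed to know.")
key = Fernet.generate_key()        # in practice issued per user and per file
protect_file("script_draft.txt", key)
print(open_file("script_draft.txt.locked", key))
```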
Also, some Hollywood studios are removing their movie-editing systems from the Internet, employing a process known as “air-gapping”, so that if hackers breach their internal network, they can't use that access to steal the data.
One of the quirkier features that some studios use is adding a digital spotlight view that mimics holding a bright flashlight over a document in the dark. Everything beyond the moving circular spotlight is unreadable. The feature makes it difficult for anyone peering over your shoulder — or a hacker pulling screen shots of your web browser — to read the whole document.
In this first article of our series dedicated to the brief history of AI, we will focus on essential achievements in this field in the pre-computer age. The dominant research method at the time was to look to nature for ideas for solving hard problems. In the absence of an understanding of how natural systems function, the research could only be experimental. So the most daring researchers approached the creation of mobile automatons (pre-robots) as the first attempt to create artificial intelligence.
[separator]
Grey Walter’s “Tortoise”
Born in the United States but educated in England, Walter failed to obtain a research fellowship in Cambridge and pursued neurophysiological research in various places around the world. Heavily influenced by the work of the Russian physiologist Ivan Pavlov and of Hans Berger (the inventor of the electroencephalograph for measuring electrical activity in the brain), Walter made several discoveries in the field of brain topography using his own version of the EEG machine. The most notable was the introduction of triangulation as a method of locating the strongest alpha waves within the occipital lobe, thus facilitating the detection of brain tumors or of lesions responsible for epilepsy. He pioneered EEG-based brain topography using a multitude of spiral-scan CRTs coupled to high-gain amplifiers.
Walter remains famous as an early contributor to the AI field mainly for building some of the first mobile automatons in the late ’40s, named tortoises (after the tortoise in “Alice in Wonderland”) because of their slow speed and shape. These battery-powered automatons were prototypes built to test his theory that a small number of cells can induce complex behavior and choice. As a very simple model of the nervous system, they implemented a two-neuron architecture by incorporating only two motors, two relays, two valves, two condensers and one sensor (ELSIE had a light sensor and ELMER a touch sensor).
ELSIE scanned the surroundings continuously with its rotating photoelectric cell until a light source was detected. If the light was too bright, it moved away; otherwise, ELSIE moved toward the light source. ELMER explored the surroundings as long as it didn’t encounter any obstacles; otherwise, ELMER retreated after the touch sensor had registered a contact. Both versions of the tortoise moved toward an electric charging station when the battery level was low.
Walter noted that the automatons “explore their environment actively, persistently, systematically, as most animals do”. This is what happened most of the time, except when a light source was attached to ELSIE’s nose: the automaton started “flickering, twittering and jigging like a clumsy narcissus”, and Walter concluded that this was a sign of self-awareness. Even though many scientists today believe that robots will not achieve self-awareness, Walter’s experiment succeeded in proving that complex behaviours can be generated using only a few components and that biological principles can be applied to robots.
Subsequent developments, some remaining only at a theoretical stage, promised substantial improvements in the direction of intelligent behaviour, with Walter trying to add “learning” skills, even if only in a primary form such as Pavlovian conditioning. For example, incorporating an auditory sensor and blowing a whistle immediately before contact between ELMER and an obstacle would cause ELMER to subsequently perform an obstacle-avoidance maneuver before contact occurred, provided it “heard” the whistle. Although it seems that Walter materialised this attempt, the echo was apparently not noticeable in the scientific world at that time.
Johns Hopkins’ “Beast”
Another well-known realisation of a mobile automaton is the “Beast” project from the ’60s, built by a team of engineers from the Johns Hopkins University Applied Physics Laboratory, including Ron McConnell (Electrical Engineering) and Edwin B. Dean, Jr. (Physics).
With a height of half a meter, a diameter of over 200 cm and a weight of almost 50 kilograms, the “Beast” was built to perform only two tasks: explore the surroundings and survive on its own.
Initially equipped with physical switches, the “Beast” moved “freely”, following the white walls of the laboratory and avoiding the potential obstacles it encountered. When the battery level was low, the “Beast” “looked for” a black wall socket and plugged itself in for power. Without a central processing unit, its control circuitry consisted of multiple transistor modules that controlled analogue voltages; three types of transistors allowed three classes of tasks:
– making a decision when a sensor was activated, by emulating Boolean logic;
– specifying a period in which to do something, by creating timing gates;
– controlling the pressure of the automaton’s arm and the charging mechanism, by using power transistors.
A second version also received a photoelectric cell in addition to an improved sonar system. With the help of two ultrasonic transducers, the “Beast” could now determine distance, its location within the perimeter and obstructions along the path, thus exhibiting significantly more complex “behaviour” than Walter’s tortoises. Performances such as stopping, slowing down or bypassing an obstruction, or recognising doors, stairs, installation pipes, hanging cables and people and taking the appropriate actions, are perhaps the most significant technical achievements of the pre-computer age.
In his response to Bill Gates, who predicted in 2008 that the “next” hot field would be robotics, McConnell humorously described their work from the ’60s: “The robot group built two functioning prototypes that roamed and “lived” in the hallways of the lab, avoiding hazards such as open stairwells and doors, hanging cables and people while searching for food in the form of AC power on the walls to recharge their batteries. They used the senses of touch, hearing, feel and vision. Programming consisted of patch cables on patch boards connecting hand-built logic circuits to set up behaviour for avoidance, escape, searching and feeding. No integrated circuits, no computers, no programming language. With a 3-hour battery life, the second prototype survived over 40 hours on one test before a simple mechanical failure disabled it.”
Ashby’s “Mobile Homeostat”
Indeed, the most intriguing prototype that saw the light of day before the computer age was The Homeostat¹, created in 1948 by W. Ross Ashby, Research Director at the Barnwood House Hospital in Gloucester, and presented at the Ninth Macy Conference on Cybernetics in 1952. The Homeostat contained four identical control switch-gear kits that came from WW2 bombs (with inputs, feedback and magnetically driven, water-filled potentiometers), each transformed into an electro-mechanical artificial neuron. The purpose of this prototype was extremely ambitious for its time, namely to be a model for all types of behaviour, by addressing all living functions.
During the presentation, The Homeostat was able to perform tasks that indicate some cognitive abilities, i.e., the ability to learn and adapt to the environment. But the approach was at least strange: while other automatons of the time exhibited a dynamic character by exploring the environment, the goal of the Homeostat was to reach the perfect state of balance (i.e. homeostasis). This approach was intended to support the author’s principle of ultrastability and his Law of Requisite Variety.
Based on the concept of “negative feedback”, the Homeostat moved incrementally along the path between its current state and the final state of equilibrium, the steps representing the automaton’s concrete responses to changes in the environment (which affected the state of equilibrium). In detail, the principle of the “Law of Requisite Variety” (as the author called it) stated that, in order to counter the variety of disturbances coming from the external environment, a system needs a “goal-seeking” strategy and a wide variety of possible responses. For the animal world, a final goal like “no goal” was equivalent to achieving immortality. The part of “cognitive intelligence” embedded in the automaton’s activity was precisely this “goal-seeking” approach and, from a technical standpoint, “its principle is that it uses multiple coils in a milliammeter & uses the needle movement to dip in a trough carrying a current, so getting a potential which goes to the grid of a valve, the anode of which provides an output current”. But the audience was not very convinced by this principle and, on the whole, its activity could be classified as that of a “goal-less goal-seeking machine”. It was Grey Walter who called The Homeostat a “Machina sopor”, describing it as a “fireside cat or dog which only stirs when disturbed, and then methodically finds a comfortable position and goes to sleep again”, in contrast with his own creation, the Tortoise, called “Machina speculatrix”, which embodies the idea that “a typical animal propensity is to explore the environment rather than to wait passively for something to happen”. It was later learned that Alan Turing had advised Ashby to implement a simulation on the ACE² computer instead of building a special machine.
However, The Homeostat made a significant comeback in the 1980s, when a team of cognitive researchers from the University of Sussex led by Margaret Boden created several practical robots incorporating Ashby’s ultrastability mechanism. Boden was fascinated by the idea of modelling an autonomous, goal-oriented creature, arguing that the future of cognitive science is one based on The Homeostat.
Conclusions
The cybernetics of the ’60s is long gone, and the current possibilities of computer simulation are infinitely more capable than anything that could be imagined or created by the geniuses of those times, and within reach of any school student. Suffice it to say that the level of tropism of the Tortoises is equivalent to that of a simple bacterium, and that the Beast matches the coordination abilities of a large nucleated cell like Paramecium, a bacterial hunter; or that what was then presented as a continuous adaptation of responses to external stimuli is far from what we understand and have today in terms of learning, supervised or unsupervised. But this evolution has not been just the result of the appearance of computer technology and its fantastic development. As I mentioned in the introduction, the history of AI overlaps the history of cognitive science, so today's level of AI owes much to achievements in multiple fields, including linguistics, psychology, philosophy, neuroscience, anthropology and, of course, mathematics.
Simply put, even though in most cases they were considered successes, we can say that these mobile automatons of the pre-computer era were nothing more than experiments carried out before the theoretical research, not during it.
The rudimentary means of construction, the lack of a common language in the field and the mismatch between the models and the implementation mechanisms often made the researchers of the time doubt each other's achievements³; unimaginable today, when everyone understands that a self-driving car can anticipate complex accidents better than all the drivers involved, or that a software robot crushes the world chess champion without ever having trained against anyone other than itself.
[separator]
Footnotes:
¹ In biology, homeostasis is the state of steady internal, physical, and chemical conditions maintained by living systems.
² The Automatic Computing Engine (ACE) was an early British electronic serial stored-program computer designed by Alan Turing.
³ With regard to Ashby's Homeostat, the cyberneticist Julian Bigelow famously asked “whether this particular model has any relation to the nervous system? It may be a beautiful replica of something, but heaven only knows what.”
References:
Steve Battle – “Ashby’s Mobile Homeostat”
Margaret A. Boden – “Mind as Machine, A History of Cognitive Science”
Margaret A. Boden – “Creativity & Art, Three Roads to Surprise”
Stefano Franchi, Francesco Bianchini – “The Search for a Theory of Cognition: Early Mechanisms and New Ideas”
“My goal is simple. It is a complete understanding of the universe, why it is as it is, and why it exists at all”, said Stephen Hawking, the famous theoretical physicist and cosmologist of the 20th century. The quote emphasizes that he was not one to settle for an easy challenge, a trait that we hope lies at the basis of every individual in our team. The task he set for himself was too large for an individual to complete in a lifetime but, even so, the renowned British physicist accomplished substantial parts of it by leading the world to understand bits of the universe.
Stephen Hawking devoted all his resources to the study of black holes, individually and in collaboration with other acclaimed researchers. His debut took place in 1970 when, together with Sir Roger Penrose, he established the theoretical basis (the Penrose-Hawking singularity theorems) for the formation of black holes. Their prediction was confirmed by recent observational experiments (2015-2019) at the Laser Interferometer Gravitational-Wave Observatory (LIGO), which detected gravitational waves emitted by colliding (merging) black holes.
From the same theoretical basis followed the expansion of black holes (which translates into an increase in the area of a black hole's event horizon) as they absorb matter and energy from their vicinity. According to the second law of thermodynamics, the entropy of a black hole can only increase and, since entropy is a function of energy with an associated temperature, scientists wanted to know how high the temperature of a black hole can go. Here comes perhaps his most significant contribution to the field so far, namely Hawking radiation, which may be responsible for keeping the temperature below a „certain limit”. He uncovered that black holes, once thought to be static, unchanging and defined only by their mass, charge and spin, are actually ever-evolving engines that emit radiation and evaporate over time. Although this contribution has not yet been confirmed by any experiment, which is why Hawking did not win the Nobel Prize in his lifetime, it is the only result widely recognized by physicists in the field as support for a unifying theory of quantum mechanics and gravity.
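For reference, the two textbook results behind this paragraph (standard physics, not quoted from the article) tie a black hole's entropy to the area A of its event horizon and its Hawking temperature to its mass M; the entropy grows with the area, while the temperature falls as the hole gets heavier:

```latex
% Bekenstein–Hawking entropy and Hawking temperature (standard formulas)
S_{\mathrm{BH}} = \frac{k_{B}\,c^{3}\,A}{4\,\hbar\,G},
\qquad
T_{\mathrm{H}} = \frac{\hbar\,c^{3}}{8\,\pi\,G\,M\,k_{B}}
```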
The next question for the scientific world was, logically, whether the radiation emitted by the black hole preserves the information that came in with the ingested matter, even in a scrambled form. For many years Hawking did not believe so and, characteristically for him, proposed a bet in 1997 (the Thorne-Hawking-Preskill bet). In 2004 Hawking updated his own theory, stating that the black hole's event horizon is not really a "firewall" but rather an "apparent horizon" that enables energy and information to escape (from the quantum theory standpoint), thus declaring himself the loser of the bet. Moreover, he considered that he had thus corrected the biggest mistake of his life in the field. Neither Kip Thorne, who sided with him in the bet against John Preskill, nor half of the scientific world is convinced of this update today, two years after Hawking's death. In the absence of solid experimental evidence (which, among other things, would have to support a quantum theory of gravity), the question of whether and how information leaks from a black hole (through Hawking radiation) remains open.
The Internet has become an intrinsic part of our everyday life, whether you are interested in the threats it poses from a cybersecurity point of view or you are simply enjoying the many advantages it offers. Not so long ago, though, you had to be a visionary to imagine the power it was going to hold in the future. Microsoft wanted to get into the browser game as soon as possible after Netscape Communications Corporation became the web browser industry leader, shortly after the release of its flagship browser, Netscape Navigator, in October 1994.
[separator]
Soon after, Microsoft licensed from Spyglass Inc. the Mosaic software that would later be used as the basis for the first version of Internet Explorer. Spyglass was an Internet software company, founded by students at the Illinois Supercomputing Center, that managed to develop one of the earliest browsers for navigating the web. They began distributing their software, made up to $7 million out of it, and waited an entire year before going public, which happened exactly on this day, 25 years ago.
Microsoft developed the functionality of the Internet Explorer browser and embedded it in the core Windows operating system for the better part of the last 25 years. To this day they are still providing the old Internet Explorer 11 (the latest supported version) with security patches, but on newer operating systems they are replacing it with their own Microsoft Edge browser, which in turn is being replaced this year with a brand new Microsoft Edge browser. Confusing, right? The main difference between the old Edge browser and the new Edge browser is that the latter is based on Google's Chromium web engine and has nothing to do with Microsoft's old code base.
But until the new Edge browser becomes the default choice on Microsoft operating systems, let's take a look at the current Edge browser and its relationship with the old Internet Explorer.
The already „old” Microsoft Edge has more in common with Internet Explorer than you might think, especially when it comes to security flaws.
Given that the number of vulnerabilities found in Edge is far below that of Internet Explorer, it's reasonable to say that Edge looks like a more secure browser. But is Edge really more secure than Internet Explorer?
According to a Microsoft blog post from 2015, the software giant's Edge browser, an exclusive for Windows 10, is said to have been designed to "defend users from increasingly sophisticated and prevalent attacks."
In doing that, Edge scrapped older, insecure, or flawed plugins or frameworks, like ActiveX or Browser Helper Objects. That already helped cut a number of possible drive-by attacks traditionally used by hackers. EdgeHTML, which powers Edge's rendering engine, is a fork of Trident, which still powers Internet Explorer.
However, it's not clear how much of Edge's code is still based on old Internet Explorer code.
When asked, Microsoft did not give much away. They said that "Edge shares a universal code base across all form factors without the legacy add-on architecture of Internet Explorer. Designed from scratch, Microsoft does selectively share some code between Edge and Internet Explorer, where it makes sense to do so."
Many security researchers say that overlapping libraries are where you get vulnerabilities that aren't specific to either browser: when you're working on a project as large as a major web browser, it's highly unlikely that you would throw out all the project-specific code and the underlying APIs that support it. There are a lot of APIs used by the web browser that will still be common between the two. If you load Microsoft Edge and Internet Explorer on a system, you will notice that both of them load a number of overlapping DLLs.
The big question is how much of that Internet Explorer code remains in Edge, and crucially, if any of that code has any connection to the overlap of flaws found in both browsers that poses a risk to Edge users.
The bottom line is that it's hard, if not impossible, to say whether one browser is more or less secure than another.
A "critical" patch, which fixes the most severe of vulnerabilities, is a moving scale and has to consider the details of the flaw, as well as if it's being exploited by attackers. With an unpredictable number of flaws found each month coupled with their severity ratings, a browser's security worth can vary month by month.
As history has shown us, in the last 5 years the Edge browser has had no fewer than 615 security vulnerabilities, while Internet Explorer almost doubles that figure, with 1,030.
Microsoft's decision to adopt the Chromium open-source code to power its new Edge browser could mean a sooner-than-expected end of support for Internet Explorer and the end of support for the code base shared with the „old” Edge browser. And that's a good thing for the security of users who only use the browser provided by the operating system itself (7.76% Microsoft Edge, 5.45% Internet Explorer, as of April 2020).
Some of us can't imagine life without Siri or another virtual assistant to help us, guide us and save time throughout the day. Even though it has so many advantages, the fact that, in order to work properly, it must always be listening raises serious privacy concerns.
[separator]
The first step that led to the creation of today's speaking devices was an educational toy named the Speak & Spell, announced back in 1978 by Texas Instruments. It offered a number of word games, similar to hangman, and a spelling test. What was revolutionary about it was its use of a voice synthesis system that electronically simulated the human voice.
The system was created as an offshoot of the pioneering research into speech synthesis developed by a team that included Paul Breedlove as the lead engineer. Breedlove was the one that came up with the idea of a learning aid for spelling. Breedlove’s plan was to build upon bubble memory, another TI research effort, and as such it involved an impressive technical challenge: the device should be able to speak the spelling word out loud.
The team analyzed several options regarding how to use the new technology, and the winner was this $50 toy idea.
With Apple's introduction of iOS 12 for all of its supported mobile devices came a powerful new utility for automating common tasks, called Siri Shortcuts. This new feature can be enabled via third-party developers in their apps, or custom built by users downloading the Shortcuts app from the App Store. Once downloaded and installed, it grants the power of scripting to perform complex tasks on users' personal devices.
Siri Shortcuts can be a useful tool for both users and app developers who wish to enhance the level of interaction users have with their apps. But this access can potentially also be abused by malicious third parties. According to X-Force IRIS research, there are security concerns that should be taken into consideration in using Siri Shortcuts.
For instance, Siri Shortcuts can be abused for scareware, a pseudo-ransom campaign that tries to trick potential victims into paying a criminal by convincing them that their data is in the hands of a remote attacker.
Using native shortcut functionality, a script could be created to transmit ransom demands to the device's owner using Siri's voice. To lend more credibility to the scheme, attackers can automate data collection from the device and have it send back the user's current physical address, IP address, contents of the clipboard, stored pictures/videos, contact information and more. This data can be displayed to the user to convince them that an attacker can make use of it unless they pay a ransom.
To move the user to the ransom payment stage, the shortcut could automatically access the Internet, browsing to a URL that contains payment information via cryptocurrency wallets, and demand that the user pay up or see their data deleted or exposed on the Internet.
Apple prefers quick access over device security for Siri, which is why the iOS default settings allow Siri to bypass the passcode lock. However, allowing Siri to bypass the passcode lock could allow a thief or hacker to make phone calls, send texts, send e-mails, and access other personal information without having to enter the security code first.
There is always a balance that must be struck between security and usability. Users and software developers must choose how much perceived inconvenience from security features they are willing to endure in order to keep their devices safe, versus how quickly and easily they want to be able to use them.
Whether you prefer instant access to Siri without having to enter a passcode is completely up to you. In some cases, while you're in the car, for example, driving safely is more important than data security. So, if you use your iPhone in hands-free mode, keep the default option, allowing the Siri passcode bypass.
As the Siri feature becomes further advanced and the amount of data sources it is tapped into increases, the data security risk for the screen lock bypass may also increase. For example, if developers tie Siri into their apps in the future, Siri could provide a hacker with financial information if a Siri-enabled banking app is running and logged in using cached credentials and a hacker asks Siri the right questions.
Looking at how cybersecurity and complex architectures have become integrated into today's IT field, we cannot appreciate enough the unprecedented security work done by Netscape Corporation. Besides developing Navigator, the browser that would change the way the Internet was used by the masses, it also pioneered the Secure Sockets Layer (SSL) protocol, which enabled privacy and consumer protection.
[separator]
The underlying technology used for their browsers at that time, Navigator and Communicator, still powers today’s security standard, Transport Layer Security (TLS).
Back in 1996, The Washington Post published an article speculating that Netscape might one day turn into a challenger for Microsoft, given how fast the software startup was growing. It seems they were right since, years later, the source code used for Netscape Navigator 4.0 would lead to the creation of Mozilla and its Firefox browser, one of the best alternatives to Google Chrome, the browser which, in 2016, managed to dethrone Microsoft's Internet Explorer.
Although all modern browsers are using the SSL and TLS protocols pioneered by Netscape Corporation, these protocols had their fair share of vulnerabilities over the years. So, remember that using the latest browser, without any other security solution, doesn’t mean that you are protected against the latest attacks.
Here are some of the most prominent attacks involving breaches of the SSL/TLS protocols that had surfaced in recent years:
POODLE
The Padding Oracle On Downgraded Legacy Encryption (POODLE) attack was published in October 2014 and exploits two aspects: the fact that some servers/clients still support SSL 3.0 for interoperability and compatibility with legacy systems, and a vulnerability within SSL 3.0 that is related to block padding.
The client initiates the handshake and sends a list of supported SSL/TLS versions. An attacker intercepts the traffic, performing a man-in-the-middle (MITM) attack, and impersonates the server until the client agrees to downgrade the connection to SSL 3.0.
The SSL 3.0 vulnerability is in the Cipher Block Chaining (CBC) mode. Block ciphers require blocks of fixed length. If data in the last block is not a multiple of the block size, extra space is filled by padding. The server ignores the content of padding. It only checks if padding length is correct and verifies the Message Authentication Code (MAC) of the plaintext. That means that the server cannot verify if anyone modified the padding content.
An attacker can decipher an encrypted block by modifying padding bytes and watching the server response. It takes a maximum of 256 SSL 3.0 requests to decrypt a single byte. This means that once every 256 requests, the server will accept the modified value. The attacker does not need to know the encryption method or key. Using automated tools, an attacker can retrieve the plaintext character by character. This could easily be a password, a cookie, a session or other sensitive data.
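The sketch below, a toy model rather than a faithful SSL 3.0 implementation, shows why that roughly 1-in-256 acceptance rate leaks a byte. It assumes AES-CBC as the record cipher, a "server" that checks only the final padding-length byte (as SSL 3.0 did), and a client that re-encrypts the same request with a fresh key and IV on every attempt; the cookie value is invented for the demo.

```python
# Toy padding-oracle demo in the spirit of POODLE: the man-in-the-middle swaps
# the final (all-padding) block for a target ciphertext block and waits for the
# lax SSLv3-style padding check to accept it, which reveals one plaintext byte.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK = 16
SECRET_COOKIE = b"session=top-secret-cookie-value"   # hypothetical secret

def encrypt_record(key, iv, plaintext):
    # SSLv3-style padding: random filler bytes, last byte = their count.
    pad = BLOCK - (len(plaintext) % BLOCK)
    padded = plaintext + os.urandom(pad - 1) + bytes([pad - 1])
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return enc.update(padded) + enc.finalize()

def padding_ok(key, iv, ciphertext):
    # The receiver validates only the final length byte, never the filler
    # bytes themselves -- this lax check is the oracle.
    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    plain = dec.update(ciphertext) + dec.finalize()
    return plain[-1] == BLOCK - 1          # expecting a full block of padding

def recover_last_byte_of_block(target_index):
    attempts = 0
    while True:
        attempts += 1
        key, iv = os.urandom(16), os.urandom(16)
        # Request shaped so the cookie fills whole blocks and the final block
        # is pure padding (POODLE arranges this by tweaking the URL length).
        request = b"GET /aaaaaaaaa HTTP/1.1\r\nCookie: " + SECRET_COOKIE + b"\r\n"
        request += b"A" * (-len(request) % BLOCK)
        ct = bytearray(encrypt_record(key, iv, request))
        blocks = [iv] + [bytes(ct[i:i + BLOCK]) for i in range(0, len(ct), BLOCK)]
        # MITM step: overwrite the final (padding) block with the target block.
        ct[-BLOCK:] = blocks[target_index + 1]
        if padding_ok(key, iv, bytes(ct)):
            # Accepted roughly once in 256 tries; the XOR below leaks one byte.
            byte = (BLOCK - 1) ^ blocks[-2][-1] ^ blocks[target_index][-1]
            return byte, attempts

byte, attempts = recover_last_byte_of_block(2)
print(f"recovered byte {byte:#04x} ({chr(byte)!r}) after {attempts} attempts")
```

In the real attack, script running in the victim's browser shifts the request so that each cookie byte in turn lands at the end of a block, and this recovery is repeated byte by byte.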
BEAST
The Browser Exploit Against SSL/TLS (BEAST) attack was disclosed in September 2011. It applies to SSL 3.0 and TLS 1.0, so it affects browsers that support TLS 1.0 or earlier protocols. An attacker can decrypt data exchanged between two parties by taking advantage of a vulnerability in the implementation of the Cipher Block Chaining (CBC) mode in TLS 1.0.
This is a client-side attack that uses the man-in-the-middle technique. The attacker uses MITM to inject packets into the TLS stream. This allows them to guess the Initialization Vector (IV) used with the injected message and then simply compare the results to the ones of the block that they want to decrypt.
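A hedged sketch of the underlying predicate is below: because TLS 1.0 reuses the last ciphertext block as the next IV, an attacker who can inject one chosen plaintext block into the victim's stream can test a guess against an earlier block. AES-CBC stands in for the record layer, and the key, cookie and message layout are invented for the demo.

```python
# Toy version of the BEAST guess-checking trick: with a predictable IV, the
# encryption of (guess XOR C_prev XOR next_IV) collides with the target
# ciphertext block exactly when the guess equals the secret plaintext block.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)

def encrypt_record(iv: bytes, plaintext: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return enc.update(plaintext) + enc.finalize()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# The victim sends a record whose second block holds a secret (cookie bytes).
iv0 = os.urandom(16)
secret_block = b"session=7hGb21xQ"          # exactly one 16-byte block, made up
record1 = encrypt_record(iv0, b"GET / HTTP/1.1\r\n" + secret_block)
c_prev, c_target = record1[0:16], record1[16:32]

# TLS 1.0 flaw: the IV of the next record is the last ciphertext block sent.
predictable_iv = record1[-16:]

def guess_is_right(guess: bytes) -> bool:
    injected = xor(xor(guess, c_prev), predictable_iv)   # chosen plaintext block
    probe = encrypt_record(predictable_iv, injected)     # victim encrypts it
    return probe[:16] == c_target                        # collision iff guess == secret

print(guess_is_right(b"session=XXXXXXXX"))   # False
print(guess_is_right(b"session=7hGb21xQ"))   # True
```

The real exploit narrows this down to one unknown byte at a time by controlling block boundaries, which is what made it practical against session cookies.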
CRIME
The Compression Ratio Info-leak Made Easy (CRIME) vulnerability affects TLS compression. The compression method is included in the Client Hello message and it is optional: you can establish a connection without compression. Compression was introduced to SSL/TLS to reduce bandwidth, and DEFLATE is the most common compression algorithm used.
One of the main techniques used by compression algorithms is to replace repeated byte sequences with a pointer to the first instance of that sequence. The bigger the sequences that are repeated, the higher the compression ratio.
All the attacker has to do is inject different characters and then monitor the size of the response. If the response is shorter than the initial one, the injected character is contained in the cookie value and so it was compressed. If the character is not in the cookie value, the response will be longer.
Using this method an attacker can reconstruct the cookie value using the feedback that they get from the server.
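The length side channel is easy to reproduce with any DEFLATE implementation. The minimal sketch below, using a made-up cookie value, compresses an attacker-controlled path together with the secret, the way a TLS-compressed request would be, and shows that a guess which repeats the secret produces a measurably shorter output.

```python
# Minimal illustration of the CRIME length leak: data the attacker controls is
# compressed in the same stream as a secret, so the compressed size reveals
# whether the attacker's guess repeats the secret.
import zlib

def compressed_len(attacker_path: bytes) -> int:
    secret = b"Cookie: session=7hGb21xQ\r\n"    # illustrative secret cookie
    request = b"GET /" + attacker_path + b" HTTP/1.1\r\n" + secret
    return len(zlib.compress(request, 9))

# A guess that matches the cookie lets DEFLATE emit one long back-reference
# instead of a run of literals, so the output shrinks by several bytes.
print("matching guess:", compressed_len(b"Cookie: session=7hGb21xQ"))
print("wrong guess   :", compressed_len(b"Cookie: session=XXXXXXXX"))
```

Refining this into byte-at-a-time recovery, as the actual attack does, takes a little more care because Huffman coding can hide sub-byte differences, but the principle is exactly the one above.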
BREACH
The Browser Reconnaissance and Exfiltration via Adaptive Compression of Hypertext (BREACH) vulnerability is very similar to CRIME, but BREACH targets HTTP compression, not TLS compression. This attack is possible even if TLS compression is turned off. An attacker forces the victim’s browser to connect to a TLS-enabled third-party website and monitors the traffic between the victim and the server using a man-in-the-middle attack.
Heartbleed
Heartbleed was a critical vulnerability found in the heartbeat extension of the popular OpenSSL library. This extension is used to keep a connection alive as long as both parties are still there.
The client sends a heartbeat message to the server with a payload that contains data and the size of the data (and padding). The server must respond with the same heartbeat request, containing the data and the size of data that the client sent.
The Heartbleed vulnerability was based on the fact that, if the client sent a false data length, the server would respond with the data received from the client followed by whatever adjacent data from its own memory was needed to meet the length specified by the sender.
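A conceptual sketch of that missing bounds check is shown below. The "server memory" is a stand-in byte buffer, not OpenSSL's real heap layout, and its contents are invented; the point is only the difference between trusting the claimed length and checking it.

```python
# Conceptual sketch of the Heartbleed flaw: the server copies as many bytes as
# the request *claims* to contain, not as many as it actually contains.
server_memory = bytearray(
    b"...heartbeat payload lands here..."
    b"user=admin&password=hunter2; PRIVATE KEY MATERIAL; session tokens..."
)

def vulnerable_heartbeat(payload: bytes, claimed_length: int) -> bytes:
    server_memory[0:len(payload)] = payload
    # BUG: respond with `claimed_length` bytes instead of len(payload),
    # leaking whatever happens to sit next to the payload in memory.
    return bytes(server_memory[0:claimed_length])

def fixed_heartbeat(payload: bytes, claimed_length: int) -> bytes:
    if claimed_length != len(payload):     # the post-patch bounds check
        return b""                         # silently drop malformed requests
    return payload

# The attacker sends a 4-byte payload but claims it is 90 bytes long.
print(vulnerable_heartbeat(b"ping", 90))
print(fixed_heartbeat(b"ping", 90))
```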
Leaking unencrypted data from server memory can be disastrous. There have been proof-of-concept exploits of this vulnerability in which the attacker would get the private key of the server. This means that an attacker would be able to decrypt all the traffic to the server. Server memory may contain anything: credentials, sensitive documents, credit card numbers, emails, etc.
Bleichenbacher
This relatively new cryptographic attack can break encrypted TLS traffic, allowing attackers to intercept and steal data previously considered safe and secure.
This downgrade attack works even against the latest version of the TLS protocol, TLS 1.3, released in 2018 and considered to be secure.
This cryptographic attack is a variation of the original Bleichenbacher oracle attack and represents yet another way to break RSA PKCS#1 v1.5, the most common RSA configuration used to encrypt TLS connections nowadays. Besides TLS, this new Bleichenbacher attack also works against Google's new QUIC encryption protocol.
The attack leverages a side-channel leak via cache access timings of these implementations in order to break the RSA key exchanges of TLS implementations.
Even the newer version of the TLS 1.3 protocol, where RSA usage has been kept to a minimum, can be downgraded in some scenarios to TLS 1.2, where the new Bleichenbacher attack variation works.
In most cases, the best way to protect yourself against SSL/TLS-related attacks is to disable older protocol versions. This is even a standard requirement for some industries: for example, June 30, 2018 was the deadline for disabling support for SSL and early versions of TLS (up to and including TLS 1.0) according to the PCI Data Security Standard. The Internet Engineering Task Force (IETF) has released advisories concerning the security of SSL, and the formal deprecation of TLS 1.0 and 1.1 by the IETF is expected soon.
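As a small, hedged example of what "disabling older protocol versions" looks like in practice, the snippet below uses only Python's standard library to open a connection that refuses anything older than TLS 1.2; the host name is just a placeholder.

```python
# Enforce a TLS 1.2+ floor on an outgoing connection using the standard library.
import socket
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # SSL 3.0, TLS 1.0 and 1.1 refused

with socket.create_connection(("www.example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="www.example.com") as tls:
        print("negotiated protocol:", tls.version())   # e.g. TLSv1.2 or TLSv1.3
```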
The Los Angeles Times reported that Father Leonard Boyle was working to put the Vatican Library on the World Wide Web through a site funded in part by IBM: “Bringing the computer to the Middle Ages and the Vatican library to the world.” Boyle computerized the library's catalog and placed manuscripts and paintings on the website. Today, thousands of manuscripts and incunabula have been digitized and are publicly available on the Vatican Library website, along with a number of other offerings, including images and descriptions of the Vatican's extensive numismatic collection, which dates back to Roman times.
[separator]
The Vatican's digital presence soon caught hackers' attention, and in August 2011 the elusive hacker movement known as Anonymous launched a cyber-attack against it. Although the Vatican has seen its fair share of digital attacks over the years, what makes this particular one special is the fact that it was the first Anonymous attack to be identified and tracked from start to finish by security researchers, providing a rare glimpse into the recruiting, reconnaissance and warfare tactics used by the shadowy hacking collective.
The campaign against the Vatican, which did not receive wide attention at the time, involved hundreds of people, some with hacking skills and some without. A core group of participants openly drummed up support for the attack using YouTube, Twitter and Facebook. Others searched for vulnerabilities on a Vatican website and, when that failed, enlisted amateur recruits to flood the site with traffic, hoping it would crash.
Anonymous, which first gained widespread notice with an attack on the Church of Scientology in 2008, has since carried out hundreds of increasingly bold strikes, taking aim at perceived enemies including law enforcement agencies, Internet security companies and opponents of the whistle-blower site WikiLeaks.
The group’s attack on the Vatican was confirmed by the hackers and it may be the first end-to-end record of a full Anonymous attack.
The attack was called “Operation Pharisee” in a reference to the sect that Jesus called hypocrites. It was initially organized by hackers in South America and Mexico before spreading to other countries, and it was timed to coincide with Pope Benedict XVI’s visit to Madrid in August 2011 for World Youth Day, an annual international event that regularly attracts more than a million young Catholics.
Hackers initially tried to take down a website set up by the church to promote the event, handle registrations and sell merchandise. Their goal – according to YouTube messages delivered by an Anonymous figure in a Guy Fawkes mask – was to disrupt the event and draw attention.
The hackers spent weeks spreading their message through their own website and social media channels like Twitter and Flickr. Their Facebook page encouraged volunteers to download free attack software so that they might join the attack.
It took the hackers 18 days to recruit enough people. Then the reconnaissance began. A core group of roughly a dozen skilled hackers spent three days poking around the church’s World Youth Day site looking for common security holes that could let them inside. Probing for such loopholes used to be tedious and slow, but the advent of automated tools made it possible for hackers to do this around the clock.
In this case, the scanning software failed to turn up any gaps. So, the hackers turned to a brute-force approach – a DDoS attack. Even unskilled supporters could take part in this from their computers or smartphones.
Over the course of the campaign’s final two days, Anonymous enlisted as many as a thousand people to download attack software, or directed them to custom-built websites that let them participate using their cellphones. Visiting a particular web address caused the phones to instantly start flooding the target website with hundreds of data requests each second, with no special software required.
On the first day, the denial-of-service attack resulted in 28 times the normal traffic to the church site, rising to 34 times the next day. Hackers involved in the attack, who did not identify themselves, said, through a Twitter account associated with the campaign, that the two-day effort succeeded in slowing the site’s performance and making the page unavailable “in several countries”.
Anonymous moved on to other targets, including an unofficial site about the pope, which the hackers were briefly able to deface.
In the end, the Vatican’s defenses held up because, unlike other hacker targets, it invested in the infrastructure needed to repel both break-ins and full-scale assaults, using some of the best cybersecurity technology available at the time.
Researchers who have followed Anonymous say that despite its lack of success in this and other campaigns, their attacks show the movement is still evolving and, if anything, emboldened.