We Live Security

Articles from We Live Security Magazine

Two US cities opt to pay $1m to ransomware operators

Source: We Live Security Magazine

A few days apart, two cities in Florida cave in to extortionists’ demands in hopes of restoring access to municipal computer systems

To pay or not to pay ransomware attackers? And if I do pay up, will I get my data back?

These are truly pressing questions, and not only for ransomware victims; as the recommended reading section at the end of this article shows, we too have spilled quite some digital ink on answering them. (The short answer to both questions is ‘no’ but, for better insight, as well as for reasons why they may not be the right questions to ask in the first place, you may want to navigate to those articles.)

But why bring this up now, anyway?

In recent weeks, two cities in Florida have found themselves in a similar quandary after their computer systems were struck by ransomware. As it turns out, they both decided to cough up some hefty money to the cyber-extortionists.

The first to fall victim, on May 29th, was the small city of Riviera Beach, where a police department employee opened a malicious email attachment, unwittingly unleashing mayhem on the city’s computer systems and forcing its staff to turn to pen and paper.

Fast forward three weeks and, heeding advice from external consultants, the municipality’s officials authorized its insurance carrier to pay 65 bitcoins (close to US$600,000 at the time) to the cybercriminals in hopes of regaining access to its computer systems, reads a New York Times report.

Barely a few days flew by before another municipality in the Sunshine State gave in to extortionists’ demands. Lake City – which has been reeling from a ransomware attack going back to June 10th – authorized the payment of 42 bitcoins (some US$460,000) on Monday, with the actual ‘transaction’ to follow on the next day, reads the report by the local WCJB-TV.

All but US$10,000 was actually covered by what city officials described as “a good comprehensive insurance plan in place that does cover this type of an incident”. The city is said to have made multiple attempts to get its systems unlocked and up and running again after the incident disabled all of its online systems. In fact, the city’s police department said two days after the attack that recovery was going well, but apparently the efforts came to nothing.

In neither incident is there any word on what kind of prevention or business continuity measures, if any (notably backups), were in place, or why they weren’t successful. Nor is it immediately clear whether the post-payment recovery efforts have been successful.

According to a recent report by threat intelligence provider Recorded Future, state and local governments in the US reported 169 ransomware incidents between 2013 and April 2019.

Recommended reading

Ransomware: To pay or not to pay?
Ransomware: To pay or not to pay? (another article of the same name)
Ransomware: Should you pay the cybercriminals?
FBI: No, you shouldn’t pay ransomware extortionists
The cyber insurance question
The economics of ransomware recovery
Ransomware: Expert advice on how to keep safe and secure

26 Jun 2019 – 11:05PM

Stopping stalkerware: What needs to change?

What technology makers and others can – and should – do to counter the kind of surveillance that starts at home

Regardless of whose statistics you read, a disturbingly high percentage of women and men will experience intimate partner violence or harassment in their lifetime. Worryingly, technology is being used more and more frequently as a tool to coerce and intimidate victims, with social media, smart phones and smart home devices being among the most popular tools for these purposes. This will continue to be the case until we change how technology is developed and implemented.

One of the first articles I wrote upon joining ESET in 2013 was about how domestic violence survivors could help protect themselves. While I had helped friends defend themselves against ongoing surveillance by domestic partners before writing that article, I hadn’t realized until researching it how incredibly challenging it can be to deal with a more technically enabled abuser.

That article is also one of a very small handful of articles I’ve ever been asked to update with more current information, because the issue it describes continues to be such a huge problem for so many people. This is clearly not a problem that’s getting better without a much stronger effort on the part of a lot of different people, including anti-malware vendors.


There are many challenging aspects to combating harassment, due in large part to the failure of legislation to keep up with technology, as well as the failure of technology manufacturers to design products that resist misuse. This makes it hard for defenders to address these problems, which have consequently been allowed to pile up.

Laws combating stalking and domestic violence have been woefully inadequate and slow in being adopted at all, much less enforced. As such, it should be no surprise that laws surrounding “digital versions” of these crimes are almost non-existent, and law enforcement all too often has little capability to pursue crimes committed over the Internet.

Manufacturers of legitimate devices and services that were not designed to prevent misuse shrug their way past complaints when their products are used to harass or stalk people. Companies whose products are designed to monitor people in legally acceptable ways (such as employee or child trackers) shake off questions when those products are built so that they can also be used for legally questionable purposes (such as surreptitiously surveilling a spouse).

The existence of these legal grey areas has the knock-on effect of hamstringing organizations that seek to defend victims. It’s a whole lot harder to fight against something that’s probably legal, even if it’s being used in ways that are at least deeply unethical if not illegal.

Tools as weapons

If you’ve been through a security line in an airport in the last decade or so, you’re probably aware of the occasionally perplexing list of items that are prohibited in carry-on luggage. Objects that are designed as weapons, such as guns and knives, are obviously prohibited. But sporting equipment, hand tools, crafting implements like knitting needles, and even large quantities of liquids are also prohibited on most flights.

Air travel is now one of those situations where public sentiment is generally in favor of very stringent methods of excluding access to potentially dangerous items, even when they’re considered innocent in 99% of our daily activities. Most people don’t use screwdrivers for harmful or illegal purposes, but the risk of misuse is considered too great when a bunch of people are locked into a metal tube at 35,000 feet, so we have collectively agreed not to allow access to these items while we’re in flight. But in almost every other situation, screwdrivers are totally unregulated, and you’d be hard pressed to find anyone who would argue that it should be otherwise.

Airports have set up special infrastructure that allows them to apply a higher level of scrutiny, where they can exclude items that are usually considered innocent. Outside the airport, you have to use different techniques to protect yourself against harm from traditional weapons as well as tools that can also be used as weapons.

For most people, during most of their lifetime, an appropriate level of caution means being vigilant against traditional weapons rather than worrying about the presence of hand tools or sporting goods. For people in the midst of a harassment or domestic violence situation, however, it is entirely reasonable to consider the possibility of ordinary household items being used as weapons.

Paranoia as a powerful defense

Anti-malware vendors are in an interesting place, with their products being used by people in ordinary threat scenarios as well as by those who are in very extraordinary threat scenarios. Most people would find it somewhere between bothersome and extremely problematic to be warned about every bit of code on their devices that could be used for harmful purposes. There would be a lot of alerts if you were to be warned about the presence of every figurative screwdriver, frying pan, or baseball bat in your midst.

But it’s entirely reasonable for you to want those warnings sometimes, especially if the context of your situation warrants an extra degree of caution. Each company that makes a security product has to decide what an appropriate level of caution is for its customer base.

That decision is generally based on the specific capabilities and aims of the company’s products, as well as on feedback from its customers. Each company strikes a balance so that people get the best level of protection without being so inundated with warnings that they develop alert fatigue. And over time, that balance inevitably shifts as product capabilities and the threatscape change.

One tactic that a lot of security companies have taken is to allow some customizability within their products. The default level of protection is what should be appropriate for the largest number of customers; you can tweak individual settings to increase or decrease protection if your situation requires something different.

If you’re in a situation where extra caution is warranted – especially in the case of domestic violence or stalking – it’s a good idea to lock your system down as much as possible. This includes enabling the most paranoid settings on your security software.

In anti-malware software, this usually means enabling scanning for potentially unsafe or unwanted applications or using advanced detection mechanisms that will alert you to the presence of files that may pose a threat. If possible, contact your security vendor’s technical support so they can help you change your settings to those most appropriate to your situation.

The tricky thing about stalking and domestic violence is that it begins subtly: victims may not know they’re being targeted until it reaches a truly dangerous level. Technology makers must keep this in mind, so we can maintain a balance that helps protect our customers.

Changes from technology makers

The changes needed to protect people in extraordinary circumstances against harmful code and devices are not solely up to users. Technology makers are an absolutely crucial part of this process as well. The expectation is not that companies can completely prevent harm from people misusing their products, but that the currently very high risk of misuse should be decreased.

Phone service providers and smartphone makers should enable devices to block numbers quickly, completely, and permanently, prohibiting contact by both calls and messages. Other communication media – including email, instant messaging, and social networking sites – need this functionality as well; if your platform offers a way to contact someone, there needs to be a way to rescind that ability for selected individuals.

This is not a perfect solution, as persistent abusers can usually find ways to create new accounts to resume their torment. Service providers that can apply anti-fraud protections should be able to somewhat limit this activity.

App stores need to specifically address what activities are acceptable for apps, and specifically prohibit products that operate in stealth mode such that they cannot be easily detected once the product has been installed. They need to prohibit searches related to illegal activities, especially those relating to domestic abuse. And they need to be consistent in enforcing these policies regardless of the size of the app developer.

Device and app manufacturers should be designing with privacy and security in mind, not bolting protections on after the fact. These companies need to have privacy policies in place that are published in highly visible places, as well as instructions regarding how to report security issues, including the relevant contact information. Internally, they should also have incident response plans in place so that they can quickly address reported problems.

And last, but certainly not least, security vendors need to play their part. We need to have consistent policies about what spyware and stalkerware products will be detected with default settings, as well as which products will be detected with advanced settings. We also need to continue innovating ways of offering more flexible options for people to increase detection when their threat model is different from what is typical.

25 Jun 2019 – 11:30AM

Hackers breach NASA, steal Mars mission data

The infiltration was only spotted and stopped after the hackers roamed the network undetected for almost a year

The United States’ National Aeronautics and Space Administration, better known as NASA, suffered a security incident recently that saw hackers make off with sensitive data relating to the agency’s Mars missions, including details about the Curiosity rover.

The breach, which affected NASA’s Jet Propulsion Laboratory (JPL), went undetected for 10 months, reads a report by the NASA Office of the Inspector General (OIG).

“In April 2018 JPL discovered an account belonging to an external user had been compromised and used to steal approximately 500 megabytes of data from one of its major mission systems,” reads the report, attributing the intrusion to an Advanced Persistent Threat (APT) group.

But just as notable is how the breach occurred. It turns out that the hackers exploited a Raspberry Pi, which was attached to the JPL network without authorization, as a launch pad for getting inside and moving laterally across the network.

There’s no word on who was behind the intrusion or, indeed, who connected the diminutive, single-board computer, which can retail for as little as US$25, to the network [As it happens, today saw the unveiling of the device’s fourth incarnation].

What is abundantly clear, however, is that OIG wasn’t impressed with the space agency’s cybersecurity posture.

Dropping the ball

“Over the past 10 years, JPL has experienced several notable cybersecurity incidents that have compromised major segments of its IT network,” reads the scathing report.

And it doesn’t stop at that, going on to list a bit of a litany of shortcomings in NASA’s network security controls that put its systems and data at risk. “Multiple IT security control weaknesses reduce JPL’s ability to prevent, detect, and mitigate attacks targeting its systems and networks, thereby exposing NASA systems and data to exploitation by cybercriminals,” according to the report.

This was also laid bare in the Raspberry Pi incident, which was partly enabled by “reduced visibility into devices connected to its [NASA’s] networks”. This effectively means that new devices added to the network weren’t always subject to a vetting process by a security official and the agency didn’t know the gadget was present on the network.

In addition, the audit noted a lack of network segmentation, which the hackers ultimately exploited to move laterally between various systems connected to a network gateway. The gateway gives external users and NASA’s partners – including foreign space agencies, contractors, and educational institutions – remote access to a shared environment.

Moreover, the audit found that security log tickets – which cover corrective actions such as applying a software patch or updating a system’s configuration – sometimes sat unresolved for more than six months. That’s despite the fact that system administrators had a maximum of 30 days to take such action.

Such sluggish progress helped pave the way for the Raspberry Pi intrusion, as “one of the four compromised systems had not been patched for the vulnerability in a timely manner”.

Also affected were systems involved in NASA’s Deep Space Network (DSN). This ultimately prompted security teams from the Johnson Space Center, which manages the International Space Station, to disconnect from the gateway due to fears that “cyberattackers could move laterally from the gateway into their mission systems, potentially gaining access and initiating malicious signals to human space flight missions that use those systems”.

The report also noted that JPL had not implemented a threat hunting program to “aggressively pursue abnormal activity on its systems for signs of compromise”, relying instead on “an ad hoc process to search for intruders”.

The report outlined 10 recommendations, with NASA agreeing to all but one – to put in place a formal threat hunting process.

24 Jun 2019 – 10:28PM

Privacy legislation may soon affect smaller businesses

Why smaller businesses cannot afford to ignore how they gather, store and protect data

Between breaches and privacy gaffes at global mega-corporations, more people are on edge about protecting digital data. Consumers want to be able to control what companies collect and store, and many businesses want to be able to recoup costs for online services they’re expected to provide free of charge. So far, smaller businesses in the US have been excluded from this excitement. But that exception may be ending sooner rather than later.

Coming soon to a city near you

The General Data Protection Regulation (GDPR) in the European Union has already impacted many larger, international businesses based in the US. The California Consumer Privacy Act (CCPA) will impact many businesses that were too small or local to be affected by GDPR. But the CCPA exempts businesses below a US$25 million revenue threshold; many of these organizations may choose to kick the can down the road rather than to implement security standards such as those laid out in the NIST Cybersecurity Framework.

This may currently seem like a reasonable and cost-effective way of doing business, as many people erroneously consider smaller businesses a less tempting target for criminals. Smaller businesses are, in fact, squarely in the crosshairs of criminals, and are often less able to weather the financial costs associated with a breach. And it may not be long before smaller businesses are legally compelled to comply with security and privacy standards, just like bigger businesses.

Legislation has been proposed in the New York State Senate that goes much further in its proposed protections for consumer privacy. Like the CCPA, the New York Consumer Privacy Act would allow people to find out what information companies are collecting about them, see how they’re sharing that data, request corrections or deletions, or opt out of having their data shared with other organizations. Unlike the CCPA, this privacy legislation would apply to businesses of any size.

It remains to be seen whether this bill will become law in New York as currently written. But whether or not the New York legislation specifically impacts your business, this wave of privacy legislation is only just beginning. It’s likely that privacy legislation will soon be coming to your locale, whether at the city or state level, or even as a federal law of the land.

Smaller businesses cannot afford to ignore how they gather, store and protect data. They may soon be called upon to adhere to the same standards as larger organizations. And smaller businesses may have less access to funding that would allow them to move quickly should they need to rush to address privacy and security issues.

In order to prevent costly compliance issues later, smaller businesses should start preparing now.

Start with risk assessment and security training

To protect your business adequately, it’s important to know what you have to protect. Knowing what assets you have – in terms of both data and devices – will help keep your expenses lower. As a smaller business, you have an advantage in that assessing risks to your organization will likely be a much less complex process than for a larger business.

If you’re not sure where to start, you may wish to check out the NIST Small Business Cybersecurity Corner. If you feel you don’t have the bandwidth or expertise to handle the recommended actions, there are a growing number of security service providers out there that you can hire to help you manage this process.

Even if you don’t have the experience to implement security controls, it’s still important for everyone in your organization to be well versed in good cyber-hygiene practices. It’s the responsibility of everyone in the company to protect your data and devices. This is especially important if your business isn’t large enough to have a full-fledged security department, or if any of the data in your care has been made available to you by your customers. And finally, the good news is that to help bring the people in your company up to speed, training is available that’s both high quality and free or inexpensive.

21 Jun 2019 – 11:30AM

LoudMiner: Cross-platform mining in cracked VST software

The story of a Linux miner bundled with pirated copies of VST (Virtual Studio Technology) software for Windows and macOS

LoudMiner is an unusual case of a persistent cryptocurrency miner, distributed for macOS and Windows since August 2018. It uses virtualization software – QEMU on macOS and VirtualBox on Windows – to mine cryptocurrency on a Tiny Core Linux virtual machine, making it cross-platform. It comes bundled with pirated copies of VST software. The miner itself is based on XMRig (Monero) and uses a mining pool, making it impossible to retrace potential transactions.

At the time of writing, there are 137 VST-related applications (42 for Windows and 95 for macOS) available on a single WordPress-based website with a domain registered on 24 August 2018. The first application – Kontakt Native Instruments 5.7 for Windows – was uploaded on the same day. The size of the apps makes it impractical to analyze them all, but it seems safe to assume they are all Trojanized.

The applications themselves are not hosted on the WordPress-based site, but on 29 external servers, which can be found in the IoCs section. The admins of the site also frequently update the applications with newer versions, making it difficult to track the very first version of the miner.

Regarding the nature of the targeted applications, it is interesting that their purpose is related to audio production; the machines they are installed on should thus have good processing power, and high CPU consumption will not surprise users. These applications are also usually complex, so huge files are not unexpected – something the attackers use to their advantage to camouflage their VM images. Moreover, the decision to use virtual machines instead of a leaner solution is quite remarkable and not something we routinely see.

Here are some examples of applications, as well as some comments you can find on the website:

  • Propellerhead Reason
  • Ableton Live
  • Sylenth1
  • Nexus
  • Reaktor 6
  • AutoTune

Figure 1. Comment #1 from the “admin”

Figure 2. Comment #2 from the “admin”

We found several forum threads of users complaining about a qemu-system-x86_64 process taking 100% of their CPU on their Mac:

Figure 3. User report #1 (https://discussions.apple.com/thread/250064603)

Figure 4. User report #2 (https://toster.ru/q/608325)

A user named “Macloni” (https://discussions.apple.com/thread/8602989) said the following:

“Unfortunately, had to reinstall OSX, the problem was that Ableton Live 10, which I have downloaded it from a torrent site and not from the official site, installs a miner too, running at the background causing this.” The same user attached screenshots of the Activity Monitor indicating two processes – qemu-system-x86_64 and tools-service – taking 25% of CPU resources and running as root.

The general idea of both macOS and Windows analyses stays the same:

  1. An application is bundled with virtualization software, a Linux image and additional files used to achieve persistence.
  2. User downloads the application and follows attached instructions on how to install it.
  3. LoudMiner is installed first, the actual VST software after.
  4. LoudMiner hides itself and becomes persistent on reboot.
  5. The Linux virtual machine is launched and the mining starts.
  6. Scripts inside the virtual machine can contact the C&C server to update the miner (configuration and binaries).

While analyzing the different applications, we’ve identified four versions of the miner, mostly based on how it’s bundled with the actual software, the C&C server domain, and something we believe is a version string created by the author.


We’ve identified three macOS versions of this malware so far. All of them include the dependencies needed to run QEMU in installerdata.dmg, from which all files are copied to /usr/local/bin, with appropriate permissions set along the way. Each version of the miner can run two images at once, each taking 128 MB of RAM and one CPU core. Persistence is achieved by adding plist files to /Library/LaunchDaemons with RunAtLoad set to true. They also have KeepAlive set to true, ensuring the process will be restarted if stopped. Each version has these components:

  1. QEMU Linux images.
  2. Shell scripts used to launch the QEMU images.
  3. Daemons used to start the shell scripts at boot and keep them running.
  4. A CPU monitor shell script with an accompanying daemon that can start/stop the mining based on CPU usage and whether the Activity Monitor process is running.
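As an illustration of the launchd persistence just described, a daemon plist along these lines would do the job. The label and program path below are borrowed from the version 1 file listing later in this article; the file contents are our own sketch, not the actual malware file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Label and program path are illustrative, taken from version 1 names -->
    <key>Label</key>
    <string>modulesys.qemuservice</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Library/Application Support/.Qemusys/qemuservice</string>
    </array>
    <!-- Start the script at boot... -->
    <key>RunAtLoad</key>
    <true/>
    <!-- ...and restart it whenever it stops -->
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>
```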

The CPU monitor script can start and stop the mining by loading and unloading the daemon. If the Activity Monitor process is running, the mining stops. Otherwise, the script checks how long the system has been idle, in seconds. If it’s been longer than 2 minutes, it starts the mining. If it’s been less than 2 minutes, it checks the total CPU usage, divides that by the number of CPU cores, and if the per-core result is greater than 85%, it stops the mining. The script itself is a bit different across versions, but the general idea stays the same.
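The decision logic described above can be expressed as a small shell function. The thresholds come from the text; the function interface (plain numeric arguments standing in for the real probes of idle time, process list and CPU usage) is our own illustration, not the malware’s actual code:

```sh
#!/bin/sh
# Illustrative sketch of the cpumonitor decision logic.
#   $1 = idle time in seconds
#   $2 = total CPU usage in percent (summed over all processes)
#   $3 = number of CPU cores
#   $4 = 1 if the Activity Monitor process is running, 0 otherwise
# Prints "start", "stop", or "keep" (leave the current state alone).
decide_mining() {
  idle=$1; total=$2; cores=$3; am=$4
  if [ "$am" -eq 1 ]; then
    echo stop                           # Activity Monitor open: stop mining
  elif [ "$idle" -gt 120 ]; then
    echo start                          # idle longer than 2 minutes: start
  elif [ $((total / cores)) -gt 85 ]; then
    echo stop                           # per-core usage above 85%: stop
  else
    echo keep
  fi
}
```

In the real script, "start" and "stop" correspond to launchctl loading and unloading the mining daemon’s plist.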

After the installation is done, all miner-related installation files are deleted.

Figure 5. Installation of Polyverse.Music.Manipulator.v1.0.1.macOS.dmg

Figure 6. Polyverse.Music.Manipulator.v1.0.1.macOS.dmg setup instructions

Version 1

The miner files in the downloaded application package are not obfuscated in any way or placed in another package; they are installed alongside the software in the following places:

  • /Library/Application Support/.Qemusys
    • qemu-system-x86_64 – clean QEMU binary
    • sys00_1-disk001.qcow2 – Linux image (first)
    • qemuservice – shell script that launches the first image via the qemu-system-x86_64 binary (see Script 1 listing)
  • /Library/Application Support/.System-Monitor
    • system-monitor.daemon – launches first image via system-monitor binary
  • /usr/local/bin
    • .Tools-Service
      • sys00_1-disk001.qcow2 – Linux image (second)
      • tools-service.daemon – launches second image via tools-service binary
    • cpumonitor – starts/stops mining based on idle time and CPU usage
    • system-monitor – copy of qemu-system-x86_64 binary
    • tools-service – copy of qemu-system-x86_64 binary
  • /Library/LaunchDaemons
    • buildtools.system-monitor.plist – launches system-monitor.daemon
    • buildtools.tools-service.plist – launches tools-service.daemon
    • modulesys.qemuservice.plist – launches qemuservice
    • systools.cpumonitor.plist – launches cpumonitor

Script 1. qemuservice shell script

After the dependencies are copied over, all miner-related daemons are launched and then the actual software is installed:

  • qemuservice won’t launch the image if the Activity Monitor process is running. In fact, if it is running, it will unload the plist that it was launched by.
  • tools-service.daemon will launch the image only when qemu-system-x86_64 process is not running and after sleeping for 45 minutes.
  • system-monitor.daemon will launch the image only if an Intel i5, i7 or i9 CPU is detected.

These scripts use the same command to launch the QEMU image, only differing in names and the image path.
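The launch gates above can be sketched as small shell predicates. The helper names and their argument conventions are ours; the real scripts presumably inspect the process list and the CPU brand string directly:

```sh
#!/bin/sh
# Illustrative sketch only - these helpers are not from the malware.

# $1 = 1 if the Activity Monitor process is running, 0 otherwise.
should_launch_qemuservice() {
  [ "$1" -eq 0 ]
}

# $1 = the CPU brand string (on macOS, sysctl machdep.cpu.brand_string);
# the image is launched only on Intel i5/i7/i9 CPUs.
should_launch_system_monitor() {
  case "$1" in
    *i5*|*i7*|*i9*) return 0 ;;
    *) return 1 ;;
  esac
}
```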

We’ve found the following screenshot related to version 1 of the miner:

Figure 7. CPU consumption of QEMU with Little Snitch (source: https://imgur.com/a/sc3u6kk)

It is from Little Snitch indicating that some connections from the process qemu-system-x86_64 were blocked. Specifically, hopto[.]org (a free hostname service) is a C&C used by version 1 of the miner.

Version 2

Miner files are in data_installer.pkg inside the downloaded application package. data_installer.pkg is installed first, then the VST software. Before installation, a preinstall script removes version 1 of the miner; as seen in the listing in Script 2, it only does so when it detects a running qemu-system-x86_64 process.

Script 2. data_installer.pkg preinstall script that removes version 1

The following temporary files are created:

  • /Users/Shared
    • z1 – QEMU binary
    • z1.daemon – launches the QEMU image with the QEMU binary
    • z1.qcow2 – QEMU image
    • z1.plist – launches z1.daemon
    • z3 – CPU monitor script, little change from version 1 cpumonitor
    • z3.plist – used to launch z3
    • randwd – generates random names

After dependencies are copied over, the miner is installed. This time the names of QEMU binaries, plists and directories are randomized with the randwd script. The miner installation creates two copies of z1, z1.daemon, z1.qcow2 and z1.plist. For each copy, the following happens:

  • A directory with a random name is created in /Library/Application Support
  • The QEMU binary z1 carries the same name as the directory and is copied into /usr/local/bin
  • z1.daemon (see listing in Script 3) and z1.qcow2 are copied into this directory under their random names
  • z1.plist is copied with the name com.<random_name>.plist into /Library/LaunchDaemons

z1.daemon, z1.plist, z3 and z3.plist files serve as templates. References to other scripts, binaries, plists, etc. in these files are replaced by their corresponding generated random name.

A random name is also chosen for the CPU monitor (z3) shell script and its accompanying plist file. z3 is copied into /usr/local/bin and the plist into /Library/LaunchDaemons under the name com.<random_name>.plist.
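A minimal sketch of what a randwd-style name generator and the template instantiation could look like follows; the actual randwd implementation is not shown in this write-up, so everything here is an assumption:

```sh
#!/bin/sh
# Hypothetical sketch of randwd: emit an 8-character random lowercase name.
randwd() {
  head -c 512 /dev/urandom | LC_ALL=C tr -dc 'a-z' | head -c 8
}

name=$(randwd)
# References to "z1" inside the template files would then be rewritten to
# the generated name, e.g.:
#   sed "s/z1/$name/g" z1.plist > "/Library/LaunchDaemons/com.$name.plist"
echo "com.$name.plist"
```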

Script 3. z1.daemon shell script

Version 2 is a bit cleaner and/or simpler than version 1. There is only one QEMU image, with two copies made; same for the image launcher scripts, daemons and the cpumonitor. Even though version 2 randomizes its filenames and directories, it can only be installed once because the installation checks for running processes with accel=hvf in their command line.

From the version 2 applications we’ve checked so far, the SHA1 hash of the data_installer.pkg is always 39a7e86368f0e68a86cce975fd9d8c254a86ed93.

Version 3

The miner files are in an encrypted DMG file, called do.dmg, inside the application package. Once mounted, the DMG reveals a single package: datainstallero.pkg. This and the software package are then installed.

The package contents of datainstallero.pkg and data_installer.pkg from version 2 are more or less the same, but datainstallero.pkg adds two obfuscated scripts – clearpacko.sh and installpacko.sh – and obfuscates an existing script – randwd:

  • clearpacko.sh removes version 1 of the miner like version 2 does.
  • installpacko.sh installs the miner the same way version 2 does, except the comments have been stripped from the script.

The SHA-1 of do.dmg is likewise constant across samples: b676fdf3ece1ac4f96a2ff3abc7df31c7b867fb9.

Launching the Linux image

All versions use multiple shell scripts to launch the images. The shell scripts are executed by plists on boot and are kept alive.

  • Version 1 executes the following binaries (copies of qemu-system-x86_64) to launch the QEMU images: qemu-system-x86_64, system-monitor, tools-service.
  • Versions 2 and 3 use the same command, but the filename of the binary, the directory in Application Support and the QEMU filename are randomized.

All versions use the following switches:

  • -M accel=hvf to use the Hypervisor framework as an accelerator. HVF was introduced with OS X 10.10 and support for HVF was added in QEMU 2.12, which was released in April 2018.
  • -display none so the virtual machine runs without a graphical interface.

Since the image is launched without specifying the amount of RAM or the number of CPU cores, the QEMU defaults are used: 1 CPU core and 128MB of RAM. All versions can launch two images.
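Putting the switches together, the launch command has roughly the following shape; the binary and image paths are placeholders (they are randomized in versions 2 and 3), and only the two switches come from the samples:

```shell
qemu_bin=/usr/local/bin/qemu-system-x86_64                 # or a randomized copy
image='/Library/Application Support/.Qemusys/image.qcow2'  # hypothetical image path
# No -m or -smp switches, so QEMU falls back to 128MB of RAM and 1 CPU core
cmd="$qemu_bin -M accel=hvf -display none -drive file=$image"
printf '%s\n' "$cmd" | tee /tmp/qemu_cmd.txt
```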

Windows (version 4)

From the strings we extracted from the application, we define the only Windows version seen so far as version 4. As mentioned earlier, the logic is quite similar to the macOS versions. Each Windows application is packaged as an MSI installer that installs both the “cracked” application and VirtualBox. Figure 8 shows the trust popup for the VirtualBox driver when running a “cracked” VST installer from vstcrack[.]com.

Figure 8. Trust popup for a VirtualBox driver when running the installation of an application from vstcrack[.]com

VirtualBox is installed in its usual folder (C:\Program Files\Oracle); however, the attributes of the directory are set to “hidden”. Then the installer copies the Linux image and VBoxVmService (a Windows service used to run a VirtualBox virtual machine as a service) into C:\vms, which is also a hidden directory. Once the installation is complete, the installer runs a batch script compiled with BAT2EXE (see the unpacked listing in Script 4) to import the Linux image and run VmServiceControl.exe to start the virtual machine as a service.

Script 4. Batch script used to run the Linux virtual machine as a service

This method ensures the persistence of the miner across reboots. Indeed, VBoxVmService comes with a configuration file (see Script 5) in which it is possible to enable the AutoStart option so that the virtual machine is launched automatically at startup.

Script 5. Configuration file for VBoxVmService with AutoStart enabled
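Script 5 itself is not reproduced here; based on VBoxVmService’s documented INI layout, a configuration with AutoStart enabled looks roughly like this (the path and VM name are hypothetical):

```ini
[Settings]
VBOX_USER_HOME=C:\vms

[Vm0]
VmName=tinycore
ShutdownMethod=savestate
AutoStart=yes
```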

The OVF file included in the Linux image describes the hardware configuration of the virtual machine (see Script 6): it uses 1GB of RAM and 2 CPU cores (with a maximum usage of 90%).

Script 6. Hardware configuration of the Linux image

Linux image

The Linux image is Tiny Core Linux 9.0 configured to run XMRig, as well as some files and scripts to keep the miner updated continuously. The most interesting files are:

  • /root/.ssh/{id_rsa, id_rsa.pub} – the SSH key pair used to update the miner from the C&C server using SCP.
  • /opt/{bootsync.sh, bootlocal.sh} – the system startup commands that try to update the miner from the C&C server and run it (see Scripts 7 and 8):

Script 7. bootsync.sh

Script 8. bootlocal.sh

  • /mnt/sda1/tools/bin – main files and scripts used to update and run the miner.
  • /mnt/sda1/tools/xmrig – contains the source code of XMRig (from the GitHub repository).

The configuration of the miner is stored in /mnt/sda1/tools/bin/config.json and contains mostly the domain name and the port used for the mining pool, which can differ depending on the version (see examples in the IoCs section).
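XMRig reads a standard JSON configuration; a minimal fragment with the pool fields mentioned above might look like the following. The user and pass values are placeholders, and the pool URL is one of the defanged mining hosts from the IoCs section:

```json
{
  "pools": [
    {
      "url": "system-update[.]info:443",
      "user": "<wallet or worker id>",
      "pass": "x",
      "keepalive": true
    }
  ]
}
```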

The update mechanism is performed over SCP (secure copy) by three different scripts:

  • xmrig_update – updates the configuration of the miner (config.json);
  • ccommand – updates ccommand_update, xmrig_update (see Script 9), updater.sh, xmrig;
  • ccommand_update – updates ccommand;

From what we have seen, the miner configuration is updated once every day.

Script 9. xmrig_update

In order to identify a particular mining session, a file containing the IP address of the machine and the day’s date is created by the idgenerator script and its output is sent to the C&C server by the updater.sh script.
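A minimal sketch of that idgenerator step in shell; the file path, IP value and exact format are assumptions, since only the script’s purpose is known:

```shell
ip="203.0.113.10"                # in the real script, the machine's own IP address
day="$(date +%Y-%m-%d)"          # the day's date; the exact format is an assumption
printf '%s_%s\n' "$ip" "$day" > /tmp/session_id
cat /tmp/session_id              # this output is what updater.sh would send to the C&C
```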

Obviously, the best advice for protection against this kind of threat is not to download pirated copies of commercial software. There are, however, some hints that can help you identify when an application contains unwanted code:

  • A trust popup from an unexpected, “additional” installer (in this case the Oracle network adapter).
  • High CPU consumption by a process you did not install (QEMU or VirtualBox in this case).
  • A new service added to the startup services list (Windows) or a new Launch Daemon (macOS).
  • Network connections to curious domain names (such as system-update[.]info or system-check[.]services here).


macOS “cracked” applications (versions 1-3)

SHA-1 | Filename | ESET detection name | Version number
32c80edcec4f7bb3b494e8949c6f2014b7f5db65 | Native Instruments Massive Installer.pkg | OSX/LoudMiner.A | 1

Windows “cracked” applications (version 4)

SHA-1 | Filename | ESET detection name

Linux images

SHA-1 | Filename | Version number
39a7e86368f0e68a86cce975fd9d8c254a86ed93 | z1.qcow2 (renamed with a randomized name) | 2
59026ffa1aa7b60e5058a0795906d107170b9e0f | z1.qcow2 (renamed with a randomized name) | 3



File paths (macOS versions)

  • /Library/Application Support/.Qemusys
  • /Library/Application Support/.System-Monitor
  • /usr/local/bin/{.Tools-Service, cpumonitor, system-monitor, tools-service}
  • /Library/LaunchDaemons/{com.buildtools.system-monitor.plist, com.buildtools.tools-service.plist, com.modulesys.qemuservice.plist, com.systools.cpumonitor.plist}



Distribution host

vstcrack[.]com (137[.]74.151.144)

Download hosts (via HTTP on port 80)

  • 185[.]112.156.163
  • 185[.]112.156.29
  • 185[.]112.156.70
  • 185[.]112.157.102
  • 185[.]112.157.103
  • 185[.]112.157.105
  • 185[.]112.157.12
  • 185[.]112.157.181
  • 185[.]112.157.213
  • 185[.]112.157.24
  • 185[.]112.157.38
  • 185[.]112.157.49
  • 185[.]112.157.53
  • 185[.]112.157.65
  • 185[.]112.157.72
  • 185[.]112.157.79
  • 185[.]112.157.85
  • 185[.]112.157.99
  • 185[.]112.158.112
  • 185[.]112.158.133
  • 185[.]112.158.186
  • 185[.]112.158.190
  • 185[.]112.158.20
  • 185[.]112.158.3
  • 185[.]112.158.96
  • d-d[.]host (185[.]112.158.44)
  • d-d[.]live (185[.]112.156.227)
  • d-d[.]space (185[.]112.157.79)
  • m-m[.]icu (185[.]112.157.118)

Update hosts (via SCP)

  • aly001[.]hopto.org (192[.]210.200.87, port 22)
  • system-update[.]is (145[.]249.104.109, port 5100)

Mining hosts

  • system-update[.]info (185[.]193.126.114, port 443 or 8080)
  • system-check[.]services (82[.]221.139.161, port 8080)
Tactic | ID | Name | Description
Execution | T1035 | Service Execution | On Windows, the Linux image is run as a service with VBoxVmService.
Persistence | T1050 | New Service | Install the Linux virtual machine as a service with VBoxVmService.
Persistence | T1062 | Hypervisor | Install a type-2 hypervisor on the host (VirtualBox or QEMU) to run the miner.
Persistence | T1160 | Launch Daemon | The macOS versions use a Launch Daemon to ensure persistence.
Defense Evasion | T1027 | Obfuscated Files or Information | Some shell scripts are obfuscated, and some installers are encrypted in macOS versions.
Defense Evasion | T1045 | Software Packing | Use of BAT2EXE to pack the batch script in Windows versions.
Defense Evasion | T1158 | Hidden Files and Directories | The VirtualBox installation folder and the directory containing the Linux image are hidden.
Command and Control | T1043 | Commonly Used Port | Use of TCP ports 443 and 8080 for mining pool communication.
Command and Control | T1105 | Remote File Copy | Use of SCP (port 22 or 5100) to copy files from/to the C&C server.
Impact | T1496 | Resource Hijacking | Use of victim machines to mine cryptocurrency (Monero).

20 Jun 2019 – 11:00AM

You’d better change your birthday – hackers may know your PIN

Source: We Live Security Magazine On:

Read On

Are you in the 26% of people who use one of these PIN codes to unlock their phones?

You’ve likely seen a list of the top 25 passwords that get reused time and time again – “password” is a usual suspect – but what about phone PINs? How unique are the PIN codes we choose to stop cybercriminals from getting into our phones and from there into our most precious accounts?

People tend to lock their phones with a code, but what if someone knew that code or could work it out? Maybe they could guess it from frequently used PINs? Would they then be able to read your emails, send a WhatsApp message or view your Amazon basket?

Recent research from the SANS Institute found that the top 20 most common mobile phone PIN codes were (in no particular order):


They found that an astonishing 26% of all phones are cracked using these codes. There is a good chance that if your phone is stolen or lost, criminals could get into your phone within their first few attempts – even without knowing anything about you.

So why do people – including Kanye West – continue to use simple codes? Well, it might be best to answer this question first: When did you last change the PIN code to unlock your phone?

Most people have now had a smartphone, complete with a lock, for around a decade, and it must be said that in 2007, when the first Apple iPhone came out, we were more interested in its features than in discussing attack vectors.

Fingerprint readers were still a few years off in 2007, and with the code having to be entered perhaps 50 or even 100 times a day, you can see why people wanted to get into their phones quickly and easily.

The problem is that, even with the introduction of longer codes, Face ID and Touch ID, people rarely change their PINs and have settled on a code they use on every device – even though we now rarely unlock our phones with a PIN.

Another method people use to remember PIN codes is to use numbers that mean something to them. However, a threat actor relies on people who tend to have an “it won’t happen to me” attitude, so what if the person wanting to get into your phone knows a little about you? When phones have a 4-digit code, people will often use a year; when a 6-digit code is recommended, people often enter a memorable date to unlock their phone.

This is an extremely dangerous way to secure your most cherished device and allows any cybercriminal with some open-source research skills to trial possible codes to unlock your phone.

Why context matters

To give a little context about how easy it can be, I was recently at an event where I was giving a talk – ironically, about how to hack a business – and where I started discussing how cybercriminals can socially engineer passwords out of people. At that precise moment, a guy in the front row took his phone from his pocket and entered a PIN to unlock it. I noticed he entered a 6-digit code and I was able to view the last two digits, which were 1 and 4.

To most people this might sound like just two random numbers but if I add context to these numbers I might be able to work out the other four. I decided to go off-piste in my talk, so I asked his name. He obliged willingly, and I entered his name into Facebook. On his “about” page I found he was married but apart from her name, there wasn’t much else to take in. I clicked on his wife’s profile and went to her “about” page.

There I noticed she had lots of personal information open to public view, specifically the date she got married, which was the 1st of September, yes you guessed it, 2014. I then politely asked the gentleman if I could hold his phone and attempt to get into it and, although not happily, for the sake of the test, he allowed me. I entered “010914” into his phone and bingo, I was in! It was a live demonstration of what can happen in real life, and at that moment half the audience got out their phones and asked for the shortcut to change their phone’s PIN code.

What about Face ID or Touch ID? Won’t this fully protect us from attackers? Well, the short answer is no. Many people think that once they have a fingerprint reader or facial recognition on their device that they won’t need to be so hot on PIN code security. Remember that there is still a default code to get into your phone and a hacker can work out this code far more easily than cutting off your finger or replicating your face to open your device. (On that note, I once used a dead finger to get access into a phone but that’s one for another blog!)

When I used to work in the Digital Forensics Unit for the police, we had a wonderful tool that could get into Apple iPhones. (You can view the same machine in action here). Our code breaker would attempt all 4-digit codes incrementally from “0000” to “9999” without locking or wiping the phones. It took 4 seconds per attempt, so – ideally, to save time – we wanted to start the process at a number near where the PIN was likely to be.

We used to start the tool at “1970” and, more often than not, we would have access to the devices before it had reached “2010”. This is because so many people fall foul of using their date of birth, wedding year or the year where their child was born so they can more easily remember it.
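The arithmetic behind that shortcut is simple: at 4 seconds per attempt, a full sweep of all 10,000 four-digit codes takes about 11 hours, while covering the year range 1970-2010 takes under three minutes.

```shell
secs_per_try=4
full_sweep=$(( 10000 * secs_per_try ))                # 0000-9999: 40000 s, about 11.1 hours
year_range=$(( (2010 - 1970 + 1) * secs_per_try ))    # 41 year-codes: 164 s, under 3 minutes
echo "$full_sweep $year_range" | tee /tmp/pin_timing.txt   # 40000 164
```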

How to stay safe

The best countermeasure is to use a long, unique alphanumeric code to unlock your phone; then, as typing it each time can be time-consuming, turn on Touch ID or Face ID to speed up entry.

It might also be a good idea to mention here that you should also be aware of your surroundings and who might be watching your movements. Far too frequently on public transport have I seen people enter PIN codes, passwords, or even been on the phone shouting out credit card details including the three-digit CVV number on the back!

Finally, after backing up your device, you should add a further layer of security by turning on “Find My iPhone” for iOS or “Find My Device” on Android, which will allow you to wipe your phone remotely should it ever get stolen (anti-theft and remote-wipe features are also included in reputable mobile security solutions). Even though you may never see that device again, at least the criminals won’t be able to get into your device and look through your personal data and information.

19 Jun 2019 – 11:30AM

Instagram tests new ways to recover hacked accounts

Source: We Live Security Magazine On:

Read On

Locked out and out of luck? The photo-sharing platform is trialing new methods to reunite you with your lost account

Instagram is testing out a new, in-app process for users to regain access to accounts overtaken by cybercriminals.

In recent years, the site has been grappling with a growing problem of successful account-takeover attempts, including via apparent mass campaigns that we also wrote about recently (here and here). ESET research has also uncovered a number of Android apps in Google Play that were designed to steal Instagram credentials.

The platform’s new account-retrieval method is intended to do away with what has often been a laborious process that could involve long waiting times and back-and-forths with its customer support. The site could previously also ask you to supply a selfie in which you would hold up a sheet of paper with an Instagram-supplied code handwritten on it in order to prove you’re the legitimate account holder.

And yet, this hasn’t always helped victims get their accounts back. At any rate, this should be a thing of the past according to the new recovery process first detailed by Motherboard, which cited an emailed statement from the photo-sharing platform.

The ‘new order’

Under the new rules, if you repeatedly input an incorrect password – perhaps because your account has been invaded by a hacker who wasted no time changing the login credentials – the Instagram app will ask you for your contact information of choice. You could, for example, input the email address or phone number you used to sign up for the service, so that you can reclaim access to your account even if the ne’er-do-wells have altered the username or associated contact information. (The same prompt will appear if you simply tap the “Need more help” option on the app’s login screen.)

From there, you will receive a six-digit access code that will enable you to retrieve your profile.

The social platform also aims to address the scenario where the hackers have also overtaken either the email account or phone number tied to an Instagram account. “When you re-gain access to your account, we will take additional measures to ensure a hacker cannot use codes sent to your email address [or] phone number to access your account from a different device,” an Instagram spokesperson was quoted as saying.

Also part of the new safeguards is a mechanism to foil account hijacking aimed at grabbing high-profile aliases before holding the victims for ransom or selling the handles off for hefty gains on underground markets. Any changes to an account, including those made by its genuine owner, will result in a temporary freeze on the username, so that it “can’t be claimed by someone else if you lose access to your account”. This feature, which will give people some time to claim their accounts back, is available on Android at the moment but will also roll out on iOS soon.

Meanwhile, a human review will still be needed in “edge cases”, writes PCMag, including when hackers take control of both the email address and the mobile phone number tied to an (also hijacked) Instagram account.

Hard to hack

The Facebook-owned service may still fine-tune the new system over the next few months, as its wider availability remains unclear. Still, it’s best not to have to go through any account recovery process, streamlined or not.

Restricting who can view your personal information, locking down your account with a strong and unique password and an extra authentication factor, as well as being wary of messages targeting your credentials will go a long way towards staying safe on many a social platform. In addition, you may also want to refer to our 5 tips to help you stay safe on Instagram.

18 Jun 2019 – 10:16PM

Malware sidesteps Google permissions policy with new 2FA bypass technique

Source: We Live Security Magazine On:

Read On

ESET analysis uncovers a novel technique bypassing SMS-based two-factor authentication while circumventing Google’s recent SMS permissions restrictions

When Google restricted the use of SMS and Call Log permissions in Android apps in March 2019, one of the positive effects was that credential-stealing apps lost the option to abuse these permissions for bypassing SMS-based two-factor authentication (2FA) mechanisms.

We have now discovered malicious apps capable of accessing one-time passwords (OTPs) in SMS 2FA messages without using SMS permissions, circumventing Google’s recent restrictions. As a bonus, this technique also works to obtain OTPs from some email-based 2FA systems.

The apps impersonate the Turkish cryptocurrency exchange BtcTurk and phish for login credentials to the service. Instead of intercepting SMS messages to bypass 2FA protection on users’ accounts and transactions, these malicious apps take the OTP from notifications appearing on the compromised device’s display. Besides reading the 2FA notifications, the apps can also dismiss them to prevent victims from noticing fraudulent transactions happening.

The malware, all forms of which are detected by ESET products as Android/FakeApp.KP, is the first known to sidestep the new SMS permission restrictions.

The first of the malicious apps we analyzed was uploaded to Google Play on June 7, 2019 as “BTCTurk Pro Beta” under the developer name “BTCTurk Pro Beta”. It was installed by more than 50 users before being reported by ESET to Google’s security teams. BtcTurk is a Turkish cryptocurrency exchange; its official mobile app is linked on the exchange’s website and only available to users in Turkey.

The second app was uploaded on June 11, 2019 as “BtcTurk Pro Beta” under the developer name “BtSoft”. Although the two apps use a very similar guise, they appear to be the work of different attackers. We reported the app on June 12, 2019 when it had been installed by fewer than 50 users.

After this second app was removed, the same attackers uploaded another app with identical functionality, this time named “BTCTURK PRO” and using the same developer name, icon and screenshots. We reported the app on June 13, 2019.

Figure 1 shows the first two malicious apps as they appeared on Google Play.

Figure 1. The fake BtcTurk apps on Google Play

After installation, both apps described in the previous section follow a similar procedure. In this section of the blogpost, we will describe the novel 2FA bypass technique using the first app, “BTCTurk Pro Beta”, as an example.

After the app is launched, it requests a permission named Notification access, as shown in Figure 2. This permission allows the app to read the notifications displayed by other apps installed on the device, dismiss those notifications, or click buttons they contain.

Figure 2. The fake app requesting Notification access

The Notification access permission was introduced in Android 4.3 (Jelly Bean), meaning almost all active Android devices are susceptible to this new technique. Both fake BtcTurk apps require Android 5.0 (Lollipop) or higher to run; thus they could affect around 90% of Android devices.

Once the user grants this permission, the app displays a fake login form requesting credentials for BtcTurk, as shown in Figure 3.

Figure 3. The fake login form displayed by the malicious app

After credentials are entered, a fake error message in Turkish is displayed, as seen in Figure 4. The English translation of the message is: “Opss! Due to the change made in the SMS Verification system, we are temporarily unable to service our mobile application. After the maintenance work, you will be notified via the application. Thank you for your understanding.”

In the background, the entered credentials are sent to the attacker’s server.

Figure 4. The fake error message displayed by the malicious app

Thanks to the Notification access permission, the malicious app can read notifications coming from other apps, including SMS and email apps. The app has filters in place to target only notifications from apps whose names contain the keywords “gm, yandex, mail, k9, outlook, sms, messaging”, as seen in Figure 5.

Figure 5. Targeted app names and types
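The same keyword filter, re-expressed here as a small shell function for illustration; the real malware implements this check in its Android code, and the package names below are examples, not taken from the sample:

```shell
# Returns success (0) if the app name contains any of the targeted keywords
is_targeted() {
  for k in gm yandex mail k9 outlook sms messaging; do
    case "$1" in *"$k"*) return 0 ;; esac
  done
  return 1
}

{
  is_targeted "com.google.android.gm"  && echo "com.google.android.gm: forwarded"
  is_targeted "com.example.calculator" || echo "com.example.calculator: ignored"
} | tee /tmp/notif_filter.txt
```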

The displayed content of all notifications from the targeted apps is sent to the attacker’s server. The content can be accessed by the attackers regardless of the settings the victim uses for displaying notifications on the lock screen. The attackers behind this app can also dismiss incoming notifications and set the device’s ringer mode to silent, which can prevent victims from noticing fraudulent transactions happening.

As for effectiveness in bypassing 2FA, the technique does have its limitations – attackers can only access the text that fits the notification’s text field, and thus, it is not guaranteed it will include the OTP. The targeted app names show us that both SMS and email 2FA are of interest to the attackers behind this malware. In SMS 2FA, the messages are generally short, and OTPs are likely to fit in the notification message. However, in email 2FA, message length and format are much more varied, potentially impacting the attacker’s access to the OTP.

Just last week, we analyzed a malicious app impersonating the Turkish cryptocurrency exchange Koineks (kudos to @DjoNn35 for bringing that app to our attention). Interestingly, the fake Koineks app uses the same malicious technique to bypass SMS- and email-based 2FA, but lacks the ability to dismiss and silence notifications.

According to our analysis, it was created by the same attacker as the “BTCTurk Pro Beta” app analyzed in this blogpost. This shows that attackers are currently working on tuning this technique to achieve the “next best” results to stealing SMS messages.

Figure 6. Information about the fake Koineks app on Google Play

If you suspect that you have installed and used one of these malicious apps, we advise you to uninstall it immediately. Check your accounts for suspicious activity and change your passwords.

Last month, we warned about the growing price of bitcoin giving rise to a new wave of cryptocurrency malware on Google Play. This latest discovery shows that crooks are actively searching for methods of circumventing security measures to increase their chances of profiting from the development.

To stay safe from this new technique, and financial Android malware in general:

  • Only trust cryptocurrency-related and other finance apps if they are linked from the official website of the service
  • Only enter your sensitive information into online forms if you are certain of their security and legitimacy
  • Keep your device updated
  • Use a reputable mobile security solution to block and remove threats; ESET systems detect and block these malicious apps as Android/FakeApp.KP
  • Whenever possible, use software-based or hardware token one-time password (OTP) generators instead of SMS or email
  • Only use apps you consider trustworthy, and even then: only allow Notification access to those that have a legitimate reason for requesting it
Package name | Hash | ESET detection name
Tactic | ID | Name | Description
Initial Access | T1475 | Deliver Malicious App via Authorized App Store | The malware impersonates legitimate services on Google Play.
Credential Access | T1411 | User Interface Spoofing | The malware displays phishing activity and requests users to log in.

17 Jun 2019 – 11:30AM

GDPR one year on: Most Europeans know at least some of their rights

Source: We Live Security Magazine On:

Read On

On the other hand, a surprisingly high number of Europeans haven’t even heard of the landmark legislation

Most people in Europe have heard of the General Data Protection Regulation (GDPR) and are aware of at least one right guaranteed by the landmark rules, a special Eurobarometer survey has shown.

The European Commission polled 27,000 people in the European Union in March of this year to gather their views on data protection and GDPR itself, which came into effect on May 25th, 2018, and gives power back to EU citizens over how their personal information is processed and used.

The survey showed that two-thirds (67%) of the respondents have heard of GDPR, whereas 32% have not. Of the respondents who are aware of the regulation, there was a fairly even split between those who have heard of it and know what it is versus those who have heard of it but don’t know exactly what it is. The level of awareness varied widely between countries – from 90% in Sweden all the way down to 44% in France.

When it came to rights guaranteed by the regulation, nearly three in four people (73%) said they’d heard about at least one out of six GDPR-guaranteed rights that were specifically brought up in the survey.

One in three people (31%) were aware of all six, whereas 27% had never heard of any of them. The rights to access one’s data, to correct one’s data, and to object to receiving direct marketing rang a bell with most people. The right to ‘be forgotten’, to have one’s data moved from one provider to another, and not to be subject to a decision based solely on automated processing came next.

Similarly, the majority of people were aware of the existence of a public authority in their country that is responsible for protecting their rights regarding personal data.

“The results show that Europeans are relatively well aware of the new data protection rules, their rights and the existence of national data protection authorities, to whom they can turn for help when their rights are violated,” said the European Commission.

The EU’s executive arm also announced that it was launching a campaign to boost people’s awareness of their privacy rights, as well as encourage them to read privacy statements and optimize their privacy settings on websites.

Inscrutable and intractable

On a related note, the survey found that a perhaps surprisingly high number of Europeans (60%, to be exact) read websites’ privacy policies at least partly – with 13% saying that they actually read them in their entirety.

Meanwhile, most of those who never read them said that this is because the statements are too long or difficult to understand. “I once again urge all online companies to provide privacy statements that are concise, transparent and easily understandable by all users,” Věra Jourová, European Commissioner for Justice, Consumers and Gender Equality, was quoted as saying.

In fact, The New York Times has just published an analysis of privacy policies of 150 popular websites and apps. Interestingly enough, Google’s privacy policy, for example, became more readable after the introduction of GDPR. However, this was found to be at the expense of brevity, suggesting “an intractable tradeoff between a policy’s readability and length”, wrote NYT.

14 Jun 2019 – 04:00PM

Spain’s top soccer league fined over its app’s ‘tactics’

Source: We Live Security Magazine On:

Read On

La Liga has taken substantial flak for tapping into microphones and geolocation services in fans’ phones in a bid to root out piracy

Spain’s national data protection agency AEPD has slapped a fine of €250,000 (US$280,000) on the country’s top-flight soccer league, La Liga, for failing to make it adequately clear to users of its Android app that the app can activate microphones on its users’ phones as well as monitor their location, according to the Spanish daily El Diario.

You may recall our report from a year ago (exactly to the day, in fact) about La Liga’s rather unusual approach to tackling pirate broadcasts of soccer games – by enlisting the help of its app’s users. More precisely, the app would ask for access to the microphones on the fans’ handsets in order to record their surroundings and check if the captured audio fingerprint matched up with the sound of a soccer broadcast. Together with GPS data also collected by the app, this was intended to pin down the locations of bars and other public venues that might be showing games illegally. The functionality provoked an outcry, with many people claiming that the app was essentially turning them into spies and their phones into bugging devices.

Fast forward 12 months, and the Spanish data watchdog concludes that La Liga violated European Union rules about consent and transparency.

The app as available in Google Play

Here’s the kicker

While the app does request – twice, in fact, according to La Liga – user permissions to activate the microphone and GPS services, AEPD maintains that this is communicated in an “opaque” manner. Additionally, the agency says that consent should be requested every time the mic is activated, because the practice amounts to the collection of personal data. AEPD also said that La Liga violated the EU’s General Data Protection Regulation (GDPR) by failing to enable users to withdraw their consent at any time.

La Liga would have none of it, however. In a statement, Spain’s premier soccer league said that it will challenge the decision in court, noting that AEPD made no effort to understand how the technology works. The league also reaffirmed what it’s said before – that the technology doesn’t make it possible to listen to users’ voices and conversations and that there is no way to turn the sound footprint back into the actual content of the recording, hence there’s no collection of personal data to begin with.

La Liga also claims that the captured audio snippet is automatically converted into a binary code on the device itself. The code is then compared to a reference database and, if there’s no match between the two, the former code is discarded. The feature was introduced into the app with an update on June 8, 2018.

Either way, the league said that it will kill the functionality by June 30, although not exactly because it was told to do so by the data watchdog. Rather, La Liga called the feature “experimental”, adding that it won’t extend its contract with the technology’s provider after it expires at the end of this month. Nevertheless, the league said that it will continue to test new technologies in its fight against unlicensed broadcasts of soccer games, which it says cost Spanish soccer €400 million (US$450 million) in lost revenues each year.

The app has more than 10 million users, including 4 million in Spain. Its main functionality is to deliver scores, news and highlights from the top flight of Spanish soccer.

12 Jun 2019 – 10:29PM
