ISMG Editors: Apple's Antitrust Showdown With the Feds
Legal Expert Jonathan Armstrong Unpacks Issues in Big Tech, Ransomware, AI and More
Anna Delaney • March 29, 2024
In the latest weekly update, legal expert Jonathan Armstrong joined three ISMG editors to discuss the Department of Justice's antitrust lawsuit against Apple, ransomware payment dilemmas and AI copyright infringement fears - highlighting the intricate legal issues shaping big tech and cybersecurity.
Armstrong, adjunct professor at Fordham Law School, joined ISMG editors - Anna Delaney, director, productions; Mathew Schwartz, executive editor of DataBreachToday and Europe; and Tony Morbin, executive news editor, EU - to discuss:
- The antitrust lawsuit against Apple - its background and potential to reshape tech industry practices and affect cybersecurity;
- The evolving legal landscape of ransomware payments, underscored by recent sanctions and the critical role of board-level risk assessment;
- Challenges faced by AI developers in balancing the need for extensive training datasets with copyright compliance - and why tech firms should proceed with caution and due diligence.
The ISMG Editors' Panel runs weekly. Don't miss our previous installments, including the March 15 edition that goes inside the politics of cybersecurity and the March 22 edition on how the quantum era will reshape cybersecurity.
Transcript
This transcript has been edited and refined for clarity.

Anna Delaney: Welcome to the ISMG Editors' Panel. I'm Anna Delaney, and today we're addressing legal and ethical challenges that are reshaping the tech landscape. Our discussion will cover the U.S. Justice Department's antitrust lawsuit against Apple, the complexities of paying ransomware demands in bitcoin, and the legal challenges CISOs encounter, along with the alleged breach of Catherine, Princess of Wales' medical records, which highlights data privacy concerns in the era of GDPR and the DPA. Joining us to discuss all this is lawyer Jonathan Armstrong, adjunct professor at Fordham Law School and formerly partner at Cordery Compliance. Jonathan, it's a real honor to have you join us. You have a new role - do tell us about it. What are you up to these days?
Jonathan Armstrong: I've left Cordery, and I'm doing a few new things - something new to be announced in the late summer. In the meantime, I'm helping a business called Elevate get off the ground. It trains non-executive directors - whether that's somebody who hasn't been a non-executive director before but has skills a board might need, such as cybersecurity or information technology skills, or an existing board that needs training to fill the gaps it has.
Delaney: Jonathan, we have a few questions for you. At this point, I'll hand over to Mat.
Mathew Schwartz: I know that you've been tracking bitcoin and the payment of bitcoin ransoms - not that we advocate this sort of thing - but obviously it does happen. When a business decides that it needs to happen, these are dicey waters, because you could be running afoul of sanctions. If you're a CISO, how do you go about handling this without getting yourself into hot water?
Armstrong: That's a really great question, and it's a really challenging situation. I've always thought it required more thought to pay a ransom, and I know, statistically, a lot of organizations are still of the view that paying gets rid of the issue. I don't think it ever does. The use of the sanctions regime by the U.S. and the U.K. authorities has upped the stakes. There are a couple of reasons for that. The first is that attribution is always challenging in ransomware, because these gangs change shape so much that you're never clear which individuals you're dealing with behind those gangs. There's always a risk that you're dealing with sanctioned individuals, or with entire states, such as North Korea, that are sanctioned. That risk sits with the actual threat actors themselves. And then bitcoin adds another level of complexity, because some of the mixers are also sanctioned, and some of the banks at the other end might be sanctioned as well. Not only do you have a lack of clarity on attribution and who you are paying, but you also have a lack of clarity on how you are paying them - the route from your cash to where and when they cash out at the other end.

There are no easy answers for CISOs. A lot of the organizations that I see doing this maturely are having those discussions pre-breach, not after, and meeting as a board to decide where their risk tolerances are. Many of them are almost reversing their default position, to: "We will not pay unless and until we find that it's safe to do so and there's a compelling business reason." The other thing that I'm not sure about - and I don't know whether we have the evidence to back this up at all - is whether it's changing threat actors' behavior as well. Most of the breaches that I've been involved with recently are attacks on vendors of what the customers usually thought were non-core services. We can argue whether they were core or not, but they're things such as clocking systems, payroll providers and common providers. Ransomware threat actors play a numbers game there: "If I compromise 300, then maybe 100 will pay the ransom, and that's still worth my effort." What I'm interested in - and I suspect we haven't got any statistics yet; whether we will, I don't know - is whether some ransomware threat actors, because they're intelligent and follow developments like this, will divert their efforts more toward non-U.K., non-U.S. targets, because they think it will be an easier ride to get the ransom paid. If I'm a ransomware threat actor and I've got a choice between compromising a French-based payroll provider or a U.K.- or U.S.-based one, I'll pick the French one instead. I don't know the answer to any of that yet, but I think that's the other interesting bit. So while we might say that sanctions are almost a shot in the dark, I think they are changing corporate mindsets, and there's at least an assumption that threat actor behaviors could change as well.
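Armstrong's point about payment routes can be made concrete. The sketch below, in Python, shows the kind of pre-payment sanctions screening a compliance team might run before any decision reaches the board. The address and mixer lists, function names and data shapes are illustrative assumptions for this article, not real OFAC tooling; in practice, screening would draw on official sanctions data and specialist blockchain-analytics services.

```python
# Minimal sketch of pre-payment sanctions screening: check a ransom
# destination address and its likely payment route against locally
# maintained denylists. All entries and names here are illustrative
# assumptions - real screening uses official OFAC SDN data and
# professional blockchain-analytics tooling.
SANCTIONED_ADDRESSES = {
    # hypothetical entry standing in for wallet addresses published
    # on sanctions lists (OFAC has designated specific addresses)
    "bc1qexamplesanctionedaddress000000000000",
}
SANCTIONED_MIXERS = {"example-mixer.io"}

def screen_payment(destination: str, route: list[str]) -> list[str]:
    """Return the compliance red flags for a proposed payment path."""
    flags = []
    if destination in SANCTIONED_ADDRESSES:
        flags.append(f"destination {destination} appears on the denylist")
    for hop in route:
        if hop in SANCTIONED_MIXERS:
            flags.append(f"route passes through sanctioned mixer {hop}")
    return flags

if __name__ == "__main__":
    flags = screen_payment(
        "bc1qexamplesanctionedaddress000000000000",
        route=["example-mixer.io"],
    )
    for flag in flags:
        print("BLOCK:", flag)  # escalate to legal/compliance; do not pay
```

As Armstrong notes, a clean screening result is not a green light: attribution is murky, and the decision still belongs with legal, compliance and the board.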
Schwartz: Fascinating, so many angles to that. We're obviously waiting to see how a lot of this shakes out, as you say.
Armstrong: But again the simple message for the CISO, which we've said endlessly, is don't walk this journey alone. This isn't the CISOs call in most corporations, and you need the compliance team, you need the legal team, you probably need the board to do the heavy lifting on deciding if we are going to pay, how we're going to pay.
Schwartz: Great advice! One more question for you before I turn you over to my colleagues. There has been a lot of discussion recently about a certain royal personage's healthcare records. We talk about data breaches all the time, and sometimes we talk about them involving high-profile individuals. It seems that one of the recent high-profile individuals affected by a data breach may have been Catherine, Princess of Wales. There have been some questions about medical issues, unfortunately for her, and there seems to have been a perhaps delayed notification to the ICO that somebody inside a healthcare facility was attempting to snoop on her records. What's your reaction as someone who's followed these data protection statutes for so long? Is this a surprise to you? Human nature can sometimes trump even the strongest of data protection rules, it seems.
Armstrong: I think that's definitely true. There are often a number of aspects to cases like this, and we've been involved in some. The legal position is somewhat easier since the changes to the Data Protection Act in 2018. We've always had criminal offenses under data protection legislation, pre-GDPR and post. That's not common across the EU, and it's not part of GDPR; it's the U.K.'s wrapper around GDPR, to some extent, and the carrying across of old laws. There are potential offenses under the Computer Misuse Act if you access a system for which you have authorized access for an unauthorized purpose, and there are also potential criminal offenses under Section 170 of the 2018 Data Protection Act. Section 170 is quite useful in that you can also commit a criminal offense if you're the recipient of data and you refuse to hand it back. Section 170 could come into play here if it is true that the records were taken for a U.S. news outlet: if the outlet is served with a Section 170 notice and told to hand the documents back, it could commit a criminal offense. The offense can also be committed by a director or manager, so it can be committed by individuals as well.

I think cases like this are relatively common. It's not often that someone is taking the medical details of a princess to sell a story - I don't think that's common. It's much more common in areas like accident claims. Every couple of months or so we see a prosecution where, as a general rule - I don't want to overgeneralize - a boyfriend works for an accident repair shop and gets his girlfriend to pull a list of recent road traffic accident patients, then mails them to try to sell them car hire or accident claims services, or whatever that might be. That's relatively common. Extracting data to move from one job to another is also relatively common, and it became more common during the pandemic.

The particular challenge for health service organizations in circumstances like this is that - again, without overgeneralizing - people who work in clerical positions in hospitals tend to be really poorly paid, and I guess American news outlets after a scoop tend to reward pretty highly. There's always this balance: whenever you've got employees with access to super sensitive data, you don't want to pay at the absolute bottom of the pay scale, because then it's easier to get them to do bad things. We see that particularly in the area of outsourcing. If we're outsourcing something to Manila, where the rate of pay is relatively low, then the cost to bribe that individual is lower as well, as a general rule - what it takes to bribe you is a function of your salary. We've seen cases back in the day where Indian contact center workers, for example, were asked how many records they would give for a Snickers, and one guy gave a floppy disk full of bank customers in exchange for a Snickers. There's always that difficulty.

The last thing I'd say on this is that we know the ICO is on the case. It said on March 20 that it had received a report - maybe it was late, as it should have been reported within 72 hours. The hospital might not be off the hook: it has to take technical and organizational measures to prevent data theft. It's a known thing that whenever you've got celebrity patients, it's more of a risk. They have to have in place a training program, and online training will not be adequate for these risks.
If I were them, I'd want to see a face-to-face training program for those individuals with access to this data, I'd want to see access controls, I might want to see some sort of heuristic-type system running over the network looking at access, and I'd definitely want to see very detailed access control measures to make sure that only people with a need to know access that data. If the hospital hasn't done all of those things, then there's a risk that the hospital might be fined too. The ICO has previously fined hospitals, pharmacists and others involved in medicine for incidents in which employees have been careless with data. So they're not off the hook yet, from what little we know at this stage.
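To illustrate the "heuristic-type system" Armstrong describes, here is a minimal Python sketch of one such check: flagging accesses to VIP-flagged patient records by staff outside the patient's documented care team. The log format, field names and care-team lookup are hypothetical, for illustration only, not any hospital system's real API.

```python
# Minimal sketch of a need-to-know access check over an audit log:
# flag accesses to high-profile (VIP-flagged) patient records by
# staff who are not on that patient's care team. All names and
# structures are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AccessEvent:
    staff_id: str
    patient_id: str
    action: str  # e.g. "view", "print", "export"

# Hypothetical reference data: which patients are flagged as
# high-risk targets, and who is authorized under need-to-know
# access control for each flagged record.
VIP_PATIENTS = {"P-1001"}
CARE_TEAM = {"P-1001": {"S-77", "S-81"}}

def suspicious_accesses(log: list[AccessEvent]) -> list[AccessEvent]:
    """Return events where a VIP record was touched by someone
    outside the patient's documented care team."""
    return [
        e for e in log
        if e.patient_id in VIP_PATIENTS
        and e.staff_id not in CARE_TEAM.get(e.patient_id, set())
    ]

if __name__ == "__main__":
    log = [
        AccessEvent("S-77", "P-1001", "view"),   # authorized clinician
        AccessEvent("S-99", "P-1001", "print"),  # no documented care role
    ]
    for event in suspicious_accesses(log):
        print(f"ALERT: {event.staff_id} performed '{event.action}' "
              f"on VIP record {event.patient_id}")
```

In practice, such alerts would feed a review queue rather than block access outright, since clinicians sometimes have legitimate emergency reasons to open a record.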
Schwartz: Well, great lessons can be learned from this and other incidents. Thank you so much. I'm going to hand you over to Tony.
Tony Morbin: Jonathan, I got very interested when you said you're getting involved in training boards on cybersecurity and those areas. My question relates to a recent U.K. government survey, which showed that many boards are underengaged in cybersecurity and that there's a lack of cyber expertise on boards. So my question is - I'm really bowling you underarm here - how can we rectify this situation? But also, are cyber insurance and directors and officers liability insurance undermining our ability to hold boards accountable?
Armstrong: They're really great questions. We are seeing more of a move toward board responsibility. If we look at things like DORA and the U.K. equivalent, there's more of a concentration on individual accountability and on those at the top being accountable, and that's probably a good thing. There is definitely a lack of cybersecurity and tech skills on boards - otherwise I wouldn't be spending my time getting Elevate off the ground. And that's not just a U.K. issue; there's an EY study that says - from memory - 56% of Fortune 100 boards have a gap in terms of cybersecurity skills. It definitely is an issue.

D&O insurance is often seen as the panacea - we don't have to do things well because we've insured against doing them badly. But insurers are being much more assiduous in asking questions of organizations, and obviously some are still struggling to get cover, or cover at the right price. The lack of awareness is an issue. Whenever you're involved in a major data breach, usually the non-executive directors will want to be involved. If there is a sense that the executives on the board should have done more to prevent the data breach, then you might find that the non-executives are leading the investigation into the breach and leading the response. I've been in a situation, for example, where I had to show the non-exec director the board wanted to lead the response to quite a technical data breach how to switch his iPad on to start the meeting.

We do need to upskill boards. That's obviously upskilling existing board members so that they understand the risk; in many cases, it will be bringing in new board members with the diversity of background and skills that will enable the board to respond. Maybe the last thing I'd say is that the time to learn all this stuff isn't in the heat of a breach. You've got to rehearse, and you've got to rehearse as a board. You've got to do your playbook work and figure out what the roles and responsibilities are. You've also got to educate the board on the need to respond quickly and to look at all the various issues involved.
Morbin: I'd like to follow up with my other question, also in the area of accountability and responsibility, and I'm thinking about NIS2, where they actually have named people with criminal liability. Considering the diverse range of stakeholders involved in cybersecurity risk, and now generative AI risk, should cybersecurity responsibilities be distinct from AI security tasks? Who should ultimately be accountable for overarching security risks at a board level? Or is it personal at all - is it going to be a risk committee?
Armstrong: I spoke at two conferences on two consecutive days the week before last - one to cybersecurity professionals and one to chief compliance officers. I asked, "Who's responsible for things such as NIS2? Who's responsible for DORA?" All of the compliance officers said the CISO; all of the CISOs said the compliance officer. And I said to both groups, "When you go back to the office, if you're the chief compliance officer, take the CISO out to lunch." I have no dog in the fight as to who it is, except that I would say that if you're an organization that follows the traditional compliance lines of accountability, then your compliance officer probably shouldn't be the one taking active decisions; they should be checking that those decisions have been taken. As a result, first-line responsibility probably does rest somewhere with the CISO.

I'd probably agree with you that AI maybe needs a different set of decision-makers. Whenever we look at GDPR, regulators are getting more acute about looking at conflicts of interest. GDPR says that a data protection officer can do other duties, but those other duties must not conflict with the role as DPO. I think we will end up with a situation like that with AI as well, to some extent. The EU AI Act has now passed, but it isn't going to be in force for two years; businesses can volunteer to adopt it early, but few will. I think we need those same checks and balances, and that same distance from the business, with AI. Obviously, in some cases it will be the CISO - for example, where it's adopting AI tools to help with the security posture.

For most organizations, there needs to be a period of reflection. I've been working this morning on something for a large corporation that's altering its code of ethics across the whole business to put in provisions about AI: How does AI fit within its compliance and ethics framework? Who's responsible? I suspect for most organizations it comes down to the board, and the board needs to have the right skills to be able to understand AI. That isn't saying, "We don't have the skills; we're going to opt out," because shareholders will demand that businesses adopt AI when it's sensible to do so. We're already seeing pressure in that environment. In the Horizon report, the suggestion seems to be that proper use of technology-assisted review - an AI tool in the e-discovery software - would have helped. From what I hear, there's credible evidence to suggest that if they'd used that AI functionality, the review would have been quicker, cheaper and more accurate, with potentially fewer consequences for the postmasters and postmistresses involved. You can't just opt out of AI; you can't put your hands over your ears and pretend it isn't happening. Boards need that level of skill to distinguish where their pounds and dollars are being spent properly and what the risks are.
Morbin: I think you're right - that example of the two opposite answers people gave shows it hasn't been decided; it's still up for grabs at the moment. I'm going to hand you back now to Anna. Thanks very much, Jonathan.
Delaney: This is inspiring so many stories, Jonathan. I want to ask you about a significant story that erupted last week in the tech world, and that's the antitrust lawsuit filed by the U.S. Justice Department and a coalition of states against Apple, saying that Apple has protected its stronghold in the smartphone market at the expense of user privacy and security. I'd love to hear your take, Jonathan, on what this means. What are the implications of this case? How likely is the DOJ to prevail? And what might it mean for iPhone users and cybersecurity firms?
Armstrong: I predict that the case won't be over this year. I also think it's part of a long-running battle: we're going to see more and more antitrust and competition law aspects in the tech world. About four or five years ago, I interviewed Max Schrems, the privacy campaigner, and we had a really good debate about this. The proposition we discussed was almost a triangular form of tech regulation: data privacy and data protection regulators; fair trade regulators - the FTC in the U.S. and the CMA in the U.K., for example; and competition law regulators. In some respects the DOJ - while it has many arrows in its quiver - is effectively acting as an antitrust regulator in this case, and some of the staff on this matter have done Federal Trade Commission and more conventional antitrust cases in the past. I don't think it's a surprise that this is happening; we're going to see more and more impact of competition or antitrust law in the tech world. In AI, particularly, we're going to see a lot of that, because so much of the generative AI world is dominated by so few players.

Antitrust law is a cumbersome weapon to use, because it takes time and because we're arguing about dominant position and anticompetitive behavior. But this isn't the first rodeo for Apple: the EU started an investigation into Apple in June 2020, and this is the third investigation into Apple for anticompetitive practices. I've not read the entire complaint - it's 88 pages long, I think - but from what I understand, it's playing to the public audience a bit more; it's in more simplified language than some of the earlier complaints. Some of it is about almost emotive factors - whether people who have Android rather than iOS on their mobile devices feel less privileged because their messages appear in green and not in blue. Is this some new form of tech apartheid, if you like? Those behavioral factors will be interesting. Conventional monopoly cases have a lot of science going into them, but that's economists' work. I've borrowed an office from an outfit called Pontus Arsalan in London today; they have teams of analysts who look at antitrust-type cases and market share. That's a science, and it's a relatively developed science. Whether you like the form of an iPhone better than an Android device is more of a theory. I know from my lockdown courses in design that many say Jony Ive borrowed that design from Braun's postwar work. How much of that is Dieter Rams at Braun? How much of it is Jony Ive? And how much of it is there for antitrust regulators to get their fingers on?

We'll see more impact of antitrust, and this has a real impact for every other business as well. If you're setting up a new AI system on an OpenAI platform, and OpenAI is going to face allegations that it's a monopoly, then that might affect your AI operations. Part of Apple's defense of some of its practices, such as the walled garden for apps, is that they make things more secure. A resolution might be that Apple has to open up the walled garden more, which might make iOS less secure, if Apple is to be believed. So it's not just something we buy popcorn and ringside seats for and watch as disinterested spectators. We've got skin in this game as organizations and corporations, because it could affect our security stance, and it could affect some of the stuff we're building now.
Delaney: Lots of really useful insights there. As my final question, I want to ask you about the Common Crawl challenge. Common Crawl presents a vital resource for AI developers, offering an extensive dataset that can enhance AI training and development. However, this wealth of information often includes copyrighted material, raising complex legal and ethical challenges around copyright law. How can AI developers navigate this intricate balance - leveraging extensive datasets such as Common Crawl without infringing copyright, especially given the dilemma of needing such material for AI's effectiveness but potentially violating copyright in the process? What are your thoughts?
Armstrong: As far as Common Crawl is concerned, there are particular issues. The message is that if you're training your AI tool to do something, you need to watch the quality of the data and where that data is coming from. For those of you not familiar with it, there's a research report out by the Mozilla Foundation that suggests about 80% of generative AI is trained on Common Crawl data. Common Crawl is obviously a big dataset, but it wasn't originally set up to provide training data. It has all sorts of purposes, and it partly originates from a move to give alternatives to big tech - yet it's big tech that seems to be using Common Crawl the most. To simplify, different people have directed Common Crawl to gather different bits of the internet. For example, researchers studying hate crime got Common Crawl to gather examples of hate speech for their academic research. If you're doing research, the copyright rules are different than if you're an out-and-out for-profit organization. Some of the copyright issues originate there, and some of the issues with chatbots saying bad things - Replika AI, for example - originate because that hate speech is included. That material was there for researchers studying hate on the internet and its impact, so we've got an issue where Common Crawl perhaps was fit for its original purpose but probably isn't fit for the purpose it's been shoehorned into. Just because it's a big dataset doesn't mean it's the right dataset to train on.

The other issue we're seeing is with rights holders, who have said: "Hang on - our stuff got included in Common Crawl, perhaps under copyright exemptions, perhaps when it shouldn't ever have been included. But we made it clear in the access agreements on our website, or the terms and conditions, that we didn't allow that stuff to be used for commercial purposes, which it now appears to be. If we ask ChatGPT a question, we'll get some of our content back; if we ask generative AI to write something in the style of a particular author, we might find that it's been trained on material it shouldn't have been trained on." We're going to get a lot of litigation on this, and it will probably produce different results in different countries, because copyright law differs by country.

At the same time, we're getting what you might call special pleading by the tech bros and some of the startups around the periphery, including people such as OpenAI, saying, "We're special, and conventional rules shouldn't apply to us, because AI is for the greater good," and "Hey, we're on a mission!" I'm not sure that stacks up. Copyright is a fundamental right, and if we start saying that people can rip off ISMG films and use them to train Sora, then the creativity of you and Tony and Mat is undermined, and the financial model disappears as well. We've got to be careful about enriching MIT graduates by accident by giving them a pass on laws that have existed for a long time, just because they're spinning what some might call copyright theft as something a bit different. If I said, "I'm in Trafalgar Square at the moment; I'm just going to walk into the square and sell Adidas T-shirts, and I should have a special pass because I'm going to give some of the money to humanity," I'd probably still be arrested, and I'd probably still be sued by Adidas. Why is it different?
Because I met up with somebody in a dorm at MIT and came up with a whizzy tech idea?
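For developers acting on Armstrong's advice to watch where training data comes from, here is a minimal Python sketch of provenance filtering: dropping crawled records whose source domains have stated restrictions on commercial training use. The record format, the opt-out registry and all names are assumptions for illustration, not real Common Crawl tooling.

```python
# Illustrative sketch: filter a crawled corpus against rights holders'
# stated opt-outs before using it as training data. The record format,
# the opt-out sources and all names are assumptions for this example.
from urllib.parse import urlparse

# Hypothetical opt-out registry: domains whose terms of use or
# machine-readable signals disallow commercial training use.
OPTED_OUT_DOMAINS = {"example-news.com", "example-fiction.org"}

def domain_of(url: str) -> str:
    """Extract the host from a record's source URL, dropping 'www.'."""
    return urlparse(url).netloc.lower().removeprefix("www.")

def filter_corpus(records: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Keep only (url, text) records whose source domain has not opted out."""
    kept = []
    for url, text in records:
        if domain_of(url) in OPTED_OUT_DOMAINS:
            continue  # honor the rights holder's stated restriction
        kept.append((url, text))
    return kept

if __name__ == "__main__":
    corpus = [
        ("https://www.example-news.com/story", "copyrighted article text..."),
        ("https://open-data.example.gov/report", "public-sector report text..."),
    ]
    for url, _ in filter_corpus(corpus):
        print("training on:", url)
```

Filtering like this doesn't settle the copyright questions Armstrong raises - whether the material should have been crawled at all is a separate issue - but it documents a good-faith effort to respect stated restrictions.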
Delaney: Well, thanks for the thorough answer there, Jonathan; you've brought a lot of clarity to a rather complex issue. Before we wrap, just for fun, we have one final question for you all. If a genie granted you one legal cybersecurity wish, what would it be? Not an illegal one - a legal one. Think Aladdin's cave for a moment. Mathew and Tony, do you want to jump in?
Schwartz: A legal, not an illegal wish? I would hold ransomware threat actors - just to use that buzzy cybersecurity term - to account, even if they live in Russia. I think the most direct route there might be to have Russia extradite its citizens to face charges for crimes allegedly committed abroad.
Delaney: Love that! Tony?
Morbin: Having a wish reminded me: I once asked my then seven-year-old son what he would do with three wishes. His first wish was to go to heaven when he died. The second was to be wise, so he'd know what to do with his third wish. I thought that was a really good one. Given the ever-evolving nature of cyberthreats, a single wish won't necessarily cut it, so I'd go for having the wisdom to always implement the best course of action available to prevent or react to future threats.
Delaney: That's a very wise answer. I'm going for this: imagine if we could have one global, universally accepted set of rules for cybersecurity and data privacy - just one playbook that everyone, everywhere follows. It would make things so much simpler for companies trying to protect data across all these different countries.
Armstrong: My passion at the moment is getting more cybersecurity and tech skills onto boards. If I could wave the magic wand, that's what I'd be doing. It's an invidious comparison: most boards take it as a given that they have somebody with robust accountancy qualifications to be there as the sounding board for the FD, and it would be laughable if somebody suggested a public company could do without somebody with a financial or audit background on the board. Cybersecurity is a systemic risk to most organizations, yet we don't necessarily insist on - and we sometimes don't value - cybersecurity skills at a board level. That's got to change, because while it doesn't, the maximum number of threat actors can escape and run a merry dance.
Delaney: Jonathan, we've really enjoyed this. Thank you so much for the wealth of information and knowledge you've shared. Please do join us again soon.