Data Privacy and Legislation In An Age of AI: An Interview with Dayle Duran

This interview was conducted in the Spring of 2023. It has been edited for brevity and clarity.

Harvard Undergraduate Law Review (HULR): Before we begin, can you tell me just a little bit about yourself, your role at Meta, or just your career in general?

Ms. Duran: Yeah, definitely. I started my career as a barred lawyer, but I don't practice in the traditional sense these days, which I think you'll find a lot in the privacy space, partly because it's kind of new and very amorphous. In terms of my current role, I'm a privacy policy manager at Meta.

I work on the ads and monetization team. I work with product teams, spotting privacy issues from concept to market, and I've largely worked in tech with SaaS companies and software. I worked at a healthcare SaaS company for a while, which was interesting too. We had an app that let providers interact with their patients as well as insurance companies, making sure that there are no gaps for individuals. And so I worked closely with product teams to issue-spot what privacy problems we were running into. Before that, I did some consulting that ran the gamut from big companies, such as retail companies, to smaller FinTech-related work.

HULR: Perfect. Thank you so much. We're just gonna jump right into it. There are three topics I want to touch upon today: general data privacy and information security, legislation, and the role of AI in all of this.

In recent years, we've seen a variety of data breaches, among them the SolarWinds hack and the LinkedIn database leak. So, what is your opinion on the current state of data privacy and information security for larger tech firms?

Ms. Duran: I think it's going to depend. Information security and privacy go hand in hand and there are a lot of overlaps, kind of like a Venn diagram. It depends on the company and its risk profile. At large, we live in an increasingly tech and data-driven world and economy. The more data there is, the more information you have about people. With that, you can either use it appropriately or inappropriately.

Then, there's a legal angle that goes alongside that, but the law almost always moves slower than technology. We're trying to superimpose legal frameworks on really complex data flows. I think that's probably true whether you're a mom-and-pop shop trying to reach end users, a big company like Target trying to sell stuff, or a large tech company. So, I think it's complicated right now, and it's only getting more complicated. Then you have regulatory regimes like the GDPR in the EU, which came out in 2018 and started to lend a consumer protection angle to hold somebody responsible for processing people's data.

Along with this idea, there are certain fundamental data rights that people should have, like access and deletion. And it depends on where you live, the expectation of privacy in a particular country, or the society you live in. So, at large, it's complicated.

HULR: It is certainly complicated. You mentioned the GDPR, and I want to go further on that. It regulates data protection and privacy for consumers. Do you think there should be an equivalent to that in the United States?

Ms. Duran: It's tricky. I'll give you a little bit of a privacy history lesson. I mentioned before how culture, society, context, and country impact people's expectation of privacy. So you're gonna find that people in Europe have a different expectation of privacy than people in the United States do. Now that's informed by certain historical events, one of them being the Holocaust: the Nazis kept diligent records and used information collected under one pretense to turn around and kill people. So there's a greater sensitivity, I think, in a lot of European countries, particularly regarding the amount of harm to the individual that can come from data collected for one purpose and then used for another.

Applying that to your question, should we have something here in the US like that? Yes. I think culturally the United States operates a little bit differently regarding what our expectation of privacy is as a person. It depends on social factors. I don't think in the United States we have the same expectation of privacy as people in Europe because we've had a different experience with that. We haven't had something like the Holocaust and the data collection aspects that come with it.

In the US it's more of a consumer protection issue than it is a human rights issue. You see that with certain laws like the CCPA in California, or the laws in Colorado and Virginia. We're also seeing Illinois and its Biometric Information Privacy Act. Should we have something similar here? Yes. Is it necessarily gonna look different in form? Yes, because the context is different. People's expectation of privacy is different, and it's probably going to emerge from more of a consumer protection bent than in Europe, where it is framed as more of a human rights issue.

HULR: I think that the consumer perspective is really important. You touched upon the CCPA. Other states are also moving towards or adopting a kind of similar legislation, as you mentioned, Virginia. Do you see them as effective? Are there any downsides to them that you can foresee? And overall, do you observe the US moving towards more state or federal control in terms of such data privacy issues?

Ms. Duran: I'll start with the last question. The role of federal law is to operate in spaces where the states can't, and I think the internet's certainly a great place for that. The internet does not know territorial bounds and information flows don't know territorial bounds. I think the way we're approaching it in the US is that based on the state that you live in, you have certain rights.

Having 50 different states take 50 different approaches is not a durable way to handle that. I think we've already had at least three or four different general approaches. Virginia's a little bit more business-friendly than, say, California. Colorado is kind of in the middle. I think at some point we're gonna see some sort of federal legislation. Privacy is one of those interesting areas where you can see a lot of bipartisan alignment. The way it manifests is different depending on the political party that is adopting and running forward with it.

So, in terms of aligning on a place for the federal government to act –– that's gonna be tricky. Is it gonna be lumped under the FTC, since that's underneath consumer protection? Or, what are the enforcement parts gonna look like? Unclear. I think making the laws is one thing, enforcing them is another. Ultimately, it doesn't make sense to continue with the state-by-state approach.

HULR: For those state-by-state approaches such as CCPA, do you think they're effective or do you foresee any downsides to them?

Ms. Duran: You know, I think they are only as effective as the strictest of them, out on the outer side of the bell curve. With the CCPA, and globally too with the GDPR, folks are less focused on complying with the LGPD in Brazil because they know they can meet all of its requirements by complying with the GDPR. In a similar way, you can meet almost all the requirements of, for example, the Colorado or Virginia privacy laws by adhering to the CCPA. So, each state's iteration of privacy law matters more for whether it mandates stricter requirements than for its finer details. If tomorrow West Virginia came out with an even stricter privacy regime, I think that would be more impactful and matter more. I think it's controlled by the most stringent law.

HULR: Going along the lines of the government, we often think of data privacy leaks and associate them with large tech companies such as Google, Meta, etc., or even places like Target, for example. But it's also important to remember that governments also suffer major security breaches. In February of this year, the U.S. Marshals Service suffered a major security breach in which hackers stole data regarding its administrative information, legal processes, and personally identifiable information. I was wondering, how do you view the security of government documents and information compared to the security of consumer data at large tech companies like Meta? Do companies like Meta have more robust measures to protect their data compared to larger governmental bodies?

Ms. Duran: I can't comment on what Meta does or doesn't have, but we can take another hypothetical like Google. So, it's interesting because government information is often more regulated than a regular consumer products company's data is. They have different requirements, so they have different protections that they implement. There are certain sector-specific requirements for security, and I think they're very different from government to tech to healthcare or finance.

So, it's difficult to compare which is better or worse because there are different obligations and expectations, and industry standards for each of them. With respect to government data leaks, anytime you're looking at the context of public civil organizations, they aren't always as well funded as the Microsofts of the world. Maybe they don't have the ability to have a large staff body to make sure their risk profiles are minimized. So, I think the resourcing of private companies versus the government can result in a different attack surface.

I think it's often unfortunate when the government gets hacked because it's usually people's information or some sort of sensitive information that is not meant to be generally available outside of specific requests. It's hard to comment on which is better or worse in a superlative sense because they're different contexts. Often government bodies are less resourced, so they have less ability to have the latest devices or a stacked team of security experts to make sure that their risk profile is minimized.

HULR: Historically, have you observed data breaches occurring more against governments, due to their lack of access to resources compared to large tech companies? Have you seen that the majority of attacks have been more tech-focused or government-focused?

Ms. Duran: It's hard to say at large because we don't always know what data breaches happen, and reporting requirements vary. Does the individual or entity even know they've been hacked? So it's hard to compare the quantity between the two. As I said, government agencies are often operating with less sophisticated tech, which makes them susceptible to attacks.

HULR: I see. Shifting to companies that have more resources, what do you think are some of the steps that they should take to improve their data privacy? Particularly in response to these recent events and also in response to a society that is becoming more and more technologically reliant.

Ms. Duran: Take it seriously, certainly. The biggest problem that I have faced working for tech companies, or just in general as a risk professional, is that it sounds like you're telling a company the sky is falling until the sky actually falls. You are always spending too much on privacy and security until you're not spending enough. So, I think the first thing that businesses should do is take it seriously.

There's something that we call security by obscurity, or just the fact that there are a bazillion people out there –– how are they gonna hack me? Or a bazillion businesses –– why would they pick me? But that doesn't necessarily make you safe, and it doesn't protect you from the damaging effects of an incident like somebody stealing your identity or somebody breaking into your company's financial systems, because those things can take a lot of time to fix, and also a lot more money to fix than it would've taken to prevent them. So, the first step companies should take is to treat it seriously. Hire somebody. Don't rely on sheer numbers or the probability that it won't happen to you.

HULR: I can see what you're saying. If you're standing in front of the executive board, you’re describing a scenario that statistically will probably never happen until it does happen.

Data is crucial to our identity. Oftentimes, a lot of these tech companies know more about us than we know about ourselves. We know that companies harness that data for targeted advertisements and other purposes. In late 2022, Google agreed to a $392 million settlement with 40 states over unauthorized location tracking. What do you think the role of tech giants is in confronting personal privacy concerns? Should there be policies that force them to disclose relevant information, or would that be overreaching?

Ms. Duran: So, this is a consumer protection issue, and Google might've violated the states' consumer protection laws by not being clear. This goes back to each state having something called a UDAP statute, which is short for Unfair or Deceptive Acts and Practices. The federal parallel is Section 5 of the FTC Act, and it's again primarily a consumer protection issue. It's not inherently bad to track people's location if they know that you're tracking their location, they've agreed to it, and they know what you're using it for.

It's not okay to use people's information in ways that they don't know about and haven't consented to, particularly in ways they don't have control over. The thrust of a lot of these privacy issues comes from that consumer protection angle: are you saying you're doing one thing while actually doing another?

HULR: Is there a way for consumers or maybe government bodies to ensure that what companies say they're doing with our data is actually what they're doing? And why might companies misuse data?

Ms. Duran: So, you know, in this Google instance, bringing this particular case is meant as a deterrent. In criminal law, we see the same thing with deterrence. You have a punishment, and it's meant both to deter others by making an example and to set guidelines for success. In this sense, they are striking back saying, "Hey, we're gonna slap your hand if you are using data in a way other than what you have disclosed."

Deterrence –– you fine companies and make it known that this is not the way. Or, you publish consent orders to show how you analyze what is okay versus what is not. And then of course there are actual laws that are supposed to be preemptive. Under the CCPA, for instance, there's an opt-out button, and the law specifies what that opt-out button is supposed to look like.

In terms of why companies might misuse data, I often think it depends on the situation. But very often companies are just moving too fast and very often they're not aware of their obligations, which goes back to my earlier comment about actually taking privacy and information security seriously. Sometimes they're not aware of all of the user stories that they're working with in a particular situation, and so they neglect to think about how this impacts the user. For example, “Would this thing be clear to them?”

When I work with product teams, they often just haven’t thought of something, or they get excited about the idea and do not necessarily consider what it means for somebody's privacy or how it could be misused. I think that's another angle that it comes out of. Companies come up with cool things and it's great for their original purpose, but there’s a way they can be misused.

For instance, there was the whole episode of Target sending pregnancy ads to a pregnant teenager; they predicted the pregnancy from her purchasing habits. So again, I think it's very seldom intentional that companies misuse information. I think intentional misuse often comes more from bad actors. Money is valuable. Your data is valuable. The more data you have, the more insights you can gain.

HULR: Absolutely. I think data is an indication of behavior. It's an indication of your identity. With all of that data nowadays, and the development of AI, which is taking the world by storm right now, how has AI impacted your work?

Ms. Duran: You know, it has impacted my life more recently, and my husband's as well, since he's a software engineer. He's using ChatGPT to come up with best practices in Python coding, etc. So that's kind of how my life has experienced AI recently, along with generative AI –– such as the cool pictures that you can make or the way it can imitate an artist's style.

It's really cool technology. Again, this is another one of those situations where the law is 10,000 steps behind, and so is our ability to forecast how things might go wrong. You get a bunch of risk professionals in a room and they might tell you some of the issues generative AI could have, such as deepfakes. Then there are the implications for copyright, trademark, and intellectual property with generative AI. Can an algorithm own a copyright to something? Is that what copyright was even meant for? Copy and paste that across the world, where rights over things generate money for people and companies. We're gonna see issues around that in the future. Then there are also privacy issues where people's information is being used in ways that they didn't agree to.

There are tons of issues and questions. There are also issues on the other side of that when you're talking about algorithmic bias. When you're talking about an AI system, obviously it had to be trained on something. Did you have permission from the individuals or the owners of the information it was trained on? Was it trained on a diverse enough data set? It's easy to rely on these things as if they're a person or an expert in the field that can crawl the web and deliver all of these really specific answers quickly.

But the second we start believing that and stop double-checking it, I think that's when we run into issues too. Computers are fallible. There are intellectual property issues about people using it too. Because if you're not paying for something, you are the product, or you're at least giving data. Everything we type into ChatGPT is collected and fed back into the algorithm to spit out something different the next time. You know, that could become an issue, for example, if I'm a software engineer working for Google, and I ask a bunch of really specific questions that could somehow help someone somewhere reverse engineer some of the systems at Google.

There are any number of ways that it can impact a company's intellectual property, rights, and individuals’ privacy.

HULR: In terms of what AI can truly do, I think we're still at the very beginning stages, and there are, as you mentioned, a lot of issues or potential threats that could arise. But at the same time, can you see examples in which AI can be used to enhance data privacy, for either small or larger tech companies, or non-tech companies?

Ms. Duran: You know, say I wanted to create an advertising campaign, and I wanted it to vaguely feature a Latina woman and a bottle of ketchup. If you could generate that image, then suddenly you don't have to use somebody's face and share their face all over the place. You're just sharing an idea of a face. I can see that being useful for marketing teams.

In terms of privacy implications, being used for good –– that one's trickier. Obviously, it can be used for good by cutting down the time it takes to do something, or by putting additional information at your fingertips that you would've had to work really hard for. It can be used for really cool things, and people are talking about that now. If you go back to machines, they're there to supplement human abilities or do things faster, even back to sewing machines or the industrial revolution. So we can fairly predict that that's what AI and generative AI are gonna do.

AI can also cloak certain information. It just depends on the use. It depends on the protections that are placed at its onset because as soon as you feed something into an algorithm, all bets are off about how it's gonna be used or how it's gonna come out again.

HULR: With so much uncertainty and also a potential danger with this coming wave, do you foresee a collaboration between the federal government and the data privacy sector to implement more robust policies? Why or why not? Or, what are some of the obstacles that we might face in that sort of collaboration?

Ms. Duran: The legal frameworks are often very reactive. Just in the same way as how a police officer can't do anything until somebody's been harmed. Then they investigate afterward. The punishment is trying to prohibit it from happening again, like deterrence.

Governments and industry sectors largely work in the same way with something like this. I think we're going to see the harm before we see a lot implemented to prevent that harm. It's gonna be like whack-a-mole. And, with the slow wheels of justice, the law moves incrementally. It's imperfect too, especially when you're talking about these complex frameworks where you're trying to superimpose legal concepts on messy data flows, especially with AI and machine learning.

HULR: With the law so far behind in terms of AI and data privacy, which is such a complex system, what would you say to individual people who are concerned about their data being misused, their data privacy, or their online footprint? What are some of the recommendations that you might give to the common individual to protect themselves in the event that something might happen?

Ms. Duran: That's kind of a hard one to answer because I think it depends on the individual's tolerance. I am a data privacy lawyer. Do I read through every single terms-of-service document, or every single privacy policy in my life? No. Those are long and not super user-friendly documents. In terms of what people can do to protect themselves, I think they should understand what they're consenting to when they share their data, and make sure that they're verifying sources.

I think everybody at this point gets text messages from fake Amazon saying their account got canceled. Slowing down and using common sense I think helps avoid a lot of those easy slips where you click on something and you accidentally give sensitive personal information to somebody who wants to use it to steal your identity.

It doesn't hurt to check privacy settings. Google has a neat feature where it prompts you every so often to review your privacy settings and make sure everything looks good. Same thing with your security settings. Don't ignore those things. I think it's always helpful from an information-security perspective, too, to sign up for identity monitoring. Again, that's one of those things where you're screaming about the sky falling until the sky actually falls, and then it made sense because the sky fell. It doesn't seem to make sense to spend money on identity monitoring until something bad has happened. But you could find out faster that something bad has happened, like if somebody has put your Social Security number on a bad site. I think those are definitely the key things that people should consider.

HULR: Absolutely. The bottom line is that data is going to continue increasing day by day. With it comes the rise of AI and neural networks that are already beginning to predict things like financial market trends, modeling, and behavioral prediction. How do you see the importance of your field sort of in the coming decades concerning data privacy?

Ms. Duran: I think it's hugely important, specifically around privacy. There are a number of sci-fi movies that came out in the last 30 years that I think are really good reasons why we should care about privacy, generative AI, and AI in general. Neural networks that predict human behavior, or predict things about people that haven't come to pass yet, are particularly dangerous. They're also vulnerable. For example, facial recognition software has misidentified Black women 30% of the time. So, if that's then being used to identify people who potentially committed a crime, you've got an increased chance of misidentifying the person. Getting arrested can cost somebody their job and create real-world harm. So I think data privacy is hugely important, and it's gonna continue to be even more important in the coming years as we've got massive amounts of data generated all around us. I think being judicious about when and where you're sharing information is also important. Do you need to post every location you arrive at on social media? I don't know.

Privacy as a school of thought is going to be around, and the field is going to be around to sort through the bad things when they go boom. Until then, I think companies, governments, and people need to be really careful about when they're using something that's predictive about a person or about a thing that hasn't happened yet. That's when we get into predicting the future, and when you're predicting the future, all bets are off.

Christopher Shen

Christopher Shen is a member of the Harvard Class of 2026 and an HULR Staff Writer for the Fall 2022 Issue.
