Regulating the Future of Precision Medicine

Carmel Shachar is the Executive Director of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School, where she is also a lecturer. Her area of expertise is law and health policy, with a particular focus on regulating access to care, such as telehealth and digital health technologies, for vulnerable communities. Shachar has co-edited several books, including Disability, Health, Law, and Bioethics. She is currently working on a book entitled Consumer Genetic Technologies: Ethical and Legal Considerations. Shachar earned her B.A. in Bioethics and Religion from Wellesley College and holds a J.D. and M.P.H. from Harvard University.

This interview was conducted in Fall 2022. It has been edited for brevity and clarity.

Harvard Undergraduate Law Review: Could you please tell us what precision medicine is?

Carmel Shachar: Precision medicine is the idea that we should be able to use genetics and predictive algorithms to better understand when a medicine or treatment might work for an individual, as opposed to looking at the broader population to say that a medicine works.

HULR: Thank you. What motivates you personally to work in this field, especially on the legal side of it?

CS: I'm very interested in systems around medicine. I think it's a very worthwhile calling to become a physician to provide care. But, what I'm more interested in is looking at who is getting that care. How can we better ensure that the largest number of people gets the best possible quality care?

HULR: In your opinion, what are the three main benefits that precision medicine offers?

CS: I think there are some medicines or treatments that may not show enough of a benefit at the population-wide level but could have benefits for specific subpopulations. I think it's also about pushing the medical system to think of people as individuals and to tailor treatment to each individual. And in some ways, it could be a more efficient way of delivering treatment and care to say: "We want to make sure that limited resources are used in a way that maximizes the best outcomes, so we're going to identify which patients are going to react really well to a treatment."

HULR: In the book "Disability, Health, Law, and Bioethics," your co-authors explore how legal and social rights should be afforded to disabled people. Do you think it's possible for precision medicine and an increased focus on genetics to create new forms of discrimination against disabled people?

CS: I think about this a lot, being a lawyer and an ethicist who works on innovation in healthcare and in biomedical technologies. I think the truth is that very few tools are themselves inherently biased or inherently discriminatory. But when we are not careful to implement them in a way that addresses bias and minimizes discrimination, if not eliminates it, then it's very easy for those things to creep in. For example, Professor Ziad Obermeyer published a great paper a few years ago examining an algorithm that was meant to identify which patients might need additional resources. His analysis showed that the data used to build the algorithm had a lot of underlying bias in it, especially in terms of how patients of color received services and what their outcomes were. Algorithms don't have discriminatory intent; they have no intent at all. But if you put garbage in, garbage comes out. This applies to a lot of technologies: if you're putting in garbage, if you're putting in discrimination and bias, then chances are the results you get out are garbage.

HULR: Yes, thinking about the difference between intention and impact in terms of these technologies is really important. Going off of that, how would you describe the ways in which we can legally combat the possibility of these technologies being used in a discriminatory way?

CS: One needs to really understand the ecosystem of biomedical research and healthcare delivery, because you can't just pick one point or one intervention and say that it is going to solve all discrimination. For example, we may want to say that any algorithm used to make clinical decisions needs to undergo some sort of regulatory review, maybe with the FDA, maybe with Health and Human Services, to demonstrate that it is not biased. Then we may say that we also want providers, who are in some ways the end users, the hospitals who buy these services, to have the legal responsibility to evaluate the products, rather than just assuming that a product they are buying is unbiased and that they do not need to worry. We may also want to empower patients who feel they have been harmed with some sort of private right of action to bring lawsuits, because we think that private law claims will strongly encourage the other actors in this space to deal with bias issues. If none of these things happen, there isn't a natural incentive for developers to minimize bias; there's only an incentive for them to show some sort of results. Sometimes that will lead to unbiased results, and sometimes it may lead to biased results.

HULR: You were saying earlier how technology does not have intent and operates on the information it is given by its developers. The people who create these technologies can bring in their subconscious biases, which can lead to unintended outcomes for all actors. What do you think are some ways to prevent this from happening, in terms of how the teams that create these technologies are built, so that biases are kept as small as possible?

CS: From the start, there needs to be a lot of emphasis on ethics by design: we want to train developers, we want to train computer scientists, and we want to train researchers to think about ethical design and a diverse set of stakeholders, so that the teams who build these products are diverse. For example, I am a cisgender woman; that's my lived experience, and sometimes, talking with my male colleagues, I will bring up aspects of gender that they never really had to think about before. If there's nobody on the development team who has my lived experience, are they going to realize when something is overlooked, or when something is missing? Potentially not, and that goes for a whole lot of biases. But then I think there also needs to be more done at every level: we want regulators to be really aware and to think of bias as something that needs to be combated. We want providers and patients to become really sophisticated and to say that any product that isn't explicitly built and implemented in a way that avoids bias is unacceptable.

HULR: This really ties into the idea of making sure that there is diversity of thought at every level of medicine and law where people are contributing to these technologies. The next question steers away from this in a different direction. In the 1930s, eugenics was influential in American culture, and pseudoscience shaped many of the laws that were put into place, which led to widespread forced sterilization and the removal of people's liberties. In the wake of precision medicine and more advanced genetic science, do you anticipate that some of these laws could resurface? Or do you think that the data and the science are now stronger than the pseudoscience of eugenics?

CS: (...) You want to think that we are making forward progress, but at the same time, there are a lot of ideas and theories and decisions where, at the time, people thought they were making the right decisions. Looking back, we think that was awful, that it caused a lot of human suffering, and that it is just not an acceptable decision or an acceptable theory. So I would like to think that we are more sensitive than we were in the age of eugenics. But I'm sure that, given 70 years, somebody will be sitting there saying, "I can't believe that we let people do XYZ," and we can't see it now, because we're in the moment.

HULR: Now we are going to look at your new book, "Consumer Genetic Technologies: Ethical and Legal Considerations". Could you tell us a bit about your argument in that book, specifically, the ethical lens that you're using to examine consumer genetics?

CS: So this is an edited volume that I co-edited with three other people, with about 20 contributions from various authors. The idea was that consumer genetic technologies, which right now mostly means genetic testing, like 23andMe, but also fetal screening, and which in the future might look more like genetic editing given the way CRISPR is going, have a lot of implications for identity, for family building, and for how we feel the course of our lives will play out. It was really time for that discussion. I will say some of that was motivated by the fact that I did 23andMe testing, and my family did as well. We discovered that I and some of the other women in my family have genes that predispose us to breast cancer. So what does that mean about my mom's medical choices? What does it mean about my medical choices and my sisters'? I now have a daughter, so I understand firsthand the impact that these new products can have. This isn't just a party trick. This is something that has potentially heavy implications for people.

HULR: Thank you for that answer. It really sounds like this book, or your work in general, can have a great impact on people's choices about getting their genome analyzed, because it may well be that some people, even when it would be beneficial, still might not want it. So, what specific impact do you hope this book, or your argument specifically, will have on the use of consumer genetics?

CS: I think very often there's a perception that lawyers and bioethicists who work on the frontier of biomedical sciences are reactionary, saying: we have this technology, but here's why we shouldn't use it, here's why it's problematic. One thing that I've really enjoyed about my work, and about the Petrie-Flom Center, is that we don't think innovation is a dirty word. We think: we have new technology that's really exciting, so let's talk about how it could be really, really beneficial. There's a piece, I believe, in the book that talks about testing for an elevated enzyme that might indicate a higher likelihood of developing Alzheimer's, which right now has no real treatment. Finding out that you might develop Alzheimer's is, in some ways, a terrible thing to know about your future, but it can actually be really empowering to say: "I know that I need to have my legal documents in order, I know that I need to have a clear plan for care, I don't want to put this on my family, let me take control of my life." A lot of the work I do is to ask: "How can we get the best possible outcomes from innovation, while avoiding some of the worst possible outcomes?" And that feels really rewarding to do.

HULR: How do you anticipate that the law will adapt to protect the privacy of patients who are using or will use precision medicine?

CS: I think the law is still working this out, and some of it, too, is that with changing technology come shifting cultural expectations. We have some data privacy laws, such as HIPAA, that are really bound up in ideas of traditional medicine. In a lot of ways, precision medicine gets covered really well there, because a lot of it tends to be delivered in the doctor's office or in the hospital, with medical records. But there are other sorts of innovations, like using Siri or Alexa to monitor your wellness, where, because they sit outside of traditional medical systems, we don't yet have a lot of data protection built into our system.

HULR: So, in the event that parents were, for example, to have their children take one of these tests and receive information that they're genetically predisposed to certain illnesses, or if the parents themselves were to take a test and see that they're genetically predisposed: should they consider whether they want to have children? Or, if they already have children, should they be telling their child that information? Do you have any guidance on that? How should we be thinking about this in terms of ethics and genetic testing?

CS: First of all, I will say, there often isn't a one-size-fits-all answer. We might say that we, as a society, have such a strong preference for a particular answer that we should have a law: sometimes yes, sometimes no. But when it comes to really personal things like this, I think there's a range of ways to interact with these technologies. I'm somebody who is in her late 30s at this point, so I've had a lot of friends who have had children, often later because of their professional careers, and who have therefore been offered genetic testing to screen for Down Syndrome (...), and I have seen the different choices they made. I have a friend who is very Catholic. Her family has been very involved in the church, and she and her husband take a lot of meaning from their religious affiliation. She said that she was not doing this testing because it would not impact her decision whether to carry the pregnancy to term, so why would she do it? That is one way to think about the technology: "It doesn't serve me, so I don't need to use it." I have a friend who probably would not terminate a pregnancy, but said: "I want to know because I want to be prepared." That's another individual preference. And I have a lot of friends who said: "I want to know, because there are some conditions where there's enough suffering that I don't want to put my child through it, and so I would take action." All of these are very individual preferences. The preference of my friend who decided that genetic testing was not the right choice for her would not be the right decision for me, somebody who wanted to know and felt that there were some conditions I was not comfortable letting a child of mine suffer through. So I think some of the challenge is determining: when do we need to make a law? And when should it be left up to the individual?
And when it's up to the individual, how do we give them the resources and empower them to make that decision? My friend who rejected genetic testing thought about it for a really long time and spent time talking to me, for example, because of the work I do. So she made a really informed choice. But imagine somebody who was part of a religious organization that just told her, "You should not get genetic testing, because we as a church are against abortion." If that doesn't reflect her individual preference, then where are we going wrong, that we're not giving somebody the tools to make the right decision for themselves?

HULR: What do you think are some of the greatest barriers to implementing precision medicine, or consumer genetics, based on the uncertainties that people will be considering, or the impacts that this testing can have?

CS: So I think there's a variety of challenges. First of all, the science could always stand to be improved. Precision medicine is still in its infancy (...). We have these genetic tests, but are we necessarily testing for the right things? Do we really fully understand what we're saying when we say: okay, you have this gene, therefore it means XYZ? That, I think, can be a real challenge. For example, 23andMe tests for some variants of BRCA1 and BRCA2 that lead to a heightened risk of breast cancer, but they don't test for all variants. The variants that they test for tend to come from populations that are really well represented in genetic studies, like Ashkenazi Jews. So my variant showed up when I did 23andMe. I have a friend whose family history is incredibly, strongly predictive that she has some sort of gene that predisposes her to breast and ovarian cancer. All of the women on her mother's side of the family have had breast or ovarian cancer, often at young ages. But when she did 23andMe, nothing showed up, probably because it was a variant that they don't test for. How do you communicate that, to say that just because you get a negative result doesn't mean you're in the clear? On the flip side, I have a variant that raises my chance of breast cancer, but it doesn't guarantee 100% that I'll have breast cancer. So how do you educate me? Do you just say, "Okay, we think there's a higher chance, so you should get screened, but it's not 100%, because most genetic testing isn't"? I think some of that also speaks to issues of equality and access that show up so much in our society. I have the money to do 23andMe, and I have the education to understand the result.
That gives me the social capital, and I have the good health insurance, to preemptively go to a cancer geneticist and say, "I got this test result; what do I need to do to make sure that I can be alive for as long as possible?" Those are all things that come from the particulars of my situation, and I'm not sure that's replicable for a lot of women out there. Certainly, when you add in issues of socioeconomic class, issues of racism, issues of homophobia and transphobia, you start to add on a lot of barriers to being able to take advantage of these new technologies.

HULR: So now we want to move on to talking about the Diagnosing in the Home Initiative of the Petrie-Flom Center, which was launched in 2021. We're wondering: how did the COVID-19 pandemic motivate this research initiative? And what have been some of the major successes and challenges in translating medical services into home health care so far?

CS: It very much came from the COVID-19 experience, where we were seeing a real boom in telehealth; everybody was using telehealth because it was a way to avoid the risk of infection. We thought some of the easiest care to translate is ongoing care, where you already have a physician and you already know, say, that you're diabetic. On the flip side, some of the hardest care to translate into the virtual space is that early stage, where maybe you don't know that you're at risk of diabetes, and maybe you don't have a physician who thinks to test your A1C. We thought that was a really good challenge and question, made interesting also by the fact that so much of what's going on in the digital-at-home space is often done under the guise of wellness. It's almost pre-diagnostic: we're not going to diagnose you, but we'll let you know if you're well, or if you maybe need to go get a diagnostic test. And then there are issues, again, of equity and equality: we know that people of color tend to be less connected into care, and that the medical system has failed them time and time again. So how do you lower the barriers to being successfully diagnosed at home, so that people who aren't already connected into care aren't shut out of the telehealth boom?

HULR: What are the main ethical implications that the initiative is focusing on?

CS: So we think a lot about privacy, because (...) everybody talks about the home being a sacred space and the home being private. When you look at the law, there's a lot about whether the government can search your home, because we think there are heightened privacy expectations there. That's a big ethical challenge. It comes up, too, because at home very few of us exist in a vacuum. If you have a technology at home, you're using it, but maybe your brother's using it, maybe your friend who's staying with you is using it, and what does that all mean? And then we think a lot, again, about that medical device versus wellness product divide. In some ways it can be good; some things don't need to be medical products. But in some ways it can be, I think, a little deceiving to consumers, who think that because they're buying something that might be counting their coughs or tracking their heart rate, they're buying a medical device, and that it comes with certain guarantees of safety and efficacy that they're not getting if it's a wellness product.

HULR: Does home health care have the potential to be cheaper than the status quo? If yes, how so? And if not, how would increased costs be a major barrier to the implementation of home health care?

CS: So I think you could look at this question in a couple of ways. You could ask, ‘Overall, is home health care cheaper for the healthcare system?’ I think, if implemented correctly, it can have major cost savings, especially right now, when the health care workforce is experiencing such a labor crunch. I believe 112,000 health care workers left during the pandemic. So any technologies that can push health care into the home, do remote patient monitoring and remote diagnostics, and just deliver care more efficiently are, I think, really necessary, because we need to move from a twentieth-century model of care to a twenty-first-century model of care.

If you're asking, ‘Is it cheaper for the individual?’, that's a harder question, because a lot of this is not reimbursable by insurance; especially if it's not a medical device, insurance often doesn't pay for it.

It’s also important to consider: if you're building all of these additional functionalities into smart homes, well, who tends to have a smart home, and who doesn't? In the same way that there's now an increased awareness that food is medicine and housing is medicine, that these are social drivers of health, in twenty years, or maybe even sooner, we may talk about access to broadband internet as a driver of health.

HULR: Thank you. Our next question builds off of what you mentioned about different drivers of health. It seems that we’re starting to move away from older ways of viewing health and understanding the complexities of its intersectionality.

In terms of responsibilities for making sure that access to this care is equitable, do you have any opinions on where that responsibility would fall or how that responsibility should be distributed to make home health care as equitable as possible?

CS: So I think again, it has to really be across the system. Every stakeholder needs to own it. Regulators need to have it as a priority. I think political leadership: governors, presidents, congresspeople and state legislators, need to view it as something that they need to hold people accountable for. I think you also need to have providers say ‘It is a goal of ours to provide equitable care’.

Then we need to focus on how we can do that and push our teams to deliver equitable care. There's also some debate over whether you should call patients consumers of health care, because it creates a very capitalist framing. Health care is not sneakers; people can't just as easily vote with their values for which health care products they do or don't consume. I always say, when push comes to shove, you're not asking whether your chemo is sustainably manufactured. But whenever possible, I like to say that as a healthcare consumer and healthcare patient, it is important to me that the medical system has those values of equity and equality implicit in everything it does. How can we do that? I think there are patient advocacy groups that are holding people accountable in a great way.

HULR: Thank you. We will move on to the last question now.

We want to end with a takeaway, something that you would like readers to know. What do you believe to be the biggest question health care providers and lawmakers should ask themselves in order to decide whether the advantages of new medical technologies, such as precision medicine or consumer genetics, outweigh the uncertainties?

CS: I think that a good question might be ‘Have I listened enough’?

That means listening to the people who built the technologies and saying ‘Do I really understand what this is’? It's listening to the people who use the technologies. It’s listening to the people whose lives could be affected by the technologies to gain a full sense of what happens when we just implement this [technology] ad hoc.

It means understanding what we are trying to avoid and what we are trying to maximize. I don’t think you can do that unless you consider as many different perspectives as possible, so that you're not just touching one part of the elephant, but you're really seeing the whole of the elephant.

Laura Wegner and Cassandra Brewster

Laura Sophie Wegner is a member of the Harvard Class of 2025 studying Economics with a Secondary in Psychology. She is from Germany and Bulgaria, which sparked her passion for international law.

Cassandra Brewster is a member of the Harvard Class of 2024 and an HULR Staff Writer for the Fall 2022 Issue.
