
Experts Testify on Big Tech Accountability - PART 3 | CSPAN | January 2, 2022 5:33pm-7:01pm EST

5:33 pm
-- what's that? we are going to check to see if we have any republicans on remote, because i am willing to stay and get the last two done. ok, we are going to take a recess and we will be back right after our votes. sorry about that.
5:34 pm
[indistinct conversations] chairman doyle: welcome back. we are going to recognize mr. volokh. unmute yourself. >> can you hear me? can somebody pull up the powerpoint?
5:35 pm
ok. are the powerpoints up by any chance? chairman doyle: i think that our staff is putting it up. let's hold on a second here. ok, we are going to get started. we are still trying to get that up. you can start your testimony. mr. volokh: thank you so much
5:36 pm
for inviting me. it's an honor to be asked to testify. i was asked to be technocratic here, to talk about the particular language of some of the bills and identify the things that may not be obvious about them. i will start with the justice against malicious algorithms act. one important point to think about is that it basically creates a strong disincentive for any personalized recommendations that a service would provide, because it withdraws immunity for recommending information -- can i see that, please? -- if the provider is making personalized recommendations and such recommendations lead to physical or severe emotional injury.
5:37 pm
that means it would be a huge disincentive for youtube and those kinds of entities against giving recommendations based on information about you, about your location, about your past search history, because they might be worried that they would be liable for that information. the incentive would be to give generic recommendations, the generally popular material, not personalized, or to recommend big-business-produced material, which is likely to be safe and whose producers could compensate the platform if there is a lawsuit. so the consequence is mainstream media would win and user generated content would lose, in that some creator may be putting up things lots of people like, and the platform would decline to recommend it, or would no longer be inclined to recommend it, once
5:38 pm
they are subject to liability. you could think that is good or bad depending on how you think about user generated content, but i think it would be a consequence. next slide. so, now i will turn to the preserving constitutionally protected speech act. the thing that is not a surprise is that it clearly authorizes state laws that protect against discrimination. those laws may currently be preempted by 230, which can be read as giving platforms the ability to block any material they find objectionable. this modification would allow states, if they wanted to ban discrimination by platforms, to do so. there could be an interesting first amendment problem there, a hard question, but at least it would remove the section 230 obstacles to those kinds of laws that require platforms to treat
5:39 pm
all opinions equally. next slide. another thing about the statute, about the bill, is it would strip away immunity when the provider utilizes an algorithm to post content to a user, unless the user knowingly and willfully selects an algorithm. all suggestions stem from algorithms: if a service recommends the most popular things, that is an algorithm; if it recommends a random thing, that is an algorithm. so the real question is what it would take for a platform to comply with the knowing selection requirement. if something like clicking 'i agree that this will be selected by an algorithm' would be enough to comply, then the bill would not do much harm, though i am not sure it would do good to require everybody to take the extra time to agree to the algorithm. on the other hand, if it requires an explanation, or an
5:40 pm
array of choices available to users, that could be a problem, because computers cannot work without algorithms, so it could limit the recommendations a platform can supply, or produce litigation about what counts as knowing and willful selection. next. um, the third major feature of the preserving constitutionally protected speech act is it would require an appeals process and transparency. there is a lot to be said for the value of a transparency requirement, even one imposed on big businesses, when the platforms are so essential to american political life. at the same time, it depends on how transparent it has to be. the bill requires that the company clearly state why content was removed. what if they say it is hateful? why are you saying that? we said hateful, is that clear enough? what about pornography?
5:41 pm
it is not pornography, it is art. is that clear enough? the same goes for clearly stating what counts as a reasonable or user-friendly appeals process; the bill does not define 'user-friendly,' and that is not a legal term. next. the safe tech act. i want to say a few things about this. there is no immunity under the act if the provider accepts payment to make the speech available or funds the creation of the speech. that means paid hosting services would be stripped of immunity. so, hosting services like blogging software, those kinds of things, would not be able to charge, or else they would be liable. they would have to be free services, advertising supported; i am not sure that is a good thing to require, but that is what the bill would require. it would also mean a company would be liable for anything
5:42 pm
posted by creators it funds through shared revenue. so, youtube shares advertising revenue with creators, and it would be liable in that situation. i'm not sure that is a good idea. it may not be intentional. chairman doyle: can you wrap up the testimony? you are a minute and a half over. mr. volokh: let me just close, and i will be happy to answer questions later. chairman doyle: let's see. we want to recognize mr. lyons. mr. lyons: thank you to the members of the committee. my name is daniel lyons. i am a senior fellow at the american enterprise institute and a professor at boston college law school, where i write about internet policy. i want to focus on two themes. first, section 230 provides critical infrastructure in the online ecosystem.
5:43 pm
we tinker with it at our peril. secondly, these bills risk doing more harm than good for internet-based companies and for users, while unleashing litigation only tangentially related to the issues that the subcommittee seeks to address. one cannot emphasize enough the importance of section 230 to the modern internet landscape. the statute has accurately been described as the 26 words that created the internet. the hearing is focused primarily upon the larger social media platforms, such as facebook, but it is important to recognize that a wide range of companies rely heavily on section 230 every day to curate and share user content with millions of americans. section 230 provides the legal framework that allows platforms to facilitate users' speech at mass scale, and it promotes competition among the platforms. it relieves startups from the costs of content moderation, which reduces barriers to entry online. because section 230 is wound
5:44 pm
into the fabric of online society, it's difficult to predict in advance how changing the statute will ripple throughout the ecosystem. one thing we know is that the ecosystem is complex and dynamic, which creates a greater risk of unintended consequences. professor eric goldman argues that reducing section 230 protections makes it harder for disruptive new entrants to challenge prominent companies. costs would rise for everybody, but the incumbent can afford that more easily than a startup. it would be ironic if, in seeking to reduce facebook's influence, this committee inadvertently protected it against competition. congress's previous amendment highlights the risk of unintended consequences. in 2017, it eliminated intermediary liability protection for sex trafficking. the purpose was noble, to reduce online sex trafficking, but good intentions do not justify bad consequences. subsequent studies by academics and by the gao show that fosta
5:45 pm
made it harder, not easier, for law enforcement to detect perpetrators, made conditions more dangerous for sex workers, and had a chilling effect on free speech. the bills before the committee present similar risks. this is particularly true of attempts to regulate platform algorithms. we have heard a lot about how algorithms can promote socially undesirable content, but we must recognize that they also promote millions of socially beneficial connections every day. yes, they make it easier for neo-nazis to find each other, but they also make it easier for other minorities to find each other, like lgbtq communities, social activists, or bluegrass musicians. they benefit from the companies' use of algorithms to organize and curate user generated content. it would be a mistake to eliminate those benefits because of the risk of abuse. the genius of the internet has been the reduction of information costs. one click allows a user to access a vast treasure trove of
5:46 pm
information, transported around the planet at the speed of light for nearly zero cost. the downside is filtering costs: users must sort through the treasure trove to find what they want, and what they want differs from user to user. internet companies compete fiercely to help users sort the information, and they do so through algorithms. disincentivizing them would reduce those services, in part because these bills are worded vaguely. the justice against malicious algorithms act defines personalized algorithms as using any information specific to an individual; that is a broad phrase. if any algorithmic recommendation contributes to physical or severe emotional injury, also vague terms, the platform is stripped of its protections. so the incentives for the platforms are clear, and whatever social gains we reap by reducing algorithmic promotion of content would likely be dwarfed by the loss of the ability to
5:47 pm
personalize one's feed. now, this vagueness could also prompt litigation only tangentially related to the purpose. i teach my students to identify ambiguous terms, because those are the terms that are most likely to prompt litigation. here, terms like 'process,' 'materially contributes' and 'severe emotional injury' are catnip to creative trial lawyers, particularly in a dynamic environment where innovation creates new opportunities for litigation. that was the lesson of the tcpa, the anti-robocall statute that found new life in 2010 to target conduct that congress neither intended nor contemplated. those suits ultimately fail, but they still impose legal costs, and those costs disproportionately affect startups. thank you. chairman doyle: we have concluded our second panel's opening statements. we will move to members' second
5:48 pm
round of questions. each member has five minutes. i will start by recognizing myself for five minutes. ms. goldberg, thank you for being here and taking up the fight for individuals who have suffered tragic and unimaginable harm. it is important work. i have often heard that by amending section 230, congress would unleash an avalanche of lawsuits upon companies, which would break the internet and leave only the largest platforms standing. can you tell me your thoughts on the matter and go into greater detail on the hurdles that users would still have to overcome to bring a successful suit against a platform? ms. goldberg: there's so much concern about the idea that if we remove section 230, it will flood the courts and litigants will stampede in. to that i say, what about the
5:49 pm
frivolous section 230 defenses we see? there is a case against facebook for discrimination where they claim that mark zuckerberg is immune from liability for lies said to congress, orally and in person. let me tell you why removing section 230 is not going to create new grounds for suing. one, it is unlawful already to file frivolous litigation. it is sanctionable and it is a violation of the rules of professional responsibility. two, the onus is on the plaintiff to prove liability. people say that removing section 230 creates liability. no, the pleading standards are very high and hard, and removing an exemption does not create liability; that is still the hard work of the plaintiff. three, basic economics deter low
5:50 pm
injury cases from going forward. litigation is arduous, expensive and requires stamina for years. it takes thousands of hours of attorney time, and these are personal injury cases taken on contingency. the costs of experts and depositions add up, and few lawyers will take cases where the costs of litigation are out of proportion with the damages. that leads to only the most serious cases being litigated. four, nothing will be procedurally different without section 230. motions to dismiss on other grounds are filed by defendants at the same time -- the statute of limitations, um, poor pleadings. and five, anti-slapp. it's a faster and harsher way for defendants to get
5:51 pm
constitutionally protected speech-based claims dismissed. plaintiffs bringing frivolous cases are deterred by anti-slapp, which shifts the fees, so that if a defendant brings an anti-slapp motion, then a plaintiff that loses has to actually pay the defendant's legal fees. so it is very expensive, even punitive, to bring a speech-based claim. six, uninformed plaintiffs sue anyway. section 230 doesn't deter people from filing lawsuits -- there is no barrier to getting an index number and filing a lawsuit. chairman doyle: thank you. it's good to have you back, by the way, matt.
5:52 pm
your organization is committed to ensuring all communities have a voice online and can connect and communicate across technologies. we have been told by large tech platforms and others that in changing section 230, we must create exemptions for smaller online platforms, but you do not think that is true. we have a fairly small exemption of that type in the justice against malicious algorithms act. can you explain your view on small business exemptions, generally? mr. wood: you are right, we do not think it is the way to go. ms. goldberg's answer is amazing and it shows the balances we have to strike. a small business exemption could prevent an increase in litigation and even strategic lawsuits against public participation, so there is danger there. we also think that big platforms can generate beneficial engagement, and small ones can cause harm, so that is why we
5:53 pm
would be careful about only attaching liability to the largest platforms and making sure that smaller ones cannot be held accountable. chairman doyle: ok, thank you. my time is up. i want to recognize the ranking member for five minutes. >> thank you very much to the panel. mr. volokh, twitter has a new ceo and his earliest statements indicate he is not a fan of the first amendment. twitter has expanded the scope of its private information policy to prohibit the sharing of private media, such as images or videos, without the subject's consent. this is an abuse of power by these companies to show that they are being arbiters of truth. however, twitter goes on to say that it will take into consideration whether the images are publicly available and/or are being covered by journalists, or being shared in the public interest, or are relevant to the community. i understand that twitter is
5:54 pm
protected under section 230 for this type of action, but how would the action be interpreted under the first amendment if it was the government taking this action? you need to unmute. mr. volokh: i'm sorry. under the first amendment, the government could not do that. if it was a newspaper doing it, it could do that, and newspapers do that kind of thing. the question is whether congress should consider these platforms more like a newspaper, or more like the post office, which is
5:55 pm
government run, or ups or fedex. we do not expect them to decide, oh, there are bad things being done, so we are going to shut off a phone service. we do not expect ups to say we will not deliver books from this publisher because we think that they are bad. they are only the carrier. the question is, when twitter is letting people communicate with others on the feed, should the law view twitter more like a phone company, more like a post office or ups, or more like a magazine, which is supposed to be making editorial judgments? that is the question. >> mr. wood, you talked about how a ruling opened the door to providing platforms protections for material that a platform knows is unlawful, because the
5:56 pm
subsequent distribution of that material was viewed by the court as republication of that material. there seems to be an area of general agreement between scholars on both ends of the political spectrum on this. as part of the big tech accountability platform, we proposed that we create a bad samaritan carveout that would remove protection for platforms that facilitate illegal activity. how would this proposal help hold tech companies responsible for illegal activity on their platforms? mr. wood: as you noted, we talked about people on opposite sides of the political spectrum who have taken that view, and we have explained that distributors could be liable. there is some appeal to thinking of every time a website serves content as a publication, but that is not the only way to think about it, so when algorithms or other
5:57 pm
techniques are used to amplify content, that could be seen as separate from the original liability exemption, and something they could be held liable for. that is what we are talking about, how we could do it, whether it is the majority's proposal, the minority's, or ours -- there should be ways, whether we call them bad samaritans or not, to hold companies accountable when they know that their choices are causing harm. >> dr. franks, in your testimony you seemed to agree with this assessment, but you also suggested adding a concept of deliberate indifference to the bad samaritan framework. would you elaborate? dr. franks: my apologies. yes, the deliberate indifference standard is intended to set a bar for how intermediaries would need to respond to certain types of unlawful content, so that they would not keep the shield simply because they did not create the content, assuming that
5:58 pm
they knew about the content and refused to take steps to prevent it. >> my time is about to expire. i yield back. chairman doyle: let's see. you are now recognized. >> i appreciate the thoughtful way we are approaching reform, which can clearly have wide-ranging effects across the internet ecosystem. mr. wood, what specific reforms can we make to section 230 that will ensure platforms are not padding their bottom lines or knowingly harming vulnerable populations? mr. wood: we have not endorsed any one of the approaches, but we think there are good ideas in all of them, mindful of the wide-ranging impacts you discussed. finding a way to hold these
5:59 pm
platforms accountable when they know they are causing harm, whether by examining the interpretation of the protections in (c)(1) or by understanding that distribution and amplification are different. there are other approaches, but we are all looking at the same problems and talking about how to address them, not whether we should. >> that is the question, how do we address this complicated topic? i appreciate your thoughts. thank you for coming today again. i know you have thought a lot about how it might be appropriate to carve out some types of algorithms from legal immunity under section 230. what do you think about carving out general product design features, as was suggested in one report?
6:00 pm
>> in general, one thing that's been helpful about the facebook revelations, as we heard from the first panel, is that attention has shifted from debating the content, the right to post, and who gets to decide if it gets taken down, to looking upstream at the practices of the platform -- the design choices, before something goes viral, that make it go viral or push it into someone's newsfeed or a child's instagram feed. one of the things that has come out as a result of her work is that facebook employees themselves admit the mechanics of the platform are not neutral and are in fact spreading hate and misinformation. so the bill the chair has introduced and the other bill, those especially, by
6:01 pm
focusing on either nontransparent algorithms or knowing and reckless use of algorithms that then results in extraordinary harm, whether it is international terrorism or serious physical or emotional harm, that narrow carveout, applying where the platform's conduct has caused the harm, seems to get at some of the most egregious issues. >> thank you. dr. franks, how does the status quo of section 230 allow hate and disinformation to
6:02 pm
proliferate without any accountability to the harmed public? >> disinformation is one of the key issues we are worried about in terms of the amplification and distribution of harmful content, fraudulent and otherwise. one provision of section 230 safeguards the intermediaries promoting this content from any kind of liability, so there's no incentive for companies to think hard about whether the content they are promoting will cause harm, so they have no incentive to review it, think about taking it down or think about whether it should be on the platform at all. >> thank you. ms. goldberg, even if we reform section 230, as you have mentioned, there are other barriers to plaintiffs' court cases. how can we ensure plaintiffs have access to the information they need to properly plead their case? >> well, we create the
6:03 pm
exceptions and exemptions from immunity so plaintiffs can get to the point of discovery, where the defendant is compelled and required to turn over information that's relevant to the case, so that a plaintiff has a shot at building a viable lawsuit. >> thank you. mr. chairman, i yield back. >> the gentleman yields back. the chair recognizes mr. guthrie. >> thank you, and i appreciate the witnesses being here. i know it has been a long day and i appreciate you being here. i have a concern, as i said earlier today when i talked to the republican leader, a real concern about opioid addiction and opioid sales. the enabling of the opioid trade and the sale of opioids on social
6:04 pm
media platforms has skyrocketed. in many cases, law enforcement shares information or leads with platforms to take down this content, but those calls sometimes go unheeded. so for you or anyone who has time to answer this, my question is first to you. can you explain which sections of section 230 provide immunity for platforms when they know of specific instances where this content is on their platform and don't take action to remove it? and if so, would you recommend modifying section 230 to address this issue? how would you balance the need for accountability while fostering the platform's ability to remove this content? i can repeat any of that if you need me to. >> sure. so i do not think section 230
6:05 pm
needs to be modified in light of this. if we are talking about federal criminal law enforcement, generally speaking, section 230 does not preempt federal prosecutions. so federal investigators looking at people who are actively involved in this can already prosecute them. section 230 would preclude civil liability lawsuits against these platforms, but i am not sure there would be much by way of civil liability for platforms even if they are alerted that something is going on in a particular group. i am not sure we want platforms to be held liable for that. in fact, the fact that people are engaged in this illegal activity on the platforms is helpful for law enforcement; it would just
6:06 pm
have to be done in a way where they can come on, see it and pursue prosecution. platforms certainly are not barred from cooperating with law enforcement in such things. in fact, the right approach, i think, is not to enlist platforms as opioid cops. they are not well-suited for that. but instead, to have law enforcement use the information they can find on the platforms to prosecute illegal transactions. >> thank you for that. and ms. goldberg, you have an answer? >> thank you for having this on your mind. we represent four families who have each lost a child, one as young as 14, who bought one fentanyl-laced opioid pill during the pandemic -- kids home from college, bored, experimenting.
6:07 pm
you asked what in section 230 precludes us from being able to hold a platform responsible for facilitating these kinds of sales. the fact is that if we look at section 230 as it is written, we could agree that the matching and pairing is not information content. it is not a speech-based thing the user posted. however, the way the courts have interpreted section 230 is more of a problem than how it is drafted, because it is so extravagantly interpreted that it has come to include all product liability cases. you cannot sue over anything that's related to the product design or its defects. you cannot even sue a company for violating its own terms of service. >> about 40 seconds. how would you change it, and what would you --
6:08 pm
>> one provision in the safe tech act is a carveout for wrongful death. i think if we let the most serious harms overcome section 230, or remove the most extreme harms from section 230's protection, that is how. >> i still have 15 seconds. do you have more? thank you. i will yield back. thank you for your answer. >> the gentleman yields back. the chair recognizes ms. clarke for five minutes. >> thank you, mr. chairman, and thank you to our panel witnesses for your testimony and patience as we came back from voting this afternoon. as many today have stated, section 230 has served its intended purpose of allowing a free and open internet the opportunity to blossom and connect us in ways previously
6:09 pm
thought unimaginable. unfortunately, it has also aided in the promotion of a culture in big tech that lacks accountability. speech in the real world and online is paramount, and we can acknowledge the role section 230 plays in creating the conditions for free speech to flourish online. many companies have used this as a shield for discriminatory or harmful practices, particularly with respect to targeted online advertising. that is why i was proud to introduce the civil rights modernization act, to ensure that civil rights laws are not sidestepped. section 230 already provides exceptions to its liability shield for federal criminal prosecutions, intellectual property disputes, and certain
6:10 pm
prosecutions related to sex trafficking. yet targeted advertising can still be used to exclude people from voting, housing, job opportunities, education and other economic activity on the basis of race, sex, age and other protected statuses. now is the time to codify and modernize our civil rights protections to ensure our most vulnerable are not left behind in this increasingly digital age, so my first question is for mr. wood, before giving the other panelists the opportunity to chime in as well. in your testimony, you made clear your belief that a complete repeal or drastic weakening of section 230 would not sufficiently address the harms we have been discussing today. why do you feel a more targeted approach is the better option? >> thank you, representative clarke. that is our belief and it speaks
6:11 pm
to the harms you are talking about here. if we were to repeal section 230, that would beg the question, what will people sue for? if there is no remedy underneath that, there could still be no relief for the plaintiff who has been harmed. we have support and sympathy for the ideas in your bill, getting civil rights back into the equation, making sure platforms cannot evade civil rights law. the only question we have about the approach is whether we ought to say only targeted ads should trigger that change in the shield. perhaps -- i will go beyond perhaps -- there are clearly ways platforms could discriminate that don't involve targeted advertising, so we would want to look at where they are knowingly distributing or promoting material that promotes discrimination, addressing those issues wherever they arise, whatever the message or
6:12 pm
technological background of that harm. >> thank you. dr. franks, you spoke about the principle of collective responsibility. can you expand on how online protection from liability can run counter to that ideal? >> this idea is something we are familiar with in normal times. we know the things that cause harm often have multiple causes. we know there are people who act intentionally to cause harm, but also people who are careless, people who are sometimes reckless. there are people who are simply not properly incentivized to be careful. the concept of collective responsibility, pretty much everywhere except in the online context, tells us all those people, those parties, do have a responsibility to be careful, and when people are negligent or reckless or contribute in some way to harm, they can and should
6:13 pm
be found responsible. what that does, importantly, is encourage people to be more careful, encourage businesses not to seek only to maximize profits but also to consider the ways they might allocate resources to think about safety, innovation, and the harm that might result from their practices. >> thank you. i thank all of you for appearing before us. i yield back the balance of my time. >> the gentlelady yields back. the chair recognizes ms. rodgers for five minutes for questions. >> thank you, mr. chairman. mr. volokh, i wanted to ask about a provision in the legislation i have been working on related to section 230, relating to platforms that take down content that's
6:14 pm
constitutionally protected and requires an appeals process and transparency in their enforcement decisions. how would a repeal of section 230 impact speech online? >> it is complicated. i don't know the answer to that fully. here is the advantage. by modifying section 230 to strip away immunity for, for example, political censorship or religious-based censorship, censorship of scientific claims, that would make it possible for states to pass laws requiring nondiscrimination. you might think that is a good thing. there is a lot to be said for that, because the platforms are tremendously powerful, wealthy entities, and one could
6:15 pm
certainly argue they should not be able to leverage that kind of economic power, political power, that we should not have these very wealthy corporations deciding what people can and cannot say online politically. on the other hand, there would be downsides. there would be a lot more litigation, some of it probably funded by public advocacy groups, where people say, my item was deleted because of its politics. and the platform says, no, it was pornographic. and they say, no, i think the real reason was politics. so there might be extra litigation from this and extra chilling on platforms with regard to moderation of content that should be removed. take death threats: you would still be allowed to remove them, but there would always be the possibility of litigation, where someone could say i was not really threatening, so i think it is pluses and minuses.
6:16 pm
>> a follow-up on that related to the justice against malicious algorithms act, which would amend section 230 to allow liability protections to be narrowed for platforms that host content causing severe emotional injury. do you think that would silence individual american voices? >> i think it would, because platforms would realize that recommending things using an algorithm -- again, any personalized recommendation -- is dangerous. libel and defamation often cause severe emotional injury. they may be worried about that. they cannot tell what is libel and what isn't.
6:17 pm
they can tell what is risky and what isn't, and what is risky is personalized recommendations for users. so either they will not recommend anything, and that is bad for business because recommendations keep people on the system; or, instead of recommending videos it thinks you will like, a platform will recommend videos that others like, which will not be as fun, but safer for the platform; or they will recommend mainstream media content, where there is less risk of emotional injury, and those companies could also indemnify them against liability because they have deep pockets. so not so good for user generated content, which will no longer be recommended even if it is perfectly fine. >> if an internet user felt a
6:18 pm
political opinion they disagreed with caused severe emotional harm, could the user sue the platform under this bill? >> they certainly could. it remains to be seen whether the court would recognize the claim, but the term emotional harm is not defined in a way that would exclude that. so i agree that the safe policy would be not to offer any personalized algorithm at all, because it may inadvertently suggest content that could trigger liability. >> thank you for being here. i yield back. >> the chair now recognizes mr. mceachin for five minutes. >> thank you, mr. chairman, and again, i urge my colleagues to take the view that when we talk
6:19 pm
about immunity, we are saying, don't trust our constituents, because they make up juries, and we are saying they cannot follow instructions, cannot get the answer right in a trial. they are wise enough to deal with issues of death or freedom in terms of criminal liability, but we cannot trust them when it comes to big tech and these immunities? that seems to be incongruent. i trust my constituents and i think they are capable of deciding these issues. that being said, ms. goldberg, you have put together what you call, and what i assume you think is, a good piece of legislation for what we are trying to do. >> i think i misunderstood what you said.
6:20 pm
sorry. >> when i looked at your testimony, you have what you call a -- that seems to be a bill that suggests it might be a model for going forward with 230 relief. >> thank you. >> let me just follow up, ms. goldberg. i want to make sure you understood the purpose of that. what are the substantial differences between your model bill and safe tech? >> there are just a few additional carveouts in the bill that i propose, namely, that -- >> would you say what those are? >> sure. i feel there needs to be a carveout for injunctive relief, and a carveout for court-ordered conduct.
6:21 pm
there needs to be -- i am trying to think -- a blanket exemption for product liability claims, which i don't see in safe tech currently, and i also don't see anything that carves out child sexual abuse and exploitation, which, in my opinion, along with the wrongful death claims that you have, those are the types of claims that are the most serious and need specific carveouts. >> ok. i appreciate that. we will look into those things. i would suggest to you that, if you look at the bill again, you might be looking at an old version of the safe tech act. sorry, i did not catch your name, but the gentleman from free
6:22 pm
press action. could you tell me your name again? >> certainly. matt wood. >> oh, it is mr. wood. i thought i heard another name. you seem to believe that the act would adversely affect free speech. >> i would not say adversely affect free speech. i think it would tend to lower the shield in some cases, and obviously it is aimed at remedying a lot of harm, but we have some concerns about the kinds of civil procedure and litigation proceedings that ms. goldberg was speaking to earlier. >> let me ask you this. look at the carveouts we have. [indiscernible] you are subject to liability. i don't hear those topics being
6:23 pm
suggested that my or your free speech is being limited in any way, so how is it limited when we apply it elsewhere? >> again, i would say i am not talking about limiting free speech. when you have the lowering of the shield upon receipt of any request for injunctive relief -- >> let me just ask you this question. if you and i can be subject to these things, why can't big tech? >> they can be. the question is, is that a better state of the world? the question is -- >> why is it not? >> because these platforms do provide special benefits for people to communicate, and yet i think they should be held liable when they go beyond that. we
6:24 pm
recommend not taking away the shield upon simple receipt of a request for injunctive relief. some requests could be meritorious, some could not be. we do not think there should be an automatic trigger that takes away this shield, which has real benefits but can cause harm when abused. >> the gentleman's time has expired. the chair recognizes mr. walberg for five minutes. >> mr. volokh, i want your thoughts on my discussion draft that would establish a carveout from section 230 for reasonably foreseeable cyberbullying abuses. in the draft, cyberbullying is defined as intentionally engaging in conduct that is reasonably foreseeable to place a person in
6:25 pm
reasonable fear of death or serious bodily injury, or that causes, attempts to cause, or would reasonably be expected to cause an individual to commit suicide. this would mean that an interactive computer service would need to know of a pattern of abuse on its platform. so, mr. volokh, do you think that narrowly opening up liability this way would lead to behavioral changes by tech companies that reduce cyberbullying online? >> i think it will lead to some changes on the platforms, but i'm not sure that they would be big changes. the problem is -- this is what i call the reverse spider-man principle, which is: with great responsibility comes great power.
6:26 pm
-- putting me in fear of violence from third parties, and it may also lead me to feel suicidal or something. do we want that? do we want platforms in that position, where they are deciding who is bullying and who isn't, and whether it is the sort of material that should be taken down? i don't think that should be left to platforms.
6:27 pm
-- investigate this and deal with it in some situations. i don't think the platforms, which don't have subpoena power, don't have investigative power, should be made internet bullying cops. >> thank you. appreciate that. mr. wood, on the case of cyberbullying online: even where cyberbullying is not itself illegal, it can lead to illegal actions in the real world. do you believe, in light of that, that my carveout in section 230 for cyberbullying would provide a pathway for families and children to seek relief? >> we do, though we tend not to favor the carveout method. rather than a piecemeal liability
6:28 pm
exemption or removal of same, we would take a more comprehensive approach, saying anytime a platform knowingly facilitates harm, they should be liable for damages, and not necessarily solely for the initial user post. that is a spectrum, but we think a court should have a chance to look at that and not be precluded from ever examining it. >> thank you. appreciate that. mr. chairman, i took more than my time in the first panel so i give this back. >> that is very generous of you, mr. walberg. i appreciate that. mr. soto, you are recognized for five minutes. >> i thank you and the ranking member for our spirited debate in panel one. i want to focus on common ground i have gathered after hearing from colleagues on both sides of the aisle on section 230.
6:29 pm
there are many things that, in the real world, would have consequences, but doing them virtually, you are exempt, whether it is criminal activity, violating civil rights, even injuring our kids. many of these things, if you did them in real life, as a newspaper, radio station or business, you would be liable, but, magically, you would not be in the virtual world, and that happens because of section 230. three areas of common ground i saw this morning: protecting civil rights, stopping illegal transactions and conduct, and protecting kids. ms. goldberg, you have this proposition to remedy civil rights violations. i want your thoughts on the importance of injunctions when these civil rights violations are ongoing, and your thoughts on damages?
6:30 pm
>> injunctive relief is important. the current standard is that you cannot enforce an injunction against a tech company because of section 230, and you cannot even include them as a defendant because of section 230. take my client, for example. she was the victim of extreme cyberstalking. her ex-boyfriend impersonated her, made bomb threats all around the country to jewish community centers, and he was sentenced to 60 months in federal prison. and a lot of the threats he was making were on twitter. he smuggled a phone into prison, got in trouble for it, got resentenced, and twitter will not take that content down, even though it was the basis of his sentence and really, you know, very much related to why he was
6:31 pm
in trouble in the first place. i can't get an injunction against them, and if i try to get a defamation order, i cannot enforce it because twitter would say its due process is violated. >> thank you. we see time is of the essence. even when it is not, there is nothing you can do without the ability to get injunctions. another common ground issue is protecting our kids. ambassador, i know you discussed this in your testimony a little bit. where is the line on how to protect kids under 18 on these social media sites, in your opinion? >> i think what we see is, again, that the platform design, as ms. goldberg has discussed and as we have seen in some of the facebook papers, connects people who can harm children and promotes content into their feeds that can harm children. so as you look at remedies, figuring out
6:32 pm
how you can hold them accountable without creating some of the negative effects, by narrowly targeting their design, the physical harm, and, if there is a way, cordoning off the emotional harm. we hear of this epidemic of mental health issues, especially among young girls, and they keep going back on and on. that is where their social life is, and yet they are fed these damaging self-images that hurt them. >> thank you, ambassador. in any other situation, the commercial entity would be liable for endangering our children like that. dr. franks, welcome to the sunshine state.
6:33 pm
i want to talk about stopping illegal conduct and transactions beyond just the civil rights arena and get your advice on what we can do to pursue stopping illegal transactions, drug deals and things like that, among other illegal conduct. >> part of the reason why i am somewhat hesitant to endorse approaches that take a piecemeal carveout approach is what you point out, that there are numerous categories of harmful behavior, and these are just the ones we know about today. the ones that will happen in the future are going to be different. they are hard to anticipate. this is why the most effective way of reforming section 230 is focusing on the fundamental problem of the perverse incentive structure, that is, we need to ensure that this industry, like any other, has to think about the possibility of being held accountable for harm, whether that's illegal conduct, harassment, bullying. they need to plan their
6:34 pm
resources and allocate them and think about their product along those lines before it ever reaches the public. they need to be afraid they will be held accountable for the harms they may contribute to. >> your time has expired. >> i yield back. >> the chair recognizes ms. rice for five minutes. >> thank you, mr. chair. it is important for us to remember that the last time both houses of congress agreed to change internet liability laws was in 2018, when the stop enabling sex traffickers act was passed. even though not much time has passed since then, i believe our understanding of how online platforms operate, how they are designed, has evolved with this conversation about section 230 liability protection in recent years.
6:35 pm
ms. goldberg, as an attorney specializing in cases relating to revenge porn and other online abuse, can you talk about how this act has affected these cases? >> basically, it has come to be a bit problematic in my practice area, because it conflates child sex trafficking with consensual sex work, but i did plead that act in the omegle case i told you about, which basically alleges that omegle did facilitate sex trafficking on its platform when it matched my 11-year-old client with a 37-year-old man who then forced her into sexual servitude for three years. they are still going to claim they are immune from liability,
6:36 pm
and it is, right now, the best hope we have when it comes to child sexual predation on these platforms. >> so if you could talk maybe more about the concerns raised by many people about the impact on sex workers. you mentioned the sex work issue. it is my understanding that the act permits some state suits and civil restitution suits dealing with sex trafficking and prostitution, and separately and importantly creates criminal liability for websites that support prostitution. could you talk about how the act operates? >> from my understanding, there's been one case doj has brought, and platforms basically
6:37 pm
use -- it has not done that much, and it has created concern for sex workers who feel their lives are endangered by having to go out onto the street. >> thank you for your time. i yield back the balance of my time. >> the chair recognizes the representative for five minutes. >> thank you, mr. chairman, and thank you to the witnesses on the second panel. this may be one of the longest
6:38 pm
hearings the chairman has overseen, and i appreciate your patience. to ambassador kornbluh -- and i ask this because you are a veteran of the house intelligence committee -- in your testimony, you discuss the national security risk associated with inaction on clarifying section 230, and you especially mention how terrorists use online platforms. it is chilling. could you tell us more, briefly, about how terrorists use social media platforms? >> am i on now? thank you for your leadership on
6:39 pm
these issues. one example: one person, it has been argued, was allowed to post content on facebook produced by hamas supporting violence against israel. when a circuit court ruled section 230 protected facebook, the chief judge dissented, urging congress to better tune section 230, adding that this
6:40 pm
could leave dangerous activity unchecked, and that whether or not to allow liability for tech companies that encourage terrorism, propaganda and extremism is a question for legislators, not judges. there is a similar set of concerns in gonzalez v. google, where family members of an individual killed in a nightclub massacre in istanbul sued as well. >> chilling. it seems to me, as a nonlawyer, both in terms of the testimony today, but also reading the, i think, really very well drawn memo on the part of the committee staff, that the courts are saying to congress you need to do something about this.
6:41 pm
and as i said earlier, during the first panel -- i was a conferee on the 1996 telecom act. we certainly did not write section 230 to allow any social media platforms to be able to undertake activities like those you have described, so thank you to you for your good work. mr. wood, i really appreciate your thoughtful and nuanced testimony. can you just further elaborate on your recommendation that congress should clarify the plain text of 230? you claim
6:42 pm
that the court's interpretation in zeran v. aol was overbroad. that was a long time ago. it created a precedent for how the courts interpret section 230 today, i think in an overbroad way, but can you clarify that? >> you got that right. 1997. some claimants have gotten over that hurdle in product liability cases. in one case, snapchat was held liable for a filter it was providing and allowing users to layer over their own user generated content. so clarifying that distinction -- where there is mere publication, they are not liable, but where there is some kind of further
6:43 pm
amplification or distribution, whether algorithmic or not, there could be some recourse for victims where that conduct is either creating harm or exacerbating harm. >> thank you. on mr. volokh's written testimony, we received that about an hour before the hearing began. i don't know if the committee had it earlier or if it is just late, but in order to examine it, we really need it the night before, so as we are preparing for the hearing, we can read the testimony, which is what i do the night before, so i don't know why or how -- >> i don't have an answer for you. we will figure that out. your time has expired.
6:44 pm
the chair recognizes the representative for five minutes. >> thank you very much, mr. chairman, and for allowing us to have this hearing. earlier this year, alongside senators lujan and klobuchar, i raised the alarm over the increasing rate of spanish and other non-english misinformation and disinformation across platforms and their lack of transparency with regard to efforts to limit the spread of harmful content in all languages -- content that sometimes does result in loss of life. even if they are still investing in combating spanish and other non-english misinformation, moderation efforts on social media sites, including facebook, have failed to tackle the widespread viral disinformation targeting
6:45 pm
hispanics and others, including content promoting human smuggling, vaccine hoaxes, and other misinformation. it results in the loss of life and other horrendous actions that might be visited upon victims. what can be done to ensure the consistent and equitable enforcement of content moderation policies across all languages in which a platform operates, not just english? >> thank you for the question and for joining us in calling attention to this issue. it is something we have done a lot of work on, highlighting this grave disparity. i don't know. obviously 230 is central to this hearing and to everything the platforms do. i don't know that there's a 230 response. when these platforms have terms of service that prohibit content -- however clear or good those are, people can debate -- they
6:46 pm
should enforce them equitably and not solely in english, not leaving up in spanish and other languages content they thought was harmful enough to be taken down in english. but i don't see a direct 230 angle here. obviously it is central to everything, so they could maybe be held liable for failing to honor their terms of service and engaging in deceptive practices, per the ftc. and the answer there is yes, companies have contemplated raising 230 defenses even against the ftc in suits regarding unfair or deceptive applications of their terms of service, so there might be something there as this moves forward. >> i think if we actually reapplied section 230 to
6:47 pm
these massive, massive information organizations that are actually profiting from the proliferation of truths or lies -- it appears from the testimony we have heard today from ms. haugen and others that lies seem to make them more money, negative discourse makes them more money, having people interact with each other on a negative basis gets them more money. so given the idea that they can hide behind that liability shield, it is our responsibility to reset section 230 to more clearly address that. that being the case, do you think that may offer a deterrent, so they stop ignoring their ability to do more to protect people from harmful content?
6:48 pm
are causing harm, that is different from merely publishing or hosting content in the first instance. we are supportive of section 230 at free press action and think it is an important law to retain. however, when platforms are described as having the time, energy and money to find out what people like, connect them with each other and analyze personal data when it makes them money, but as not having the resources to do that when it is causing harm, that is hard to believe. that is where companies like to wave the wand and say, well, we don't have a specific burden. they seem to find the time to do it when it affects their bottom line. >> we had testimony earlier from a whistleblower who clearly stated that facebook alone, that one platform, will be talking about a profit this year of tens of billions of dollars, and she clearly pointed out with facts
6:49 pm
and information she had divulged through her whistleblower actions, that those profits do soar when they ignore life and what is best for the human interests of their viewers. >> your time has expired. >> i yield back. >> i thank the gentleman. the chair recognizes ms. kelly for five minutes. >> thank you, mr. chair, and thank you all for testifying today and for your patience. you said the dominant model of social media for making money is advertising. i was concerned about the tiktok videos at the start of the school year encouraging students to destroy school property. can you explain how a model that prioritizes advertising revenue
6:50 pm
leads social media and other platforms to promote more harmful information? >> thank you. the advertising model essentially means we are not asking people to pay for a product. that is to say, people think they are getting something for free. the only way for it to be profitable is for the companies to be able to sell you more and more ads that are more and more targeted. what that sets up in terms of the incentive structure for these companies is to maximize engagement. that means they want more people to choose to live on these platforms, to become addicted to these products, and they want to learn as much about them as they can. that is what is being allowed to unfold socially without any hindrance. if that is your entire model, you are simply trying to keep people on your platform.
6:51 pm
unfortunately, because of human nature, the things that keep people addicted are things that are dangerous, provocative, polemical, extreme. >> how does the use of personalized algorithms or other design choices by some social media companies and other platforms amplify this problem? >> a couple different directions. we can think about particular kinds of vulnerabilities. if a company is well aware that the person using its platform is vulnerable to body image issues or suicidal thoughts, these are things the platform can feed them more and more of, because of the way the algorithm is picking up on those tendencies, so that is one way in which personalized algorithms can lead to harm. the other thing is when the user is looking to cause harm, is
6:52 pm
looking for search terms, resources and ideas about how they can distribute harm, and in that sense, that is something they are putting into the system and getting back: an incredible array of entryways and rabbit holes to more and more extreme versions of content and ways to harm other people. >> thank you. ambassador, do you have anything you would like to add to this? >> one thing that is often said is that the platforms have no incentive to cause harm, that it would be a pr hit, that their incentive runs in the other direction, but i worry that the incentives run toward these harms. there's a sort of regulatory arbitrage, where these platforms, unlike other companies, are not subject to certain regulations that this congress and previous congresses
6:53 pm
passed. cable television could show violence, but not without limits. the platforms can get more eyeballs and advertising dollars, but they are flouting so many societally beneficial norms. similarly with the companies that operate on these platforms. i talked to a vaccine expert who said, i feel as though the conspiracy theorists are using the engine of social media and i'm fighting it. >> thank you. i yield back. >> the gentlelady yields back. i want to thank our witnesses for their participation today, for your patience and excellent answers to our members' questions, and it is going to be very helpful as we try to work
6:54 pm
together in a bipartisan way to get a bill that we can pass in the house and get passed in the senate and have the president sign. i know we have a lot of work ahead of us, but we are committed to working with our colleagues in the republican party to put our heads together, come up with a good bill, vet it thoroughly and put it before the members. you have all been very helpful in that process, so we appreciate it. i request unanimous consent to enter the following testimony and letters into the record: a letter from the national hispanic media coalition in support of hr 5596, the justice against malicious algorithms act; a statement from preamble in support of the protecting americans from dangerous algorithms act and hr 5596; a letter from the coalition for a
6:55 pm
safer web in support of that, in addition to other pending committee legislation; a letter from the anti-defamation league in support of reforming section 230 to hold platforms accountable; a letter from the alliance to counter crime online in support of reforming section 230 of the communications decency act; a letter from the victims of illicit drugs applauding members of the committee for working on reforming section 230; a letter from a civil rights association acknowledging the threat created to civil rights on tech company platforms; a press release from the coalition for a safer web; an article from an mit journal
6:56 pm
review; an article called company documents show facebook and instagram toxic to teen girls; an article from the wall street journal titled a secret elite that's exempt; a clipping from the new york times titled what is one of the most dangerous toys for kids? the internet; an article from the washington post titled a race-blind hate speech policy came at the expense of black users; a letter from the chamber of progress in support of the safe sex workers study act; a paper by guy rosen of meta, titled, our work on keeping people informed and limiting disinformation
6:57 pm
about covid; a letter from the american action forum; remarks by then-president trump, vice president pence and members of the coronavirus task force; and finally, a letter from the computer and communications industry association. without objection, so ordered. i remind members that, pursuant to committee rules, they have 10 business days to submit additional questions for the record to be answered by the witnesses who appeared. i would advise the witnesses to respond promptly to any such questions. the committee is adjourned. [captioning performed by the national captioning institute, which is responsible for its caption content and accuracy. visit ncicap.org] [captions copyright national cable satellite corp. 2021]
6:58 pm
>> tonight on q&a, a washington post finance columnist on her book. >> it is not a matter of if there will be another economic crisis, but when. we want to set you up for the next crisis. it is not all about covid, but whatever recession will come down the road. it may be long, it may be short, but life is going to happen and i need you to prepare now.
6:59 pm
i do a lot of financial seminars in my community, and it is so hard to get people to save and prepare when they are doing well. because they are doing well, they don't think tomorrow is going to have an issue. i say you need to save and they are like, i will get to it. when a crisis hits, everybody is in frugal mode and ready to do it, but that is too late. the time to do that is when you have the resources, when you have the ability to cut. it is easy to cut when you can't pay for anything or things are shut down. i wanted to say, let's prepare, let's be like the fireman or woman who is ready for the next fire. they hope it won't happen, but they are prepared. >> michelle singletary tonight
7:00 pm
on q&a.
