Tag Archives: Facebook

How to use Facebook’s Free New Video Chat Option

By Tracey Dowdy

Never one to let the competition get too far ahead, Facebook has come up with a new video chat alternative to its competitors, Zoom, Skype, Jitsi Meet, and Google Meet. With Messenger Rooms, up to 50 people can chat in a room at once, with no time limit. Participants don’t even need an account to use the room.

Messenger Rooms offers more features than Facebook Messenger’s existing video chat, allowing up to 50 people on screen with no time limit, through either the main Facebook app or the dedicated Messenger app.

Zoom became especially popular in the early days of self-quarantining, but security problems that led to “Zoom-bombing” soon tarnished its reputation. Facebook is no stranger to security and privacy problems. Still, in a livestream earlier this month, CEO Mark Zuckerberg said the company has been “very careful” and has tried to “learn the lessons” from the issues users have experienced with other video conferencing tools over the past several months. 

Facebook also owns WhatsApp; across the two platforms, over 700 million accounts participate in voice and video calls every day. In a press release in April, Facebook noted that the number of calls has more than doubled in many areas since the coronavirus outbreak began.

Facebook seems to be taking the potential security risks seriously. Messenger Rooms promises these features:

  • Locking: Rooms can be locked or unlocked once a call begins. If a room is locked, no one else can join, except a Group administrator for rooms created through a Group. 
  • Removing a participant: The room creator can remove any unwanted participants. If the room creator removes someone from the call or leaves, the room will lock automatically, and the room creator must unlock the call for others to join. 
  • Leaving: If users feel unsafe in a room at any point, they can exit. Locking a room prevents others from entering, not participants from leaving.
  • Reporting: Users can report a room name or submit feedback about a room if they feel it violated Facebook’s Community Standards. However, since Facebook doesn’t record Messenger Rooms calls, reports and feedback will not include audio or video from the room.
  • Blocking: You can block someone on Facebook or Messenger who may be bothering you, and they will not be informed. When someone you’ve blocked is logged into Facebook or Messenger, they won’t be able to join a room you’re in, and you won’t be able to join theirs.

Make sure you have the latest version of the Facebook and Messenger mobile apps downloaded from the App Store or the Google Play Store to create a room on your phone. 

  • Open the Messenger app.
  • Tap the People tab at the bottom right of your screen. 
  • Tap Create a Room and select the people you want to invite. 
  • To share a room with people who don’t have a Facebook account, you can share the link with them. You can also share the room in your News Feed, Groups, and Events. 
  • You can join a room from your phone or computer — no need to download anything, according to Facebook.

To create a room on your laptop or desktop, go to your Home Page and to the box at the top where you would usually post. Click on “Create Room” and follow the prompts to name your chat, invite guests, and choose your start time.

Currently available to everyone in the US, Canada, and Mexico, Messenger Rooms is rolling out worldwide over the next week.

Tracey Dowdy is a freelance writer based just outside Washington DC. After years working for non-profits and charities, she now freelances, edits, and researches on subjects ranging from family and education to history and trends in technology. Follow Tracey on Twitter.

Facebook Cracking Down on Fake COVID-19 News

By Tracey Dowdy

It’s nothing new for Facebook to be under scrutiny for fake news and hate speech. It’s been an issue for years and was never more evident than in the wake of the 2016 presidential election. Facebook has made concerted efforts to rein in misinformation, but it’s an ongoing battle. 

Facebook has been open about the challenges both human reviewers and AI have in identifying and removing offensive content. While things have improved, the number of users posting makes it challenging to curate information accurately.

One area where its efforts are glaringly deficient is the amount of COVID-19-related misinformation in languages other than English. Avaaz, a crowd-funded research group, analyzed more than 100 pieces of Facebook coronavirus misinformation on the website’s English, Spanish, Portuguese, Arabic, Italian, and French versions. 

They found that:

  • It can take Facebook up to 22 days to issue warning labels for coronavirus misinformation, with delays even when Facebook partners have flagged the harmful content for the platform.
  • 29% of malicious content in the sample was not labeled at all on the English language version of the website.
  • It is worse in some other languages, with 68% of Italian-language content, 70% of Spanish-language content, and 50% of Portuguese-language content not labeled as false.
  • Facebook’s Arabic-language efforts are more successful, with only 22% of the sample of misleading posts remaining unlabeled. 
  • Over 40 percent of the coronavirus-related misinformation in the sample, content already debunked by fact-checking organizations working alongside Facebook, was not removed even after those organizations told Facebook it was false. 

Avaaz’s research led Facebook to begin alerting users if they’d been exposed to false information. Now, according to a Facebook blog post and a report from BuzzFeed News, both Facebook and YouTube are cracking down yet again, using AI to weed out the volume of misleading content. 

Facebook has been forced to rely more heavily on AI as the COVID-19 pandemic has reduced its number of full-time employees. They still rely on contractors, many of whom, like the rest of us, are working from home. The content review team prioritizes posts that have the greatest potential for harm, including coronavirus misinformation, child safety, suicide, and anything related to self-harm.

CEO Mark Zuckerberg said, “Our effectiveness has certainly been impacted by having less human review during COVID-19. We do unfortunately expect to make more mistakes until we’re able to ramp everything back up.”  

Currently, if a fact-checker flags a post as false, Facebook will drop it lower in a user’s News Feed and include a warning notice about the veracity of the content. The challenge in removing misinformation is that it’s much like dandelions on your lawn – you can remove them from one spot, but countless more are already popping up somewhere else.  

Facebook uses a tool called SimSearchNet to identify reposts and copies by matching them against its database of images known to contain misinformation. Part of the problem stems from users being quick to hit the “Share” button before checking whether the source is a reputable organization.
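For readers curious how that kind of automated matching can work, here is a toy sketch of the general idea behind perceptual image matching. This is an illustration only, not Facebook’s actual SimSearchNet algorithm: an image is reduced to a short binary “fingerprint,” so a near-identical repost produces a fingerprint that differs from the original’s in only a few bits, while an unrelated image does not.

```python
# Toy illustration of perceptual image matching (NOT Facebook's actual
# SimSearchNet). An image is reduced to a binary fingerprint; reposts
# with minor compression noise still match the original closely.

def average_hash(pixels):
    """Hash a grid of grayscale pixels (rows of 0-255 values).

    Returns a list of bits: 1 where a pixel is brighter than the
    image's average brightness, 0 where it is darker.
    """
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return [1 if p > avg else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count the bit positions where two fingerprints differ."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# A known misinformation image, as a tiny 4x4 grayscale grid...
original = [
    [10, 200, 10, 200],
    [200, 10, 200, 10],
    [10, 200, 10, 200],
    [200, 10, 200, 10],
]
# ...a re-upload of it with slight compression noise...
repost = [
    [12, 198, 11, 201],
    [199, 9, 202, 12],
    [11, 203, 10, 199],
    [201, 12, 198, 9],
]
# ...and an unrelated image.
unrelated = [
    [200, 200, 200, 200],
    [200, 200, 10, 200],
    [200, 10, 200, 200],
    [200, 200, 200, 200],
]

db_hash = average_hash(original)
print(hamming_distance(db_hash, average_hash(repost)))     # small: likely a copy
print(hamming_distance(db_hash, average_hash(unrelated)))  # large: different image
```

A platform can store fingerprints of debunked images and flag any new upload whose fingerprint falls within a small distance of one of them, which is why lightly edited reposts can still be caught.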

Facebook Chief Technology Officer Mike Schroepfer admits AI will never be able to replace human curators. “These problems are fundamentally human problems about life and communication. So we want humans in control and making the final decisions, especially when the problems are nuanced.” 

So before you hit “Share” or are tempted to gargle with vinegar or Lysol, head to UCF Libraries Fake News and Fact Checking page, Snopes, the CDC website, and do a little homework.

As Abraham Lincoln warned Americans during the Civil War, “You can’t believe everything you read on the internet.”

Tracey Dowdy is a freelance writer based just outside Washington DC. After years working for non-profits and charities, she now freelances, edits, and researches on subjects ranging from family and education to history and trends in technology. Follow Tracey on Twitter.

Beware Facebook Quizzes

By Tracey Dowdy

Which Disney mom are you? Which Hogwarts house do you belong in? Only a true genius will score 100 percent on this quiz. 

How many times a day do you see a quiz like this pop up in your Facebook feed? You may have even been tempted to test your knowledge or play along because the topic piques your interest. That’s no coincidence. Facebook’s complex algorithms and data-gathering technology have been gathering information on users since its inception, and one of the most effective ways is through quizzes. 

According to CBC Information Morning tech columnist Nur Zincir-Heywood, though these quizzes may seem innocuous and fun, taking them leaves you vulnerable to identity theft or fraud. “Never do these,” said Zincir-Heywood, a cybersecurity expert who teaches in the computer science department at Dalhousie University in Halifax, Nova Scotia. 

But it’s not just Facebook gathering information. Security experts, media literacy groups, the Better Business Bureau, and law enforcement agencies across the country warn that hackers and scammers are behind many of these social media quizzes, collecting, using, and profiting from the personal information you share.

Zincir-Heywood cautions that social media quizzes often ask the same questions your financial institutions use for security purposes to verify your identity when you change your password or access your account without one, such as your mother’s maiden name or the name of your first pet.

Though the different questions may not all be on the same quiz, multiple quizzes can collect enough information to enable a cybercriminal to access your banking or credit card information.

“Maybe they are watching [your] social media in general, they know your location, they know other things about you,” Zincir-Heywood said. “All of these then put together is a way to collect your information and, in your name, maybe open another account or use your account to buy their own things. It can go really bad.”

She offers the following tips to protect yourself from the more nefarious side of social media quizzes: 

  • Be careful. Just like in real life, nothing is ever really free. Quizzes offered on social media aren’t free either; they come with a hefty cost – your personal information is mined for companies to use in targeted advertising, or for cybercriminals to sell on the dark web.
  • If you can’t resist the temptation, use fake information, especially for sections that ask for similar information to security questions used by your financial institutions. For example, if you are asked, ‘What’s the name of your childhood best friend,’ use a fake name.
  • Remember, once you take these quizzes, you can’t take back the information you’ve provided. Keep a close eye on your online transactions for unusual or unauthorized banking or credit card activity.

Tracey Dowdy is a freelance writer based just outside Washington DC. After years working for non-profits and charities, she now freelances, edits and researches on subjects ranging from family and education to history and trends in technology. Follow Tracey on Twitter.

 

Your Smart TV is Watching You

A recent study of smart TV privacy and security by Consumer Reports asked, “How much does your Smart TV know about you?” They looked at several major TV brands: LG, Samsung, Sony, Vizio, and TCL, which uses the Roku smart TV platform.

Smart TVs connect to the internet, allowing users to stream videos from services such as Hulu, Amazon Prime, and Netflix. Consumer Reports found that all smart TVs can collect and share considerable amounts of personal information about their viewers. Not only that, so can the countless third-party apps that work within the platforms. 

The Oregon office of the FBI released a warning back in December cautioning consumers that some smart TVs are vulnerable to hacking and a number of them have built-in video cameras. The good news is that newer models have eliminated the cameras – Consumer Reports’ labs haven’t seen one in any of the hundreds of new TVs tested in the past two years.

However, privacy concerns remain. Researchers at Northeastern University and Imperial College London discovered that many smart TVs and other internet-connected devices send data to Amazon, Facebook, and DoubleClick, Google’s advertising business. Nearly all of them sent data to Netflix – even if the app wasn’t installed or the owner hadn’t activated it. 

A third study, this one conducted by researchers at Princeton and the University of Chicago, looked at Roku and Amazon Fire TV, two of the more popular set-top streaming devices. Testing found the TVs tracking what their owners were watching and relaying it back to the TV maker and/or its business partners, using a technology called ACR, or “automated content recognition.” There were trackers on 69% of Roku’s channels and 89% of Fire TV’s channels – and the numbers are likely similar for smart TVs that run Roku’s and Amazon’s native platforms. 


On the surface, we love the technology behind ACR because it’s what makes our systems intuitive enough to recommend other shows we might enjoy. The downside is that the same information can be used for targeted advertising or bundled with other aspects of our personal information and sold to other marketers. 

Justin Brookman, director of privacy and technology at Consumers Union, the advocacy arm of Consumer Reports, says “For years, consumers have had their behavior tracked when they’re online or using their smartphones. But I don’t think a lot of people expect their television to be watching what they do.”

If you have privacy concerns about your smart TV, check the manual for how to revert the TV to factory settings and set it up again. Be sure to decline to have your viewing data collected.

For a more detailed analysis and instructions on protecting your privacy, check out the Consumer Reports story How to Turn Off Smart TV Snooping Features.

Tracey Dowdy is a freelance writer based just outside Washington DC. After years working for non-profits and charities, she now freelances, edits and researches on subjects ranging from family and education to history and trends in technology. Follow Tracey on Twitter.

Parents, It’s Time to Talk About Our Social Media

By Tracey Dowdy

As a Gen Xer, my daughters’ childhoods are captured in framed photos, memories, and photo boxes in the closet off my home office. I didn’t start using Facebook until they were both tweens, and perhaps that’s why I understood the importance of not posting photos or posts about them without permission. Tweens are at an age when even having parents is mortifying, and though I sometimes overstepped, I have their consent for what’s in those old Facebook albums and posts.

Fast forward to today, where the oldest members of the millennial cohort are – gasp – turning 40. Lifestyle blogging was in its heyday during the late nineties and early 2000s, and for a while, it seemed like everyone had a blog, especially moms. It wasn’t uncommon to hear graphic stories of diaper blowouts, potty training mishaps, mispronounced words, and other content that exposed the most intimate details of their child’s milestones and behavior.

The issue is that many of those children are now old enough to Google themselves, and those blogs and Facebook posts are impacting them in ways parents didn’t anticipate and arguably couldn’t have. The children who were the subjects of those posts are in some cases mortified by the content, while the majority simply resent having had no say over their online presence. There’s even a portmanteau for the phenomenon – sharenting.

Perhaps there’s no better example of the conflict between the two perspectives than that of Christie Tate and her daughter. Back in January, Tate, who has been blogging about her family for over a decade, wrote an essay for the Washington Post titled, “My daughter asked me to stop writing about motherhood. Here’s why I can’t.” Though she’s been writing about her children since they were in diapers, it’s only recently that her nine-year-old daughter became aware of what her mom has been writing and asked her to stop. Tate refused.

They’ve agreed to a compromise where Tate will use a pseudonym rather than her daughter’s real name, and Tate has “agreed to describe to her what I’m writing about, in advance of publication, and to keep the facts that involve her to a minimum.” Her daughter also has the right to veto any pictures of herself she doesn’t want to be posted.

Tate faced considerable backlash, with many calling her selfish and coldhearted. Many on social media sites like Reddit have roasted her, though she did receive some support.

Fourteen-year-old Sonia Bokhari wrote an honest, insightful piece for Fast Company about what it was like to finally be allowed her own social media accounts – long past the age many of her friends had become active – only to discover that her mother and older sister had been documenting her life for years. “I had just turned 13, and I thought I was just beginning my public online life, when in fact there were hundreds of pictures and stories of me that would live on the internet forever, whether I wanted it to be or not, and I didn’t have control over it. I was furious; I felt betrayed and lied to.”

Bokhari’s mother and sister meant no harm; they posted photos and things she had said that they thought were cute and funny. She explained her feelings to her mother and sister, and they’ve agreed that going forward, they’ll not post anything about her without her consent.

It wasn’t just the embarrassment of the letter she wrote to the tooth fairy when she was five or the awkward family photos; her digital footprint concerned Bokhari as well. “Every October my school gave a series of presentations about our digital footprints and online safety. The presenters from an organization called OK2SAY, which educates and helps teenagers about being safe online, emphasized that we shouldn’t ever post anything negative about anyone or post unapproved inappropriate pictures, because it could very deeply affect our school lives and our future job opportunities.” Bokhari concluded that “While I hadn’t posted anything negative on my accounts, these conversations, along with what I had discovered posted about me online, motivated me to think more seriously about how my behavior online now could affect my future.”

Her response to what she learned? Bokhari eventually chose to get off social media altogether.

“I think in general my generation has to be more mature and more responsible than our parents, or even teens and young adults in high school and college… being anonymous is no longer an option. For many of us, the decisions about our online presence are made before we can even speak. I’m glad that I discovered early on what posting online really means. And even though I was mortified at what I found that my mom and sister had posted about me online, it opened up a conversation with them, one that I think all parents need to have with their kids. And probably most importantly, it made me more aware of how I want to use social media now and in the future.”

For many of us, trying to clean up our digital footprint or that of our children feels a lot like trying to get toothpaste back into the tube or trying to make toast be bread again. Still, it’s important to try. You’re not only curating your own reputation; you’re shaping your child’s before they’ve ever had a chance to weigh in.

Consider your audience and your motivation, then evaluate whether what you’re sharing is worth the potential ramifications. The internet is the wild wild west – maybe you need to start acting as the sheriff of your own town.

Tracey Dowdy is a freelance writer based just outside Washington DC. After years working for non-profits and charities, she now freelances, edits and researches on subjects ranging from family and education to history and trends in technology. Follow Tracey on Twitter.

 

Change Your Default Privacy Settings

By Tracey Dowdy 

In a recent article, Washington Post technology columnist Geoffrey A. Fowler asked, “It’s the middle of the night. Do you know who your iPhone is talking to?”

In the story, Fowler outlines a problem most iPhone users aren’t even aware of: the volume of data mining that occurs while you – and your phone – are asleep. “On a recent Monday night, a dozen marketing companies, research firms and other personal data guzzlers got reports from my iPhone. At 11:43 p.m., a company called Amplitude learned my phone number, email and exact location. At 3:58 a.m., another called Appboy got a digital fingerprint of my phone. At 6:25 a.m., a tracker called Demdex received a way to identify my phone and sent back a list of other trackers to pair up with.

And all night long, there was some startling behavior by a household name: Yelp. It was receiving a message that included my IP address – once every five minutes,” Fowler says.

Data mining is nothing new, but it’s a growing problem. Though Apple stated in a recent ad, “What happens on your iPhone stays on your iPhone,” Fowler’s investigation proves that’s far from the truth. Part of the problem is our own fault. Charles Arthur points out that 95% of us never change the default settings on our devices, and how many of us take the time to read updated privacy policies? It’s the Rule of Defaults: we’re just too lazy to try to Scooby-Doo the mystery.

Fowler published an excellent article last June that maps out how to start setting boundaries on all the information we willingly hemorrhage into the ether via everything from our smartphones, laptops, tablets, and smartwatches to smart home devices like Alexa and the Nest doorbell.

If you’re wondering whether it’s worth the trouble to dive into the deep end and change those default settings, consider what those defaults already allow.

Fowler calls his suggestions “small acts of resistance,” but if The Handmaid’s Tale has taught us anything, those small acts of resistance are critically important. Blessed be.

Tracey Dowdy is a freelance writer based just outside Washington DC. After years working for non-profits and charities, she now freelances, edits and researches on subjects ranging from family and education to history and trends in technology. Follow Tracey on Twitter.

Reporting Cyber-Abuse on Social Media

By Tracey Dowdy

For as long as there has been life on the planet, there have been those who find pleasure in tormenting others or demonstrate their perceived authority by denigrating those they see as weak or vulnerable. With the advent of social media, those abusive behaviors moved from the real world to the digital world. It’s become nearly impossible for victims to escape. Through social media, the bullying follows you into the privacy of your home, making it seem like there are no safe places.

According to DoSomething.org, nearly 43% of kids have been bullied online, and 1 in 4 have experienced it more than once, yet only 1 in 10 victims will tell a parent or trusted adult about their abuse. A study by the Universities of Oxford, Swansea, and Birmingham found that youth who have been cyberbullied are twice as likely to self-harm or attempt suicide as their non-bullied peers. Unfortunately, when those bullies grow up, they often continue their behavior. Pew Research Center found that 73% of adults say they’ve witnessed online harassment, with 40% reporting being the target themselves. And it’s not just individuals being bullied: hate groups often use platforms like Facebook and Twitter to disseminate their message, and as a result, online hate speech often incites real-world violence.

The message, “If you see something, say something,” is more than a catchy slogan. If you see abusive or hate-fueled messages and images online, it’s your responsibility to report them. Here’s how to report offensive content.

Twitter clearly maps out how to report abusive behavior. You can include multiple Tweets in your report, which provides context and may help get the content removed more quickly. If you receive a direct threat, Twitter recommends contacting local law enforcement, who can assess the validity of the threat and take appropriate action. For tweet reports, you can get a copy of your report of a violent threat to share with law enforcement by clicking “Email report” on the “We have received your report” screen.

Facebook also has clear instructions on how to report abusive posts, photos, comments, or messages, and how to report someone who has threatened you. Reporting doesn’t mean the content will automatically be removed; it has to violate Facebook’s Community Standards, and offensive doesn’t necessarily equate to abusive.

You can report inappropriate Instagram posts, comments, or people that aren’t following Community Guidelines or Terms of Use.

Users can report abuse, spam or any other content that doesn’t follow TikTok’s Community Guidelines from within the app.

According to Snapchat support, they review every report, often within 24 hours.

If you or someone close to you is the victim of harassment or bullying, you have options. If the abuse is online, submit your report as soon as you see the content. If it’s in the real world, take it to the school administration, Human Resources, or the police, particularly if there is a direct threat to your safety.

Finally, if you’re having suicidal thoughts due to bullying or for any other reason, contact the National Suicide Prevention Lifeline online or call 1-800-273-8255 for help.

Tracey Dowdy is a freelance writer based just outside Washington DC. After years working for non-profits and charities, she now freelances, edits and researches on subjects ranging from family and education to history and trends in technology. Follow Tracey on Twitter.

 

Facebook to Monitor Anti-Vax Content

By Tracey Dowdy

According to reports published by the US Centers for Disease Control and Prevention, the percentage of children in the US who received no vaccine doses, as well as the number of parents who have requested exemptions for their children, continues to rise. While coverage for certain vaccines “remained high and stable overall,” the number of unvaccinated kids under the age of two rose from 0.9% of those born in 2011 to 1.3% of those born in 2015. The report doesn’t address the reasons for the increase but suggests it may be due to caregivers not knowing where to access free vaccines and the shortage of pediatricians and other health care providers in many rural areas.

Another, more subtle and pervasive reason may be the volume of misinformation surrounding vaccines and their debunked ties to autism. Two platforms at the center of the problem, Facebook and YouTube, have recently announced they will crack down on anti-vax misinformation on their platforms. On Facebook, anti-vaccination sites promoting fake science and conspiracy theories appear at the top of results when parents search for information about vaccinations. Also featured prominently is Andrew Wakefield, the discredited doctor behind the bogus science linking the MMR vaccine to autism.

Unlike Google, which filters out anti-vax sites in favor of information from the World Health Organization, Facebook searches appear to be ranked by which sites are most popular and active, regardless of whether the information presented is fact or fiction. The changes will also impact Instagram, which Facebook owns.

“The consequences of publishing misleading information is a genuine risk to the public’s health – you only have to look at the widespread panic and confusion that was caused by unfounded claims [by Dr. Wakefield] linking the MMR vaccine to autism in the 1990s,” says Professor Helen Stokes-Lampard, chair of the Royal College of GPs in the UK. Stokes-Lampard says she finds it “deeply concerning” that Facebook allowed posts that promoted “false and frankly dangerous ideas” about not only the MMR vaccine but other vaccination programs as well.

Ethan Lindenberger, who testified before Congress on March 5, 2019, stated that he had not been fully vaccinated because, at the time he was due to be inoculated, his mother believed that vaccines are dangerous and could result in autism. Lindenberger, who has since been vaccinated against his mother’s wishes, stated at the hearing, “For my mother, her love and affection and care as a parent was used to push an agenda to create a false distress. And these sources, which spread misinformation, should be the primary concern of the American people…My mother would turn to social media groups and not to factual sources like the [Centers for Disease Control and Prevention]. It is with love and respect that I disagree with my mom.”

Lindenberger, along with other speakers including Washington state Secretary of Health John Wiesman; Dr. Jonathan McCullers of the University of Tennessee; John Boyle, president of the Immune Deficiency Foundation; and Emory University epidemiologist Dr. Saad Omer, challenged the federal government to fund vaccine safety research and launch campaigns to counter anti-vaccine messages, similar to past anti-tobacco campaigns.

YouTube (owned by Google) is also taking action. In a letter responding to a challenge by US Rep. Adam Schiff (D-Calif.), Karan Bhatia, Vice President of Global Public Policy and Government Affairs, said the company has been blocking anti-vax videos from appearing in its recommendation engine and search results. “I agree with you that anything discouraging parents from vaccinating their children against vaccine-preventable diseases is concerning,” he wrote.

Both Facebook and YouTube intend to discourage people from accepting conspiracies about vaccinations at face value and, going forward, will pair anti-vaccine material with educational information from authoritative medical sources.

Monika Bickert, Facebook’s head of product policy and counterterrorism, said, “We are exploring ways to give people more accurate information from expert organizations about vaccines at the top of results for related searches, on Pages discussing the topic, and on invitations to join groups about the topic. We will have an update on this soon.” 

Tracey Dowdy is a freelance writer based just outside Washington DC. After years working for non-profits and charities, she now freelances, edits and researches on subjects ranging from family and education to history and trends in technology. Follow Tracey on Twitter.

 

Facebook, Google and Twitter Doing Better at Removing Hate Speech 

By Tracey Dowdy

The European Commission, the European Union‘s executive arm, recently released data from research done as part of its “code of conduct” for social media platforms. The EC launched the initiative back in 2016, aimed at removing hate speech, including racist and xenophobic content, from online platforms. Facebook, Google, Twitter, and Microsoft were among the tech companies that signed on, committing to seeking out and eliminating offensive content.

“Today, after two and a half years, we can say that we found the right approach and established a standard throughout Europe on how to tackle this serious issue, while fully protecting freedom of speech,” said Vera Jourova, a European commissioner for justice, consumers and gender equality, in a press release.

The European Commission defines “hate speech” as “the public incitement to violence or hatred directed to groups or individuals on the basis of certain characteristics, including race, color, religion, descent and national or ethnic origin.”

According to the report, Facebook removed 82% of objectionable content in 2018 – up from a mere 28% back in 2016. That’s good news for the social media giant, which has been under scrutiny and attack for the volume of fake news disseminated on the platform, particularly during the last presidential election. Just last week Facebook announced it had removed nearly 800 fake pages and accounts with ties to Iran.

Instagram, YouTube, and Google+ also showed significant improvement, though Twitter removed a mere 43% of illegal hate speech posted to the platform. That’s down from 45% for the same time frame in December 2017. Twitter’s director of public policy for Europe, Karen White, told CNBC that they’re reviewing 88% of all notifications received within 24 hours. “We’ve also enhanced our safety policies, tightened our reporting systems, increased transparency with users, and introduced over 70 changes to improve conversational health,” she said. “We’re doing this with a sense of urgency and commitment, and look forward to continued collaboration with the European Commission, Governments, civil society and industry.”

“Let me be very clear: the good results of this monitoring exercise don’t mean the companies are off the hook,” Vera Jourova, European Commissioner for Justice, Consumers and Gender Equality, warned in a press conference. “We will continue to monitor this very closely, and we can always consider additional measures if efforts slow down. It is time to balance the power and the responsibility of the platforms and social media giants.” 

Tracey Dowdy is a freelance writer based just outside Washington DC. After years working for non-profits and charities, she now freelances, edits and researches on subjects ranging from family and education to history and trends in technology. Follow Tracey on Twitter.