
How to use Facebook’s Free New Video Chat Option

By Tracey Dowdy

Never one to let the competition get too far ahead, Facebook has launched Messenger Rooms, a video chat alternative to Zoom, Skype, Jitsi Meet, and Google Meet. Up to 50 people can chat in a room at once, with no time limit, and participants don't even need a Facebook account to join.

Messenger Rooms offers more features than Facebook Messenger's existing video chat option, allowing up to 50 people on screen with no time limit, either through the main Facebook app or through the dedicated Messenger app.

Zoom became especially popular in the early days of self-quarantining, but security problems that led to "Zoom-bombing" soon emerged. Facebook is no stranger to security and privacy problems. Still, in a livestream earlier this month, CEO Mark Zuckerberg said that the company has been "very careful" and has tried to "learn the lessons" from the issues users have experienced with other video conferencing tools over the past several months.

Facebook also owns WhatsApp, with over 700 million accounts participating in voice and video calls every day on both platforms. In a press release in April, Facebook noted that the number of calls has more than doubled in many areas since the coronavirus outbreak began.

Facebook seems to be taking the potential security risks seriously. Messenger Rooms promises these features:

  • Locking: Rooms can be locked or unlocked once a call begins. If a room is locked, no one else can join, except a Group administrator for rooms created through a Group. 
  • Removing a participant: The room creator can remove any unwanted participants. If the room creator removes someone from the call or leaves, the room will lock automatically, and the room creator must unlock the call for others to join. 
  • Leaving: If at any point, users feel unsafe in a room, they can exit. Locking down a room prevents others from entering, not participants from leaving.
  • Reporting: Users can report a room name or submit feedback about a room if they feel it violated Facebook's Community Standards. However, because Facebook doesn't record Messenger Rooms calls, reports and feedback will not include audio or video from the room.
  • Blocking: You can block someone on Facebook or Messenger who may be bothering you, and they will not be informed. When someone you’ve blocked is logged into Facebook or Messenger, they won’t be able to join a room you’re in, and you won’t be able to join theirs.

To create a room on your phone, make sure you have the latest versions of the Facebook and Messenger mobile apps from the App Store or the Google Play Store. 

  • Open the Messenger app.
  • Tap the People tab at the bottom right of your screen. 
  • Tap Create a Room and select the people you want to join. 
  • To share a room with people who don’t have a Facebook account, you can share the link with them. You can also share the room in your News Feed, Groups, and Events. 
  • You can join a room from your phone or computer — no need to download anything, according to Facebook.

To create a room on your laptop or desktop, go to your Home Page and to the box at the top where you would usually post. Click on “Create Room” and follow the prompts to name your chat, invite guests, and choose your start time.

Currently available to everyone in the US, Canada, and Mexico, Messenger Rooms is rolling out worldwide over the next week.

Tracey Dowdy is a freelance writer based just outside Washington DC. After years working for non-profits and charities, she now freelances, edits, and researches on subjects ranging from family and education to history and trends in technology. Follow Tracey on Twitter.

Facebook Cracking Down on Fake COVID-19 News

By Tracey Dowdy

It’s nothing new for Facebook to be under scrutiny for fake news and hate speech. It’s been an issue for years and was never more evident than in the wake of the 2016 presidential election. The company has made concerted efforts to rein in misinformation, but it’s an ongoing battle. 

Facebook has been open about the challenges both human reviewers and AI have in identifying and removing offensive content. While things have improved, the number of users posting makes it challenging to curate information accurately.

One area where their efforts are glaringly deficient is the amount of COVID-19-related misinformation in languages other than English. Avaaz, a crowd-funded research group, analyzed more than 100 pieces of Facebook coronavirus misinformation on the website’s English, Spanish, Portuguese, Arabic, Italian, and French versions. 

They found that:

  • It can take Facebook up to 22 days to issue warning labels for coronavirus misinformation, with delays even when Facebook partners have flagged the harmful content for the platform.
  • 29% of malicious content in the sample was not labeled at all on the English language version of the website.
  • It is worse in some other languages, with 68% of Italian-language content, 70% of Spanish-language content, and 50% of Portuguese-language content not labeled as false.
  • Facebook’s Arabic-language efforts are more successful, with only 22% of the sampled misleading posts remaining unlabeled. 
  • Over 40 percent of the coronavirus-related misinformation in the sample, content already debunked by fact-checking organizations working alongside Facebook, was not removed even after those organizations told Facebook it was false. 

Avaaz’s research led Facebook to begin alerting users when they’d been exposed to false information. Now, according to a Facebook blog post and a report from BuzzFeed News, both Facebook and YouTube are cracking down yet again, using AI to weed out the volume of misleading content. 

Facebook has been forced to rely more heavily on AI as the COVID-19 pandemic has reduced its number of full-time employees. They still rely on contractors, many of whom, like the rest of us, are working from home. The content review team prioritizes posts that have the greatest potential for harm, including coronavirus misinformation, child safety, suicide, and anything related to self-harm.

CEO Mark Zuckerberg said, “Our effectiveness has certainly been impacted by having less human review during COVID-19. We do unfortunately expect to make more mistakes until we’re able to ramp everything back up.”  

Currently, if a fact-checker flags a post as false, Facebook will drop it lower on a user’s News Feed and include a warning notice about the veracity of the content. The challenge in removing misinformation is that it’s much like dandelions on your lawn: you can pull them from one spot, but countless more are already popping up somewhere else.  

Facebook uses a tool called SimSearchNet to identify reposts and copies of flagged content by matching them against its database of images containing misinformation. The problem stems from users hitting the “Share” button before checking whether the source is a reputable organization.

Facebook Chief Technology Officer Mike Schroepfer admits AI will never be able to replace human curators. “These problems are fundamentally human problems about life and communication. So we want humans in control and making the final decisions, especially when the problems are nuanced.” 

So before you hit “Share” or are tempted to gargle with vinegar or Lysol, head to UCF Libraries Fake News and Fact Checking page, Snopes, the CDC website, and do a little homework.

As Abraham Lincoln warned Americans during the Civil War, “You can’t believe everything you read on the internet.”
