Tag Archives: Facebook

Parents’ Guide to Facebook’s Messenger Kids

By Tracey Dowdy

Facebook introduced its free video calling and messaging app Messenger Kids with the tagline, “Made for Kids. Controlled by Parents.” Targeted at kids under 13, Messenger Kids is designed to be a bridge between child-friendly devices like the LeapPad and full access to social media platforms like Facebook, Snapchat, or TikTok.

Kids still can’t sign up for a Facebook account of their own. Instead, a Messenger Kids account is created through a parent or guardian’s Facebook account. Once the account has been authenticated by a parent, kids – with a parent’s help and/or supervision – can set up a mini-profile with their name (it can be a nickname) and a photo (it can be a photo of anything). Kids can use the app either on their own device or on yours, but remember: if you hand them your phone, they’ll have access to all the photos and videos on your device. Parents can choose whether to add the child’s gender and birth date. Once the profile is complete, parents approve any friend requests through the Messenger Kids bookmark in the main Facebook app. Messenger Kids is interoperable with Facebook’s Messenger app, so parents don’t have to download the Kids app themselves.

To further protect their privacy, Messenger Kids users can’t be found through Facebook search, so if a child wants to chat with a friend, their parent must first friend that child’s parent, then choose to approve the friend request.

When users open Messenger Kids, they’ll see a color-customizable home screen with tiles representing their existing chat threads and all approved contacts. The interface is user-friendly, making it easy for kids to jump into a video chat or text thread with their contacts. They can also block and unblock their parent-approved contacts. Good news, parents – there are no in-app purchases to worry about.

The app offers loads of kid-friendly creative tools, like fidget spinners, dinosaur AR masks, carefully curated GIFs (native to the app – no external third-party sites), and crayon-style stickers. “Video calls become so much more playful with AR,” says David Marcus, Facebook’s head of Messenger.

Facebook won’t monetize Messenger Kids, but it will automatically migrate kids to regular accounts when they turn 13. Nor will Facebook collect data from the app, keeping it in compliance with the Children’s Online Privacy Protection Act (COPPA). The app also includes a reporting interface written specifically for kids, so they can flag anything suspicious to a dedicated support team working 24/7.

Marcus says, “When you think about things at scale that we do to get people to care more about Messenger, this is one that addresses a real need for parents. But the side effect will be that they use Messenger more and create family groups.”

Tracey Dowdy is a freelance writer based just outside Washington DC. After years working for non-profits and charities, she now freelances, edits, and researches on subjects ranging from family and education to history and trends in technology. Follow Tracey on Twitter.

Create Your Facebook Avatar 

By Tracey Dowdy

Earlier this month, Facebook released its Bitmoji-like avatars. The new feature lets you create a cartoon-style character with features similar to your own and customize it with a variety of faces, hairstyles, and clothes.

You’ll then be able to use your avatar when you comment on a Facebook post, in your Stories, as your profile picture, and in Facebook Messenger. As a bonus, you can use it as a sticker on Snapchat, Twitter, Mail, and Instagram.

“So much of our interactions these days are taking place online, which is why it’s more important than ever to be able to express yourself personally on Facebook,” said Fidji Simo, head of the Facebook App. “We’re excited to bring this new form of self-expression to more people around the world…With so many emotions and expressions to choose from, avatars let you react and engage more authentically with family and friends across the app.”

To create your avatar, follow these steps: 

  • Open the Facebook app on your phone and tap the menu icon (three stacked lines). On iPhone it’s in the lower right corner; on Android, the upper right.
  • Scroll down to “See More.”
  • Select Avatars > Next  > Get Started.
  • Choose your skin tone, then tap Next. 
  • Choose a Short, Medium, or Long hairstyle for your avatar, then tap the Color icon to choose a hair color.
  • Next, tap the Face icon to select your face shape, complexion, and lines or wrinkles. 
  • When you’re done, tap the Eye icon. Select your eye shape, color, and lash length. Tap the Eyebrows icon and select your brow shape and color, and add glasses. 
  • Select your nose shape and then choose the shape and color of your lips and any facial hair. 
  • Finally, select your body shape, an outfit that’s similar to your style, and then add your accessories. 
  • Once you’re happy with your choices, tap the checkmark in the upper right corner. Tap Next > Done.

Any time you want to access your avatar, tap the smiley face icon in the “Write a comment” section.

Have fun! 

Tracey Dowdy is a freelance writer based just outside Washington DC. After years working for non-profits and charities, she now freelances, edits, and researches on subjects ranging from family and education to history and trends in technology. Follow Tracey on Twitter.

How to use Facebook’s Free New Video Chat Option

By Tracey Dowdy

Never one to let the competition get too far ahead, Facebook has come up with Messenger Rooms, a video chat alternative to Zoom, Skype, Jitsi Meet, and Google Meet. Up to 50 people can chat in a room at once, with no time limit, and participants don’t even need a Facebook account to join.

Messenger Rooms offers far more capacity than Facebook Messenger’s existing video chat, and rooms can be created through either the main Facebook app or the dedicated Messenger app.

Zoom became especially popular in the early days of self-quarantining, but security lapses that led to “Zoom-bombing” quickly became a problem. Facebook is no stranger to security and privacy problems. Still, in a livestream earlier this month, CEO Mark Zuckerberg said the company has been “very careful” and has tried to “learn the lessons” from the issues users have experienced with other video conferencing tools over the past several months.

Facebook also owns WhatsApp, and across the two platforms more than 700 million accounts participate in voice and video calls every day. In a press release in April, Facebook noted that the number of calls has more than doubled in many areas since the coronavirus outbreak began.

Facebook seems to be taking the potential security risks seriously. Messenger Rooms promises these features:

  • Locking: Rooms can be locked or unlocked once a call begins. If a room is locked, no one else can join, except a Group administrator for rooms created through a Group. 
  • Removing a participant: The room creator can remove any unwanted participants. If the room creator removes someone from the call or leaves, the room will lock automatically, and the room creator must unlock the call for others to join. 
  • Leaving: If at any point, users feel unsafe in a room, they can exit. Locking down a room prevents others from entering, not participants from leaving.
  • Reporting: Users can report a room name or submit feedback about a room if they feel it violated Facebook’s Community Standards. However, because Facebook doesn’t record Messenger Rooms calls, reports and feedback will not include audio or video from the room.
  • Blocking: You can block someone on Facebook or Messenger who may be bothering you, and they will not be informed. When someone you’ve blocked is logged into Facebook or Messenger, they won’t be able to join a room you’re in, and you won’t be able to join theirs.

To create a room on your phone, make sure you have the latest versions of the Facebook and Messenger mobile apps from the App Store or the Google Play Store, then:

  • Open the Messenger app.
  • Tap the People tab at the bottom right of your screen. 
  • Tap Create a Room and select the people you want to join. 
  • To share a room with people who don’t have a Facebook account, you can share the link with them. You can also share the room in your News Feed, Groups, and Events. 
  • You can join a room from your phone or computer — no need to download anything, according to Facebook.

To create a room on your laptop or desktop, go to your Home Page and find the box at the top where you would usually post. Click “Create Room” and follow the prompts to name your chat, invite guests, and choose your start time.

Currently available to everyone in the US, Canada, and Mexico, Messenger Rooms is rolling out worldwide over the next week.

Tracey Dowdy is a freelance writer based just outside Washington DC. After years working for non-profits and charities, she now freelances, edits, and researches on subjects ranging from family and education to history and trends in technology. Follow Tracey on Twitter.

Facebook Cracking Down on Fake COVID-19 News

By Tracey Dowdy

It’s nothing new for Facebook to be under scrutiny for fake news and hate speech. It’s been an issue for years and was never more evident than in the wake of the 2016 presidential election. Facebook has made concerted efforts to rein in misinformation, but it’s an ongoing battle.

Facebook has been open about the challenges both human reviewers and AI face in identifying and removing offensive content. While things have improved, the sheer volume of posts makes it difficult to curate information accurately.

One area where those efforts are glaringly deficient is COVID-19-related misinformation in languages other than English. Avaaz, a crowd-funded research group, analyzed more than 100 pieces of Facebook coronavirus misinformation on the site’s English, Spanish, Portuguese, Arabic, Italian, and French versions.

They found that:

  • It can take Facebook up to 22 days to issue warning labels for coronavirus misinformation, with delays even when Facebook partners have flagged the harmful content for the platform.
  • 29% of malicious content in the sample was not labeled at all on the English-language version of the site.
  • The problem is worse in some other languages: 68% of Italian-language content, 70% of Spanish-language content, and 50% of Portuguese-language content was not labeled as false.
  • Facebook’s Arabic-language efforts are more successful, with only 22% of the sample of misleading posts remaining unlabeled.
  • Over 40% of the coronavirus misinformation in the sample, content already debunked by fact-checking organizations working alongside Facebook, remained on the platform even after those organizations told Facebook it was false.

Avaaz’s research led Facebook to begin alerting users when they’ve been exposed to false information. And according to a Facebook blog post and a report from BuzzFeed News, both Facebook and YouTube are cracking down yet again, using AI to weed out the flood of misleading content.

Facebook has been forced to rely more heavily on AI as the COVID-19 pandemic has reduced the number of human reviewers available. It still relies on contractors, many of whom, like the rest of us, are working from home. The content review team prioritizes posts that have the greatest potential for harm, including coronavirus misinformation, child safety, suicide, and anything related to self-harm.

CEO Mark Zuckerberg said, “Our effectiveness has certainly been impacted by having less human review during COVID-19. We do unfortunately expect to make more mistakes until we’re able to ramp everything back up.”  

Currently, if a fact-checker flags a post as false, Facebook drops it lower in users’ News Feeds and adds a warning notice about the veracity of the content. The challenge in removing misinformation is that it’s a lot like dealing with dandelions on your lawn – you can remove them from one spot, but countless more are already popping up somewhere else.

Facebook uses a tool called SimSearchNet to identify reposts and copies by matching them against a database of images known to contain misinformation. Much of the problem stems from users hitting the “Share” button before checking whether the source is a reputable organization.
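For readers curious how this kind of image matching works in principle, below is a minimal sketch in Python. It is not Facebook’s SimSearchNet, which is a proprietary, neural-network-based system; it simply uses a basic “average hash” fingerprint, built with the Pillow imaging library, to show how a newly shared image can be compared against a set of images fact-checkers have already debunked. The file paths, labels, and similarity threshold are all hypothetical.

```python
# Toy illustration of near-duplicate image matching - NOT Facebook's SimSearchNet.
# Assumes the Pillow library (pip install Pillow) and hypothetical image files.
from PIL import Image


def average_hash(path, size=8):
    """Shrink the image, grayscale it, and record which pixels are brighter
    than the mean: a crude 64-bit fingerprint of the image's overall layout."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]


def hamming_distance(h1, h2):
    """Count the positions where two fingerprints differ."""
    return sum(a != b for a, b in zip(h1, h2))


# Hypothetical database of fingerprints for images fact-checkers have debunked.
known_misinfo = {
    "fake-cure-meme": average_hash("debunked/fake_cure.jpg"),
}


def check_upload(path, threshold=10):
    """Flag an upload that is a near-duplicate of a known misinformation image,
    even if it has been cropped, re-compressed, or lightly edited."""
    upload = average_hash(path)
    for label, known in known_misinfo.items():
        if hamming_distance(upload, known) <= threshold:
            return f"Matched known misinformation: {label}"
    return "No match"


print(check_upload("uploads/new_post.jpg"))
```

The point of a fingerprint like this is that a cropped or re-compressed copy of a debunked meme still lands within a few bits of the original, which is how reposts can be caught even when they aren’t exact duplicates.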

Facebook Chief Technology Officer Mike Schroepfer admits that AI will never be able to replace human curators. “These problems are fundamentally human problems about life and communication. So we want humans in control and making the final decisions, especially when the problems are nuanced.”

So before you hit “Share” – or are tempted to gargle with vinegar or Lysol – head to the UCF Libraries’ Fake News and Fact Checking page, Snopes, or the CDC website, and do a little homework.

As Abraham Lincoln warned Americans during the Civil War, “You can’t believe everything you read on the internet.”

Tracey Dowdy is a freelance writer based just outside Washington DC. After years working for non-profits and charities, she now freelances, edits, and researches on subjects ranging from family and education to history and trends in technology. Follow Tracey on Twitter.

Beware Facebook Quizzes

By Tracey Dowdy

Which Disney mom are you? Which Hogwarts house do you belong in? Only a true genius will score 100 percent on this quiz. 

How many times a day do you see a quiz like this pop up in your Facebook feed? You may have even been tempted to test your knowledge or play along because the topic piques your interest. That’s no coincidence. Facebook’s complex algorithms and data-gathering technology have been collecting information on users since the platform’s inception, and one of the most effective ways to do it is through quizzes.

According to CBC Information Morning tech columnist Nur Zincir-Heywood, though these quizzes may seem innocuous and fun, taking them leaves you vulnerable to identity theft or fraud. “Never do these,” said Zincir-Heywood, a cybersecurity expert who teaches in the computer science department at Dalhousie University in Halifax, Nova Scotia. 

But it’s not just Facebook gathering information. Security experts, media literacy groups, the Better Business Bureau, and law enforcement agencies across the country warn that hackers and scammers are behind many of these social media quizzes, collecting, using, and profiting from the personal information you share.

Zincir-Heywood cautions that social media quizzes often ask the same questions your financial institutions use to verify your identity when you need to change your password or access your account without one, such as your mother’s maiden name or the name of your first pet.

Though the different questions may not all be on the same quiz, multiple quizzes can collect enough information to enable a cybercriminal to access your banking or credit card information.

“Maybe they are watching [your] social media in general, they know your location, they know other things about you,” Zincir-Heywood said. “All of these then put together is a way to collect your information and, in your name, maybe open another account or use your account to buy their own things. It can go really bad.”

She offers the following tips to protect yourself from the more nefarious side of social media quizzes: 

  • Be careful. Just like in real life, nothing is ever really free. Those quizzes offered on social media aren’t free either; they come with a hefty cost – your personal information is mined for companies to use in targeted advertising, or for cybercriminals to sell on the dark web.
  • If you can’t resist the temptation, use fake information, especially for sections that ask for similar information to security questions used by your financial institutions. For example, if you are asked, ‘What’s the name of your childhood best friend,’ use a fake name.
  • Remember, once you take these quizzes, you can’t take back the information you’ve provided. Keep a close eye on your online transactions for unusual or unauthorized banking or credit card activity.

Tracey Dowdy is a freelance writer based just outside Washington DC. After years working for non-profits and charities, she now freelances, edits and researches on subjects ranging from family and education to history and trends in technology. Follow Tracey on Twitter.

 

Your Smart TV is Watching You

A recent study of smart TV privacy and security by Consumer Reports asked, “How much does your smart TV know about you?” They looked at several major TV brands: LG, Samsung, Sony, TCL (which uses the Roku TV smart TV platform), and Vizio.

Smart TVs connect to the internet, allowing users to stream videos from services such as Hulu, Amazon Prime, and Netflix. Consumer Reports found that all smart TVs can collect and share considerable amounts of personal information about their viewers. Not only that, so can the countless third-party apps that work within the platforms. 

The Oregon office of the FBI released a warning back in December cautioning consumers that some smart TVs are vulnerable to hacking and a number of them have built-in video cameras. The good news is that newer models have eliminated the cameras – Consumer Reports’ labs haven’t seen one in any of the hundreds of new TVs tested in the past two years.

However, privacy concerns are still an issue. Researchers at Northeastern University and Imperial College London discovered that many smart TVs and other internet-connected devices sent data to Amazon, Facebook, and DoubleClick, Google’s advertising business. Nearly all of them sent data to Netflix, even when the app wasn’t installed or the owner hadn’t activated it.

A third study, this one conducted by researchers at Princeton and the University of Chicago, looked at Roku and Amazon Fire TV, two of the more popular set-top streaming devices. Testing found the devices tracking what their owners were watching and relaying it back to the TV maker and/or its business partners, using a technology called ACR, or “automated content recognition.” There were trackers on 69% of Roku’s channels and 89% of Fire TV’s channels, and the numbers are likely similar for smart TVs that run Roku’s and Amazon’s platforms natively.


On the surface, we love the technology behind ACR because it’s what makes our systems feel intuitive and lets them recommend other shows we might enjoy watching. The downside is that the same information can be used for targeted advertising, or bundled with other pieces of our personal information and sold to other marketers.
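To make the mechanics a little more concrete, here is a toy sketch in Python of how ACR-style recognition and reporting might work. It is only an illustration under simplified assumptions: real ACR systems use robust audio and video fingerprints rather than exact hashes, and the matching database lives on the vendor’s servers. Every name and value below is hypothetical.

```python
# Toy model of ACR-style tracking - not any vendor's actual implementation.
import hashlib

# Hypothetical server-side table mapping content fingerprints to known shows/ads.
FINGERPRINT_DB = {
    hashlib.sha256(b"frames-from-known-show").hexdigest(): "Example Show S01E01",
    hashlib.sha256(b"frames-from-known-ad").hexdigest(): "Example Car Commercial",
}


def fingerprint(screen_sample: bytes) -> str:
    """Reduce a short sample of whatever is on screen to a compact fingerprint."""
    return hashlib.sha256(screen_sample).hexdigest()


def identify_and_report(screen_sample: bytes, device_id: str) -> dict:
    """Match the sample against the database and build the viewing record that
    would be relayed back to the TV maker and its advertising partners."""
    match = FINGERPRINT_DB.get(fingerprint(screen_sample), "unknown content")
    return {"device_id": device_id, "watched": match}


# A TV sampling the screen every few seconds would generate records like this:
print(identify_and_report(b"frames-from-known-ad", device_id="tv-1234"))
```

Multiply a record like that by one sample every few seconds, across millions of TVs, and you get the detailed viewing profiles that make ACR data so valuable to advertisers.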

Justin Brookman, director of privacy and technology at Consumers Union, the advocacy arm of Consumer Reports, says “For years, consumers have had their behavior tracked when they’re online or using their smartphones. But I don’t think a lot of people expect their television to be watching what they do.”

If you have privacy concerns about your smart TV, check the manual for how to revert the TV to factory settings and set it up again. Be sure to decline having your viewing data collected.

For a more detailed analysis and instructions on protecting your privacy, check out Consumer Reports’ story “How to Turn Off Smart TV Snooping Features.”

Tracey Dowdy is a freelance writer based just outside Washington DC. After years working for non-profits and charities, she now freelances, edits and researches on subjects ranging from family and education to history and trends in technology. Follow Tracey on Twitter.

Parents, It’s Time to Talk About Our Social Media

By Tracey Dowdy

I’m a Gen Xer, and my daughters’ childhoods are captured in framed photos, memories, and photo boxes in the closet off my home office. I didn’t start using Facebook until they were both tweens, and perhaps that’s why I understood the importance of not posting photos or updates about them without permission. Tweens are at an age when even having parents is mortifying, and though I sometimes overstepped, I have their consent for what’s in those old Facebook albums and posts.

Fast forward to today, where the oldest members of the millennial cohort are – gasp – turning 40. Lifestyle blogging was in its heyday during the late nineties and early 2000s, and for a while, it seemed like everyone had a blog, especially moms. It wasn’t uncommon to hear graphic stories of diaper blowouts, potty training mishaps, mispronounced words, and other content that exposed the most intimate details of their child’s milestones and behavior.

The issue is that many of those children are now old enough to Google themselves, and those blogs and Facebook posts are impacting them in ways parents didn’t, and arguably couldn’t, have anticipated. The children who were the subjects of those posts are in some cases mortified by the content, while the majority simply resents having had no say over their online presence. There’s even a portmanteau for the phenomenon – sharenting.

Perhaps there’s no better example of the conflict between the two perspectives than that of Christie Tate and her daughter. Back in January, Tate, who has been blogging about her family for over a decade, wrote an essay for the Washington Post titled “My daughter asked me to stop writing about motherhood. Here’s why I can’t.” Though she’s been writing about her children since they were in diapers, it’s only recently that her nine-year-old daughter became aware of what her mom has been writing and asked her to stop. Tate refused.

They’ve agreed to a compromise where Tate will use a pseudonym rather than her daughter’s real name, and Tate has “agreed to describe to her what I’m writing about, in advance of publication, and to keep the facts that involve her to a minimum.” Her daughter also has the right to veto any pictures of herself she doesn’t want to be posted.

Tate faced considerable backlash, with many calling her selfish and coldhearted. Many on social media sites like Reddit have roasted her, though she did receive some support.

Fourteen-year-old Sonia Bokhari wrote an honest, insightful piece for Fast Company about what it was like to finally be allowed her own social media accounts – long past the age at which many of her friends had become active – only to discover that her mother and older sister had been documenting her life for years. “I had just turned 13, and I thought I was just beginning my public online life, when in fact there were hundreds of pictures and stories of me that would live on the internet forever, whether I wanted it to be or not, and I didn’t have control over it. I was furious; I felt betrayed and lied to.”

Bokhari’s mother and sister meant no harm; they posted photos and things she had said that they thought were cute and funny. She explained her feelings to her mother and sister, and they’ve agreed that going forward, they’ll not post anything about her without her consent.

It wasn’t just the embarrassment of having the letter she wrote to the tooth fairy when she was five, or awkward family photos, exposed online; her digital footprint concerned Bokhari as well. “Every October my school gave a series of presentations about our digital footprints and online safety. The presenters from an organization called OK2SAY, which educates and helps teenagers about being safe online, emphasized that we shouldn’t ever post anything negative about anyone or post unapproved inappropriate pictures, because it could very deeply affect our school lives and our future job opportunities.” Bokhari concluded, “While I hadn’t posted anything negative on my accounts, these conversations, along with what I had discovered posted about me online, motivated me to think more seriously about how my behavior online now could affect my future.”

Her response to what she learned? Bokhari eventually chose to get off social media altogether.

“I think in general my generation has to be more mature and more responsible than our parents, or even teens and young adults in high school and college… being anonymous is no longer an option. For many of us, the decisions about our online presence are made before we can even speak. I’m glad that I discovered early on what posting online really means. And even though I was mortified at what I found that my mom and sister had posted about me online, it opened up a conversation with them, one that I think all parents need to have with their kids. And probably most importantly, it made me more aware of how I want to use social media now and in the future.”

For many of us, trying to clean up our digital footprint or our children’s feels a lot like trying to get toothpaste back into the tube or turn toast back into bread. Still, it’s important to try. You’re not only curating your own reputation; you’re shaping your child’s before they’ve ever had a chance to weigh in.

Consider your audience and your motivation, then evaluate whether what you’re sharing is worth the potential ramifications. The internet is the wild, wild west – maybe you need to start acting as the sheriff of your own town.

Tracey Dowdy is a freelance writer based just outside Washington DC. After years working for non-profits and charities, she now freelances, edits and researches on subjects ranging from family and education to history and trends in technology. Follow Tracey on Twitter.

 

Change Your Default Privacy Settings

By Tracey Dowdy 

In a recent article, Washington Post technology columnist Geoffrey A. Fowler asked, “It’s the middle of the night. Do you know who your iPhone is talking to?”

In the story, Fowler outlines a problem most iPhone users aren’t even aware of: the volume of data mining that occurs while you – and your phone – are asleep. “On a recent Monday night, a dozen marketing companies, research firms and other personal data guzzlers got reports from my iPhone. At 11:43 p.m., a company called Amplitude learned my phone number, email and exact location. At 3:58 a.m., another called Appboy got a digital fingerprint of my phone. At 6:25 a.m., a tracker called Demdex received a way to identify my phone and sent back a list of other trackers to pair up with.

“And all night long, there was some startling behavior by a household name: Yelp. It was receiving a message that included my IP address – once every five minutes,” Fowler says.

Data mining is nothing new, but it’s a growing problem. Though Apple claimed in a recent ad that “What happens on your iPhone stays on your iPhone,” Fowler’s investigation proves that’s far from the truth. Part of the problem is our own fault: as Charles Arthur points out, 95% of us never change the default settings on our devices, and how many of us take the time to read privacy policy updates? It’s the Rule of Defaults. We’re just too lazy to try and Scooby-Doo the mystery.

Fowler published an excellent article last June that maps out how to start setting boundaries on all the information we willingly hemorrhage into the ether via everything from our smartphones, laptops, tablets, and smartwatches to smart home devices like Alexa and the Nest doorbell.

If you’re wondering whether it’s worth the trouble to dive into the deep end and change those default settings, consider how much of that data sharing happens simply because you left the defaults alone.

Fowler calls his suggestions “small acts of resistance,” and if The Handmaid’s Tale has taught us anything, it’s that small acts of resistance are critically important. Blessed be.

Tracey Dowdy is a freelance writer based just outside Washington DC. After years working for non-profits and charities, she now freelances, edits and researches on subjects ranging from family and education to history and trends in technology. Follow Tracey on Twitter.

Reporting Cyber-Abuse on Social Media

By Tracey Dowdy

For as long as there has been life on the planet, there have been those who find pleasure in tormenting others or who demonstrate their perceived authority by denigrating those they see as weak or vulnerable. With the advent of social media, those abusive behaviors have moved from the real world to the digital one, and it’s become nearly impossible for victims to escape. Through social media, the bullying follows you into the privacy of your home, making it seem like there are no safe places left.

According to DoSomething.org, nearly 43% of kids have been bullied online, and 1 in 4 have experienced it more than once, yet only 1 in 10 victims will tell a parent or trusted adult about the abuse. A study by the Universities of Oxford, Swansea, and Birmingham found that youth who have been cyberbullied are twice as likely to self-harm or attempt suicide as their non-bullied peers. Unfortunately, when those bullies grow up, they often continue the behavior: Pew Research Center found that 73% of adults say they’ve witnessed online harassment, and 40% report being the target themselves. It’s not just individuals being bullied, either. Hate groups often use platforms like Facebook and Twitter to disseminate their message, and as a result, online hate speech often incites real-world violence.

The message “If you see something, say something” is more than a catchy slogan. If you see abusive or hate-fueled messages and images online, it’s your responsibility to report them. Here’s how to report offensive content on the major platforms.

Twitter clearly maps out how to report abusive behavior. You can include multiple Tweets in your report, which provides context and may help get the content removed more quickly. If you receive a direct threat, Twitter recommends contacting local law enforcement; they can assess the validity of the threat and take the appropriate action. For reports of violent threats, you can get a copy of your report to share with law enforcement by clicking Email report on the “We have received your report” screen.

Facebook also has clear instructions on how to report abusive posts, photos, comments, or messages, and how to report someone who has threatened you. Reporting doesn’t mean the content will automatically be removed; it has to violate Facebook’s Community Standards, and offensive doesn’t necessarily equate to abusive.

You can report inappropriate Instagram posts, comments, or accounts that don’t follow the Community Guidelines or Terms of Use.

Users can report abuse, spam, or any other content that doesn’t follow TikTok’s Community Guidelines from within the app.

According to Snapchat support, they review every report, often within 24 hours.

If you or someone close to you is the victim of harassment or bullying, you have options. If the abuse is online, submit a report as soon as you see the content. If it’s in the real world, take it to school administrators, Human Resources, or the police, particularly if there is a direct threat to your safety.

Finally, if you’re having suicidal thoughts due to bullying or for any other reason, contact the National Suicide Prevention Lifeline online or call 1-800-273-8255 for help.

Tracey Dowdy is a freelance writer based just outside Washington DC. After years working for non-profits and charities, she now freelances, edits and researches on subjects ranging from family and education to history and trends in technology. Follow Tracey on Twitter.