
For Edem Fairre, making a fashion statement isn’t coincidental; it’s a way of life. The young model and TV personality, recently announced as the brand ambassador for Woodin’s new collection, Connexion de Woodin, never shies away from a confident or unconventional choice. And lately, we’ve been taking note more than ever before.

It’s clear that Fairre is killing it on the carpet, and really, that shouldn’t come as much of a surprise. The Woodin Ghana brand ambassador is a timeless beauty who is rare to find these days. Her charm, intelligence, and natural poise make her one of a kind.

Be it a red carpet appearance or any other function, she always manages to make us gasp with her style. Here, we’ve rounded up some of her most exciting red carpet outfits to date; see the photos below.




Mobile phone use after 10pm likely to lead to depression, loneliness: study



A study of more than 91,000 people has found that scrolling through your Instagram and Twitter feeds from the comfort of your pillow in the wee hours could increase the likelihood of developing a number of psychological problems such as depression, bipolar disorder, and neuroticism.

Late-night phone use is just one of the disruptive behaviors pointed out by the researchers, who attributed the links to the aforementioned symptoms to body clock disturbance.

This study, published in The Lancet Psychiatry by professors at The University of Glasgow, is the first to monitor body clock disruption on such a large scale.

Participants aged 37 to 73 had their activity levels monitored by wrist-worn accelerometers, which they wore for a seven-day period, enabling researchers to measure the extent to which their circadian rhythmicity was disturbed during this time.

Participants whose circadian rhythms were most disrupted were 11 percent more likely to have bipolar disorder and six percent more likely to be battling depression, the researchers found. They also reported lower happiness levels and greater rates of loneliness.

However, there were caveats to the findings, given that participants were only monitored for a week and were exclusively middle-aged and above.

Such people suffer from “very poor sleep hygiene”, said lead author Daniel Smith of the University of Glasgow, and tend to engage in late-night activities such as playing on their mobile phones or making cups of tea.

While Smith advocated imposing a 10pm limit on phone use to help combat this, he added that daytime activities have a part to play too, explaining that a healthy sleep pattern is often the result of being active during the day and inactive at night.



Nike’s Newest Sneaker Is on Fire, and These Just-Released Colors Are the Best Yet




Nike has released new colorway after new colorway since it dropped the Air Max 270 a couple of months back. But the Flyknit variation, which comes with a larger Swoosh that sits nearer the front of the shoe, hasn’t received such love until… today. Nike’s Air Max 270 Flyknit now comes in plum/white and black/white options, both far easier to fold into everyday outfits than what was previously available (orange/blue/white and yellow/orange/blue). And given that the Flyknit is far more breathable than the mesh of the standard 270, it makes for a much better everyday summer shoe.

The first Air Max was released in 1987 and was among Nike’s first purely lifestyle sneakers, and the 270 is no different: it’s a cushy, chunky shoe that shouldn’t be worn for running any significant mileage, lifting weights or any other agility-based activity. What the shoe lacks in sport and athletic function, it makes up for in comfortable everyday wear. The 270 in its name refers to the 270-degree heel cushion that provides a nice landing pad for each stride. Though the 270 wasn’t technically the first Air Max with a 270-degree cushion (that title belongs to the Air Max 93, which also received two new color updates), it is the biggest by way of height.

The plum/white and black/white colorways are available now on Nike’s site.






Facebook closed 583m fake accounts in first three months of 2018



Facebook took moderation action against almost 1.5bn accounts and posts which violated its community standards in the first three months of 2018, the company has revealed.

In its first quarterly Community Standards Enforcement Report, Facebook said the overwhelming majority of moderation action was against spam posts and fake accounts: it took action on 837m pieces of spam and shut down a further 583m fake accounts on the site in the three months. But Facebook also moderated 2.5m pieces of hate speech, 1.9m pieces of terrorist propaganda, 3.4m pieces of graphic violence and 21m pieces of content featuring adult nudity and sexual activity.

“This is the start of the journey and not the end of the journey and we’re trying to be as open as we can,” said Richard Allan, Facebook’s vice-president of public policy for Europe, the Middle East, and Africa.

The amount of content moderated by Facebook is influenced by both the company’s ability to find and act on infringing material and the sheer quantity of items posted by users. For instance, Alex Schultz, the company’s vice-president of data analytics, said the amount of content moderated for graphic violence almost tripled quarter-on-quarter.

One hypothesis for the increase, Schultz said, is that “in [the most recent quarter], some bad stuff happened in Syria. Often when there’s real bad stuff in the world, lots of that stuff makes it on to Facebook.” He emphasized that much of the moderation in those cases was “simply marking something as disturbing”.

Several categories of violating content outlined in Facebook’s moderation guidelines – including child sexual exploitation imagery, revenge porn, credible violence, suicidal posts, bullying, harassment, privacy breaches and copyright infringement – are not included in the report.

On child exploitation imagery, Schultz said that the company still needed to make decisions about how to categorize different grades of content, for example, cartoon child exploitation images.

“We’re much more focused in this space on protecting the kids than figuring out exactly what categorization we’re going to release in the external report,” he said.

Facebook also managed to increase the amount of content taken down with new AI-based tools which it used to find and moderate content without needing individual users to flag it as suspicious. Those tools worked particularly well for content such as fake accounts and spam: the company said it managed to use the tools to find 98.5% of the fake accounts it shut down, and “nearly 100%” of the spam.

Automatic flagging worked well for finding instances of nudity, since, Schultz said, it was easy for image recognition technology to know what to look for. Harder, because of the need to take contextual clues into account, was moderation for hate speech. In that category, Facebook said, “we found and flagged around 38% of the content we subsequently took action on, before users reported it to us”.

Facebook has made moves to improve transparency in recent months. In April, the company released a public version of its guidelines for what is and is not allowed on the site – a year after the Guardian revealed Facebook’s secret rules for content moderation.

The company also announced measures that require political advertisers to undergo an authentication process and reveal their affiliation alongside their advertisements.

Facebook’s moderation figures come a week after the release of the Santa Clara Principles, an attempt to write a guidebook for how large platforms should moderate content. The principles state that social networks should publish the number of posts they remove, provide detailed information for users whose content is deleted explaining why, and offer the chance to appeal against the decision.

“This is a great first step,” said Jillian York from the Electronic Frontier Foundation. “However, we don’t have a sense of how many incorrect takedowns happen, or how many appeals result in content being restored. We’d also like to see better messaging to users when an action has been taken on their account, so they know the specific violation.”

Facebook isn’t the only platform taking steps towards transparency. Last month YouTube revealed it removed 8.3m videos for breaching its community guidelines between October and December.

“I believe this is a direct response to the pressure they have been under for several years from different stakeholders [including civil society groups and academics],” said York.
