
Lifestyle

Shades Of The Black Skin: Meet Model Rama De Jesus


Rama De Jesus

They Said We Could Not Be Successful Due To Our Skin Colour! Africa We Shall Rise

Shades of the Black Skin, under the theme “Am Confident In My Own Skin”, is a project that aims to educate and create awareness about the appreciation of our individual skin shades.

History has kept us cooped up in the ideology that we, as African females, cannot break the glass ceiling and succeed, owing to various factors. One factor in particular, we noticed, was our skin.

Our mission is to break such societal norms and to raise and cultivate young female leaders who are confident in their skin.

By helping women find confidence in their natural skin, learn how to treat it, and maintain their shades, this project has been put together to educate our generation that, no matter your skin shade, you are beautiful and can achieve whatever you put your mind to without hindrance. We also used the medium as a platform to emphasise that no one shade is superior to another. We are all one, and we can change the narrative just the way we are!

Introducing Rama De Jesus, born Ramatu Issah. Born in Awaso in the Western Region, she hails from Tafo in Kumasi and is a student of the University of Education, Winneba, where she is pursuing a B.A. in English Education.

She is the first born of her mother and has two younger siblings. She is a model, actress and entertainment personality. She believes self-confidence counts for a lot in everything we do, and she aspires to become one of Forbes’ most influential women in the entertainment industry in Africa.

She is currently running a project on skin shades in Africa: “Shades of the Black Skin”, under the theme “Am Confident In My Skin”.

The vision behind this project is to educate people on the different skin shades we have as black people: how to take good care of our skin to avoid cancer, and how to apply the right foundation to get the best out of our skin. In doing so, we learn to love our skin and feel confident in our skin and in ourselves as a whole.

The whole project is to help young people, mostly women, be confident in themselves.

Project by Rama De Jesus

Pioneered by De’Legacy Group

Follow them on the handles below!

Instagram – @amconfidentinskin

@rama.dejesus

@delegacygroup

Facebook – Am Confident In My Skin


Kubidyza is a Global Celebrity Blogger, Music Promoter and a Social Media Influencer | Most Influential Blogger In Ghana For Bookings: Kubinho80@gmail.com

Lifestyle

Mobile phone use after 10pm likely to lead to depression, loneliness: study


A study of more than 91,000 people has found that scrolling through your Instagram and Twitter feeds from the comfort of your pillow in the wee hours could increase the likelihood of developing a number of psychological problems such as depression, bipolar disorder, and neuroticism.

Late-night phone usage is just one of the disruptive behaviors pointed out by the researchers, who attributed the links to the aforementioned symptoms to body-clock disturbance.

This study, published in The Lancet Psychiatry by professors at The University of Glasgow, is the first to monitor body clock disruption on such a large scale.

Participants aged 37 to 73 had their activity levels monitored by wrist-worn accelerometers, which they wore for a seven-day period, enabling researchers to measure the extent to which their circadian rhythmicity was disturbed during this time.

However, there were caveats to the findings, given that participants were only monitored for a week and were exclusively middle-aged and above.

Participants with disrupted body clocks were 11 percent more likely to have bipolar disorder and six percent more likely to be battling depression, the researchers found.

They also reported lower happiness levels and greater rates of loneliness.

Such people suffer from “very poor sleep hygiene”, said lead author Daniel Smith of the University of Glasgow and would engage in late night activities such as playing on their mobile phones or making cups of tea.

While Smith advocated imposing a 10 pm limit to phone usage to help combat this, he added that daytime activities have a part to play too, explaining that a healthy sleep pattern is often the result of being active during the day and inactive at night.


Lifestyle

Nike’s Newest Sneaker Is on Fire, and These Just-Released Colors Are the Best Yet | SEE


Nike

Nike has released new colorway after new colorway since it dropped the Air Max 270 a couple of months back. But the Flyknit variation, which comes with a larger Swoosh that sits nearer to the front of the shoe, hadn’t received such love until today. Nike’s Air Max 270 Flyknit now comes in plum/white and black/white options, both far easier to fold into everyday outfits than what was previously available (orange/blue/white and yellow/orange/blue). And given that the Flyknit is far more breathable than the mesh of the standard 270, it makes for a much better everyday summer shoe.

The first Air Max was released in 1987 and was among Nike’s first purely lifestyle sneakers, and the 270 is no different: it’s a cushy, chunky shoe that should not be worn for running any significant mileage, lifting weights or any other agility-based activity. What the shoe lacks in sport and athletic function, it makes up for in comfortable everyday wear. The 270 in its name refers to the 270-degree heel cushion that provides a nice landing pad for each stride. Though the 270 wasn’t technically the first Air Max with a 270-degree cushion (that title belongs to the Air Max 93, which also received two new color updates), it is the biggest by way of height.

The plum/white and black/white colorways are available now on Nike’s site.



Lifestyle

Facebook closed 583m fake accounts in first three months of 2018


Facebook took moderation action against almost 1.5bn accounts and posts which violated its community standards in the first three months of 2018, the company has revealed.

In its first quarterly Community Standards Enforcement Report, Facebook said the overwhelming majority of moderation action was against spam posts and fake accounts: it took action on 837m pieces of spam and shut down a further 583m fake accounts on the site in the three months. But Facebook also moderated 2.5m pieces of hate speech, 1.9m pieces of terrorist propaganda, 3.4m pieces of graphic violence and 21m pieces of content featuring adult nudity and sexual activity.

“This is the start of the journey and not the end of the journey and we’re trying to be as open as we can,” said Richard Allan, Facebook’s vice-president of public policy for Europe, the Middle East, and Africa.

The amount of content moderated by Facebook is influenced by both the company’s ability to find and act on infringing material and the sheer quantity of items posted by users. For instance, Alex Schultz, the company’s vice-president of data analytics, said the amount of content moderated for graphic violence almost tripled quarter-on-quarter.

One hypothesis for the increase, Schultz said, is that “in [the most recent quarter], some bad stuff happened in Syria. Often when there’s real bad stuff in the world, lots of that stuff makes it on to Facebook.” He emphasized that much of the moderation in those cases was “simply marking something as disturbing”.

Several categories of violating content outlined in Facebook’s moderation guidelines – including child sexual exploitation imagery, revenge porn, credible violence, suicidal posts, bullying, harassment, privacy breaches and copyright infringement – are not included in the report.

On child exploitation imagery, Schultz said that the company still needed to make decisions about how to categorize different grades of content, for example, cartoon child exploitation images.

“We’re much more focused in this space on protecting the kids than figuring out exactly what categorization we’re going to release in the external report,” he said.

Facebook also managed to increase the amount of content taken down with new AI-based tools which it used to find and moderate content without needing individual users to flag it as suspicious. Those tools worked particularly well for content such as fake accounts and spam: the company said it managed to use the tools to find 98.5% of the fake accounts it shut down, and “nearly 100%” of the spam.

Automatic flagging worked well for finding instances of nudity, since, Schultz said, it was easy for image recognition technology to know what to look for. Harder, because of the need to take contextual clues into account, was moderation for hate speech. In that category, Facebook said, “we found and flagged around 38% of the content we subsequently took action on, before users reported it to us”.

Facebook has made moves to improve transparency in recent months. In April, the company released a public version of its guidelines for what is and is not allowed on the site – a year after the Guardian revealed Facebook’s secret rules for content moderation.

The company also announced measures that require political advertisers to undergo an authentication process and reveal their affiliation alongside their advertisements.

Facebook’s moderation figures come a week after the release of the Santa Clara Principles, an attempt to write a guidebook for how large platforms should moderate content. The principles state that social networks should publish the number of posts they remove, provide detailed information for users whose content is deleted explaining why, and offer the chance to appeal against the decision.

“This is a great first step,” said Jillian York from the Electronic Frontier Foundation. “However, we don’t have a sense of how many incorrect takedowns happen – how many appeals that result in content being restored. We’d also like to see better messaging to users when an action has been taken on their account, so they know the specific violation.”

Facebook isn’t the only platform taking steps towards transparency. Last month YouTube revealed it removed 8.3m videos for breaching its community guidelines between October and December.

“I believe this is a direct response to the pressure they have been under for several years from different stakeholders [including civil society groups and academics],” said York.

