However, I feel we are focused too much on the downsides here. It has become popular to dwell on the problems when there is evidently a useful tool here, one that is simply being exploited by greedy companies and radicalised groups. So, where did it all go wrong?
This is not a simple question to answer, as should be immediately obvious. Though it is difficult to narrow down the issues, I feel the problem lies in motivation. Social media companies earn their revenue from advertising; this much is common knowledge nowadays. The more time a user spends on an app, the more advertisements can be shown to them, and the more money the platform earns, since advertisers pay per view or impression of an ad. We can therefore conclude that it is in a platform's best interest to keep the user on the app as long as possible, maximising profit without a single thought for the person behind the screen.
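To make that incentive concrete, here is a deliberately simple sketch in Python. Every figure in it (the ad load, the payment per thousand impressions, the user counts) is a hypothetical assumption of mine, but the shape of the incentive is real: revenue scales with minutes spent, so minutes spent become the target.

```python
# Toy model of the ad-revenue incentive. All figures are hypothetical.
# Revenue scales with impressions, and impressions scale with time on app,
# so anything that increases session length increases revenue.

ADS_PER_MINUTE = 1.5   # assumed ad load: ads shown per minute of use
CPM = 4.00             # assumed payment per 1,000 impressions (USD)

def daily_revenue(users: int, minutes_per_user: float) -> float:
    """Estimated daily ad revenue for a platform under this toy model."""
    impressions = users * minutes_per_user * ADS_PER_MINUTE
    return impressions / 1000 * CPM

# Nudging average session length from 30 to 45 minutes raises revenue by 50%
# without gaining a single user - hence the focus on time spent.
print(daily_revenue(1_000_000, 30))  # 180000.0
print(daily_revenue(1_000_000, 45))  # 270000.0
```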
This intention is not unknown, though it seems to evade the public eye despite essays and books such as the works of Dr Rediet Abebe, Daron Acemoglu, and many others who draw attention to the uses of AI in modern society. We can date warnings of such greed back to 1976, when Joseph Weizenbaum, a revolutionary computer scientist, drew our attention to the fact that the issues regarding AI will forever be ethical and rarely mathematical. As he stated, algorithms have “no compassion or wisdom” and thus should be kept out of issues where these qualities are required. Machine learning does as it is instructed, and it is the morals of the creator that dictate those instructions; the algorithms cannot be blamed for how they act, so the motives of said creator must take that blame.
Such motives have led the algorithms powering social media sites to become dangerous, causing the issues we so commonly see in the media. Radicalisation, addiction and mental health problems are prevalent among today's youth because the algorithms are designed to maximise our time online. This leads to harmful spirals, with individuals falling down “rabbit holes” of content that may at first seem innocent and harmless yet can easily turn sour if the machine lumps them in with the wrong crowd; after all, it is but some code on a server and cannot truly know what it is showing to people. Statistics from California State University suggest around 10% of Americans experience some form of social media addiction, and we can assume the effect is common across other internet-frequenting countries. Facebook's own statistics revealed that over 64% of new members of radicalised groups were led there by social media recommendations.
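To see how such a spiral can emerge from code that has no notion of meaning, consider this toy sketch. It is emphatically not any platform's real system; it is an assumed model of a feed that greedily shows more of whatever has been engaged with most, paired with a user who happens to linger longest on one provocative category.

```python
import random

# A toy engagement-maximising feed, illustrating the "rabbit hole" feedback
# loop described above. This is NOT a real recommender - just an assumed
# model of greedy optimisation for engagement.

CATEGORIES = ["sport", "cooking", "politics", "conspiracy"]

def simulate(steps: int = 30, seed: int = 1) -> list[str]:
    random.seed(seed)
    scores = {c: 1.0 for c in CATEGORIES}  # running engagement per category
    shown = []
    for _ in range(steps):
        # Mostly exploit the top-scoring category, occasionally explore.
        if random.random() < 0.2:
            choice = random.choice(CATEGORIES)
        else:
            choice = max(scores, key=scores.get)
        shown.append(choice)
        # Assumption: this user happens to linger on provocative content,
        # so each view of it boosts its score more than anything else.
        engagement = 2.0 if choice == "conspiracy" else random.random()
        scores[choice] += engagement
    return shown

# Once "conspiracy" has been shown even once, its score outpaces the rest
# and the feed largely collapses onto it - the code never knows what it is
# recommending, only that it keeps the user watching.
print(simulate())
```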
The algorithms have not always been this way. In the earlier days of social media, before the sites became the massive corporations they are today, the likes of YouTube and Facebook prioritised user-chosen content, showing you what you came to see rather than a machine-learning-powered endless stream of things it thinks you will like. However, the growing popularity of such sites through the late 2000s and 2010s led large companies to profit off of our social interactions, taking us to the point we have reached today.
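The difference between those two eras can be sketched in a few lines. The field names and scoring rule below are illustrative assumptions rather than any real system, but they capture the shift: the early feed sorted by time, the modern one by a model's prediction of what will hold your attention.

```python
from dataclasses import dataclass

# Sketch of the shift described above: a chronological feed (what the people
# you follow posted, newest first) versus an engagement-ranked feed (whatever
# a model predicts you will linger on). Hypothetical fields, not a real API.

@dataclass
class Post:
    author: str
    timestamp: int                # e.g. seconds since epoch
    predicted_engagement: float   # a model's guess at how long you'll linger

def chronological_feed(posts: list[Post]) -> list[Post]:
    """Early-era feed: newest posts from your network first."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_feed(posts: list[Post]) -> list[Post]:
    """Modern feed: whatever is predicted to hold your attention longest."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

posts = [
    Post("friend_a", timestamp=100, predicted_engagement=0.2),
    Post("stranger", timestamp=50, predicted_engagement=0.9),
]
print([p.author for p in chronological_feed(posts)])  # ['friend_a', 'stranger']
print([p.author for p in engagement_feed(posts)])     # ['stranger', 'friend_a']
```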
“What can be done?” you may ask. Sure, this all looks rather bleak, but I assure you all is not as evil as it seems. Whilst it is not ideal that the owners of social media companies are so profit-focused, that is somewhat inevitable: adverts pay to keep the sites and apps online, allow social media to remain “free” (though you are paying with your information), and give smaller businesses great opportunities to spread word of themselves. What we need, however, is for these companies to return, in some ways, to the original intention of social media: an extension of our nature to socialise.
A balance must be struck between profit and goodwill of sorts, and that balance is already being worked towards. Companies such as Meta are starting to be held accountable for their more harmful actions, and mental health is starting to be prioritised: TikTok now offers controls to limit screen time on the app, and Instagram is starting to allow more personal control over the type of content you see.
Whilst these actions do not yet solve the problems, they are small steps in the right direction: towards a brighter future wherein social media is mutually beneficial for all, and the companies behind it are held fully accountable for its effects.
By Sam Balderstone for the SHOO Academy