I have reached decision fatigue.
It starts in the morning. I stare at my closet and wonder what to wear. I’m not a good dresser. As long as clothes don’t clash, I’m good. My personal style? My zipper isn’t down. That is the extent of it. Steve Jobs was famous for wearing the same clothes every day. That brief mental reprieve doesn’t feel like much, but we make thousands of decisions daily, consciously and unconsciously. Every decision you don’t have to make, no matter how minute, is like a mental nap.
Part of this is the democratization of technology and the seemingly infinite choices it creates. There is a glut of products and services. And many brands for each. Unless a business is majorly capital intensive, there is no friction in starting a company. For most of human civilization, creating products was extremely challenging. The raw materials were hard to source. Craftsmen had to build the end product by hand. The problem was never demand, but supply. The Industrial Age, with sophisticated global supply chains supported by shipping lanes protected by navies, improved logistics, and technology, fixed all that. Want to create a new USB-C phone charger? Design it, send it over to China, and they will ship back as many containers as you want. Software? Look at the number of projects that show up on Product Hunt every single day.
A decision is a commitment. When presented with many alternatives, committing is a challenge. Anecdotally, I see this with myself and friends. We spend an excessive amount of time scrolling through Netflix versus watching. The behavior is understandable. When you log into any Netflix-like platform, you are presented with an infinite scroll of choices. “Do I really want to watch this? It’s only recommended to me at 92%. What if there is something better?” Even with algorithmic recommendations, do you want to watch a show that might be canceled soon, or watch another movie with a plot similar to what you liked before? Mind-numbing scrolling is relaxing, almost like another mental nap.
Yesterday, I was getting ready to go listen to a talk. I pulled up the confirmation email to find the address, suite number, and parking info. In the arrival instructions it stated that the doors to the suite would be locked, so text them on Telegram, and someone will come and open the door. Now, I have nothing against Telegram, but it’s not on my phone. I stared at the email. How badly did I want to go? I have to install an app just to go to this event. I already have iMessages, Facebook Messenger, WhatsApp, Google Hangouts, Skype, and Signal.
This fatigue also comes from our choices in terms of what and how we want to engage. Last night we went to an Aziz Ansari show. This is how one of his bits went:
AA: You guys see the picture on social media where that woman got a pizza delivered, and the pepperoni was arranged in the shape of a swastika? How many of you thought it looked like a swastika?
(Quarter of the crowd hoots and hollers)
AA: How many of you thought it looked like a regular pizza?
(A different quarter hoots and hollers)
(Then Aziz stares out at the crowd)
AA: What the hell are you guys talking about?!? I just made that up!
His point was that we are required to have hot-takes about everything. No matter how informed or uninformed we are, we must make a decision, we must have an opinion.
AA: The half of you who didn’t clap aren’t off the hook. Because you all were thinking how the hell didn’t I know about this?!?
He continues talking about how we used to get news. You read the paper in the morning. Went on with your day. Maybe learned and discussed issues a bit more at the water-cooler. You watched the evening news.
A person could process information over time.
That is no longer possible. We are reading or viewing news while still in bed every morning. We are refreshing our feeds throughout the day. The hot-takes are immediate and constant, online and on 24-hour news channels.
The Internet and media landscape changed how and what we are supposed to know. Before, learning about many different things was difficult since we didn’t have access to all the knowledge. Socially and in the business world, not knowing is no longer an option. Just because a lot of information is at our fingertips doesn’t mean we should be required to have an opinion about everything. Yet nowadays, it’s not acceptable to say “I don’t know.”
I’m balding. Well, not balding, but my hair is thinning. This was bound to happen. I see heredity catching up to me.
Every barber that has cut my hair in the last 5 years has told me to do something about it. Usually, they bring around whatever product their chain is incentivized to sell. Nothing wrong with that, but I’m totally clueless about haircare and thought I should do research.
First, I hopped on Amazon to look at the reviews of the product the last barber recommended to me. Between the different sizes and combinations of shampoo, conditioner, and “scalp treatment,” they averaged 3.5 to 4.5 stars. Then I went to fakespot.com to see if the company had a history of BS reviews. Nope, looks good.
Thinking that I’m somewhat intelligent and should do a modicum of research since I’m putting this gunk in my hair, I decided to Google and find out what ingredients I should be looking for, what alternative products have those ingredients, and if the product recommended by the barber had those ingredients.
Obviously, there are many review sites. You know, everyone has to be in the content business nowadays. Most had long posts about the science, pros and cons, and of course, an Amazon affiliate link. Nothing wrong with the affiliate link, publishers need to get paid somehow. But there was no way, for me as a reader, to evaluate if the science and list of pros and cons were legit. Not to mention the reviews weren’t based on trying all the different products themselves. There is also the variable that such products may or may not work depending on the individual and their body chemistry. So, it’s a crapshoot.
I don’t even buy my own shampoo. My wife does. The parameters: nothing flowery smelling. Beyond that, I don’t care. She gets me Dove’s men’s shampoo and body wash, repeatedly, because I don’t complain about it. That’s the extent of her rationale for buying it. Calling it “buying” is a stretch. Buying implies a conscious choice. The Dove products are on our monthly Amazon Subscribe & Save plan. She made the decision once, and since I haven’t complained, they just show up every month, and I keep using them.
The wife had seen TV ads for the brand the barber recommended. She didn’t know much about it, but it gave the brand some credibility.
And I had no interest in figuring it all out.
So, we are left with:
An overwhelming amount of unverified content (lack of trust) + decent Amazon reviews for incentivized recommended product + lack of interest = purchase of the incentivized recommended product.
That is part of building a brand: the mental shortcut so we don’t have to think. Instead, many companies are focused on the direct-marketing aspect, which requires active decision-making.
A few years ago, the repair costs on my car were getting too high. I kept avoiding the process of researching new cars. I wanted to avoid the dealerships like the plague. At the same time, my wife wanted a bigger car because our son was going to be born soon. I took her car, and she got a new car.
I was saved.
There is a reason platforms move to algorithms for feeds and recommendations. There is just too much. Of everything.
Between our own lack of self-control and the use of dark patterns (that are easier to test and execute nowadays), how does this problem get better?
What happens when VR/AR/holograms get to the point where we can ignore most of our analog lives? Is the scene from Wall-E too far off?
What happens when every action in our virtual worlds becomes a micro-transaction (like in mobile video games today)?
What happens when our work environment looks like this?
How will we cope?
I came back recently from 2 weeks in Thailand. I say Thailand, but 93% of it was spent in Bangkok. The trip was fun, but tiring for a 3-year-old boy in 100% humidity and 95-degree temperatures.
We saw lots of temples and Buddhas. My takeaway from the trip was that I’m not sure we as a world are capable of creating things that will stand the test of time. We make so much more, but as a ratio of day-to-day stuff versus the awe-inspiring, we are headed in the wrong direction. Maybe that isn’t true. Biology demands quantity to achieve quality. We are programmed to procreate so the most adaptable will survive. Our genes constantly mutate to produce as many permutations as possible. It’s possible that, because we can see and have access to all the junk today, quality seems few and far between.
But tell me, what is the equivalent in our time that comes close to matching the brilliance in design and detail to the above?
The wife has a cousin who moved to Bangkok a few years back. He got married, settled down and now has a couple of kids. His wife was a reporter. Once they had kids, she decided that the hours required weren’t conducive to raising children. So, she got a job in external communications for Facebook. That piqued my interest. I asked what her job entailed. She said she liked her job because she could interact with former colleagues. Their job was to ask for Facebook’s position on different matters. For example, why Facebook wouldn’t take down posts that encouraged genocide.
Now, throughout history, companies have always done business with the bad guys. Companies like IBM and General Motors, through subsidiaries, profited massively from working with the Nazis. Of course, Facebook is not itself encouraging genocide, but even sites like Breitbart would actively remove comments espousing those types of views. Facebook’s problem is that with the sheer volume of postings on Facebook, Instagram, and WhatsApp, no matter how much AI or how many human moderators they throw at it, there is no feasible way to take down all the terrible content. If Facebook removes the News Feed and transitions to ephemeral, encrypted one-to-one or group communications, will the situation get any better?
In Thailand, for the first time in my life, I experienced selfie-culture on a grand scale. The number of selfie sticks and posing in front of 700-year-old Buddhas was ridiculous. Organically, lines formed at the best places to pose for photos. The guide discussing how this Buddha was moved here when the Burmese attacked 300 years ago, or how Thailand is the only Southeast Asian country that has never been colonized, just takes time away from getting that perfect group selfie shot.
Eugene Wei, a former Facebook employee, has an excellent essay on why social platforms have been successful. Basically, social platforms are a new way of accruing social capital. If you are a Gen X’er or older, more than likely, you have built up enough social capital. You’ve been plugging away at jobs and are middle management (or will be soon), you are married with kids, have a house, neighbors, and friends. Your path is more or less clear. Social platforms then are really a tool for communications (generally what they were intended for in the first place). But if you are a millennial or Gen Z’er, your support network isn’t as strong and you are probably financially hamstrung. There is a high probability that you are under-employed and don’t quite have that predictable future path forward. The way to create social capital is to present a curated version of “living your best life” nonsense. And this works because youngins have one thing that older generations don’t: time.
An example of generating social capital: while standing in line to board the BTS (Bangkok’s train system), there were 3 professionally dressed young women in front of us. Glancing, nonchalantly and non-creepily, at their phones, I saw that all three had uploaded short videos showing the stage of a panel discussion from a conference they had probably just attended. They were scrolling and clicking furiously through the list of people who had viewed and liked their videos. There was nothing educational or informative about the videos. The point was to show off that they had been to a conference and had exciting careers. Social capital generated one photo at a time, one video clip at a time.
On to a darker subject. As a technology problem, I don’t know how Facebook can deal with bad guys live-streaming their stupid and evil actions. The algorithms that surface “interesting” and “engaging” content from friends also surface videos of evil people doing evil things. Programmatically creating exceptions for edge cases (and given the volume of material posted on Facebook, these are still relatively small numbers) has always been a sheer nightmare for anyone who has designed systems or written code. QA’ing such outliers is even tougher. And that is what we are asking Facebook to do.
Theoretically, Facebook will try to filter out and remove more hate-related content. I don’t have high hopes.
Of course, the question becomes: why weren’t previous technologies criticized for enabling similar bad actors? Bank robbers used to plan using landlines. The Internet changed the level of connectedness. One person cannot easily call 500 friends to share news. The network effects were limited by what we could do physically. One could share Nazi propaganda, but there was a limit on how many pamphlets one could print and how quickly they could be disseminated.
That friction is gone.
There are 2 obvious ways the situation could be fixed. First, as users of social platforms, we could stop viewing or engaging with offending content. But as humans, we lack the self-control to do this, hence all the click-bait. Second, Facebook could add a ‘Dislike’ button. Unfortunately, this may never be implemented, since Facebook would have to address it with advertisers. The assumption would be that since YouTube has a dislike button, Facebook could implement one as well. But YouTube has had it since the beginning, so its users and advertisers are comfortable with it, whereas Facebook fears adding downvotes on top of the bad PR it is constantly hammered with already.
This past weekend, my mother-in-law was telling us which photos our relatives liked from our Thailand trip. She had been sharing the pictures on WhatsApp with relatives all over the world. I mumbled that I didn’t really want our photos on Facebook. She responded that she was using WhatsApp, not Facebook. When I told her that Facebook owns WhatsApp, she shrugged. How do you tell others not to share bits and pieces of you without your permission? I may have given a friend or colleague my phone number and email address, and I may have shared photos with them. I didn’t consent to him or her uploading it all to Facebook, Twitter, or LinkedIn. We have no laws for this kind of stuff. We never needed them because the technology didn’t exist to process, identify, make connections, aggregate data, and build profiles in real-time, continuously. The problem isn’t just that we have given up our own privacy; it’s that our friends and families are violating our privacy unknowingly, and none of us are adequately able to converse with them about it.
The following thoughts have rattled around in my head for a few years now. Some of us in the marketing space don’t feel right doing digital advertising. The lack of transparency in advertising platforms is mind-boggling. I finally decided that our company needs to stop doing digital advertising for clients. We have spent too many hours talking to clients about YouTube and brand safety, too many hours discussing the need to exclude hate sites from Google AdWords. The underbelly of digital advertising platforms gets worse and worse every year.
This isn’t without its own tribulations. Convincing current and potential clients that we will do the marketing plan and strategy, messaging, voice and tone, visuals, and creative concepts, but not create online ads is a tricky sell. Especially when every marketing agency is knocking on their doors promising expertise in all parts of the marketing stack.
Over time we have transitioned most of our clients to use other agencies specifically for digital advertising, and we are working on moving the rest over this year.
It’s the right path for us.
2018 has been particularly busy here. Besides client work, my time has gotten sucked up by more AI courses and a couple of blockchain ones. I started a VR set of classes but just had no time for it.
My social media activity has suffered. I barely look at Twitter nowadays. It’s a function of the people I follow no longer interesting me, or them sharing articles that 10,000 others have retweeted before, or regurgitating nonsense from the marketing/advertising/technology echo chamber like it is gospel. Frankly, people who use Twitter mainly for work end up being boring (yes, that is a broad generalization). Not that I blame anyone; I do the same thing. Theoretically, I should prune who I follow and find new people to follow, but that requires effort I’m not willing to commit. Instead, I created a second Twitter account that is not work-focused at all. Not that I tweet much from it, but it is way more entertaining.
I got rid of my Facebook account a couple of years ago but created another one with my work email address for the sole purpose of scanning the metrics of ads we run for the clients.
I have mixed feelings about Facebook. Everyone I know is majorly invested in Facebook, especially private Facebook Groups. Some publications have set up private groups so subscribers can interact with journalists without being trolled. No other website or platform has successfully duplicated the functionality. On the flip-side, using the same feature to instigate massacres and genocide is unacceptable. Every time Facebook says it took down hundreds of pages/groups run by bad actors, you know there are thousands more they haven’t identified yet. When they say 20 million accounts have been hacked, it’s probably 200 million in reality.
Snapchat…good lord. The UX and ads suck. Advertisers spending on a platform where the most significant demographic has the least amount of money is…bizarre. But I have it because it’s the only way to communicate with younger relatives.
The social app I use the most is Instagram. I think I created my account in 2011. Over the years I had followed so many random people that my feed was sheer garbage. Then layered with the algorithm nonsense, I deleted my account and started a new one. Sure, it’s not a good personal branding strategy, but I don’t care. Life is too short.
Algorithmic changes aside, Instagram is such a great place to get a global perspective on all forms of design, from the blandness of neutral-colored interior decorating to the vibrant colors of Tokyo streetscapes. Over short periods of time, you can see trends in palettes and styles forming.
One of the topics that has interested me for a while is how technology is changing the trajectory of the lives of males. Empirically, I see kids who should be going to college having no interest in college, and college grads who should be looking for jobs having no interest in careers.
Considering what they have seen growing up, it is understandable. Parents and parents of friends probably got let go from jobs, parents probably panicked during the Great Recession, maybe lost homes.
But I think there are more significant structural problems. Historically, males needed to follow the trajectory: get a job, meet a girl, settle down, buy a house and get a mortgage, have kids, buy a bigger house with a bigger mortgage, retire. For food, entertainment and “adult” relations, men had to leave home and go get a job.
The Internet changed all that. Anyone can get cheap food delivery, inexpensive cable/Netflix/Hulu/Prime/ad-supported video, social bonding through planet-scale online gaming, access to infinite adult streaming sites or use Tinder-like apps. Why would anyone want to deal with the corporate world where there is no guarantee of jobs/income and the competition is not local, but global?
Why leave home when you could have all your needs fulfilled relatively cheaply while sitting in your underwear?
Turns out there is data to prove this is happening. From Princeton researchers:
“We estimate that technology growth for recreational computer activities, by increasing the marginal value of leisure, accounts for 23 to 46 percent of the decline in market work for younger men during the 2000s. Based on CPS data, men ages 21-30 reduced their market work hours by 12 percent from 2000 to 2015, whereas the decline was only 8 percent for men ages 31-55. Our estimates suggest that technology growth for computer and gaming leisure can explain as much as three-quarters of that 4 percent greater decline for younger men.”
Causing a 3% decrease in joining the labor pool may not seem like a significant change, but is there any reason that percentage won’t get bigger? Throw in VR for immersive gaming or adult purposes. Why again should males leave home?
Imagine the social and economic ripple effects.
On and off I’ve been building an app that does handwriting recognition on the iPhone. It’s ugly as sin and not meant for app stores. It’s my way of keeping up with machine learning and app development.
There are nuances to creating apps, practices specific to platforms, user experience, and the purpose of apps. They tend to be well understood and are not particularly interesting…to me at least.
Machine learning, and AI in general, on the other hand, is like the wild, wild west. People are trying lots of things, some of it pointless, some of it plain amazing.
One of the main pain-in-the-butt parts of machine learning is something called hyperparameters. They define how fast to learn and fit the data, and describe the structure of the components that will do the learning. It’s a bit of art and science. There are general rules, but depending on the data set and what you are modeling, hyperparameters can differ significantly. The hyperparameters for identifying a tumor from a scan will be nothing like those for predicting stock prices. While selecting hyperparameters is a challenge, another problem is that they can’t be changed mid-stream. You define the hyperparameters and kick off the learning process. Then you wait. There are tools that make it easier to identify whether the model you designed is starting to work as it learns, but it is still a time-sucking process.
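To make that concrete, here is a toy sketch of my own (not tied to any particular framework) where the learning rate, a classic hyperparameter, is fixed before training starts. A slightly-too-large value doesn’t just slow things down, it makes the whole run blow up:

```python
# Fit y = w * x by gradient descent on mean squared error.
# The learning rate is a hyperparameter: chosen up front, never changed mid-run.
def train_slope(xs, ys, learning_rate, steps):
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradient of mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= learning_rate * grad
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # true slope is 2

good = train_slope(xs, ys, learning_rate=0.01, steps=200)  # settles near 2
bad = train_slope(xs, ys, learning_rate=0.2, steps=40)     # overshoots wildly
print(round(good, 2))
```

Same code, same data, one hyperparameter apart: one run converges and the other diverges, which is why getting these values right is such a grind.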
Obviously, this is a problem desperately in need of a solution. So researchers and developers have come up with tools called automated machine learning (AutoML). Basically, the tools look at the data, make educated guesses trying to identify relevant data, select a subset of the types of models and hyperparameters to try, and then execute to see which is the most accurate.
Granted, they won’t be super-optimized, but it gets you to 60% incredibly quickly.
I decided to try out one of the AutoML tools. The test was to predict home prices given you already had the selling price for past sold homes and their features like the number of bedrooms, bathrooms, square footage, etc. This is a common practice problem for anyone who has taken a statistics class.
Sidenote: linear regression has been in use since the 50’s. It was time-consuming to do at scale, but the math behind it has been around for a long time. A vast majority of applied AI and machine learning today is regression. If you can logically structure your data in Excel (not necessarily the volume of data), it can probably be solved by some type of regression analysis 75% of the time. What has changed is we have so much more data, and we have access to the hardware that can crunch it all fast. When most new services yap about leveraging newfangled AI, it’s usually regression. Unless they say it isn’t, feel free to roll your eyes at them. I do.
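For the curious, the whole technique fits in a few lines. This is a hand-rolled sketch with made-up toy numbers (not the actual data from my experiment): ordinary least squares fitting price against square footage.

```python
# Simple linear regression (ordinary least squares), no libraries needed.
def least_squares(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept follows from the means.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept, slope

sqft = [1000, 1500, 2000, 2500]
price = [200_000, 300_000, 400_000, 500_000]  # toy data, perfectly linear

a, b = least_squares(sqft, price)
print(a + b * 1800)  # predicted price for an 1800 sq ft home
```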
Back to house prices. Here is a screenshot of me testing out AutoML.
The first 10 lines load the libraries needed to crunch the data and load the data for use. The next 4 lines are a bit of data cleanup (a bit real-estate specific). The following 9 lines tell AutoML what type of data I’m providing and what kinds of models to test. You don’t even have to give it some of that info; it can throw everything but the kitchen sink at the data and try to figure it out. The next few lines kick off the training process and make predictions on test data.
In 25 lines of code, you have the starting point for building a robust model.
This test was done on Google Colaboratory, a free tool. Talk about democratizing technology.
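The core loop these tools run is simple to caricature. Below is a deliberately tiny, hand-rolled sketch of the AutoML idea (my own toy data and model, not the actual tool or code I used): enumerate candidate settings, score each on held-out data, keep the winner.

```python
# Predict home prices with a k-nearest-neighbors average, where k is the
# hyperparameter being searched. An AutoML tool does this at far larger scale,
# over many model families and many hyperparameters at once.
def knn_predict(train, k, x):
    # Average price of the k homes closest in square footage.
    nearest = sorted(train, key=lambda home: abs(home[0] - x))[:k]
    return sum(price for _, price in nearest) / k

train = [(1000, 200_000), (1500, 310_000), (2000, 390_000), (3000, 610_000)]
valid = [(1200, 240_000), (2500, 500_000)]  # held-out (sq ft, price) pairs

def validation_error(k):
    # Mean absolute error on the held-out homes.
    return sum(abs(knn_predict(train, k, x) - y) for x, y in valid) / len(valid)

best_k = min([1, 2, 3], key=validation_error)
print(best_k)
```

The real tools are far more sophisticated about pruning the search space, but “try, score, keep the best” is the skeleton.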
What’s the point of all this? We are building technology to solve technology problems. If the technology doesn’t work, we will create another layer of technology that can monitor, fix, and optimize it. What’s one of the value propositions of the cloud? It can auto-scale as needed. Your e-commerce store is getting way more traffic than expected? The software will know to fire up more (virtual) servers automatically and then shut them down when appropriate. Auto-technology isn’t about cars; it’s technology serving itself. As Elon Musk said when building the Gigafactory: “The factory is the machine that builds the machine.”
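That auto-scaling rule is the cleanest example. A cartoon version (purely illustrative numbers and thresholds, not any cloud provider’s actual policy) might look like:

```python
# Decide how many servers to run for the current load, within fixed bounds.
def desired_servers(requests_per_sec, capacity_per_server=100,
                    min_servers=1, max_servers=20):
    needed = -(-requests_per_sec // capacity_per_server)  # ceiling division
    return max(min_servers, min(max_servers, needed))

# Quiet night, busy afternoon, flash sale: the fleet resizes itself.
print(desired_servers(50), desired_servers(950), desired_servers(5000))
```

No human in the loop: the software watches the metric and serves itself more hardware.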
What happens when all factories can autonomously build more factories as needed?
Measurement is management dogma. It makes sense. How do you measure success? How do you quantify what needs to be improved? What is a metric everyone can work towards?
In many ways, measurement is a proxy for intuition, trust, and complexity. Most of us have crappy intuition (checking my dating history). As for trusting data, “There are three kinds of lies: lies, damned lies, and statistics.” Complexity is increasing at an exponential rate. A hundred years ago, two brothers built an airplane. Nowadays we don’t fully understand how AI makes decisions. So we boil everything down to a few numbers.
That’s why Goodhart’s Law is important. It states, “When a measure becomes a target, it ceases to be a good measure.” More formally, “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.”
Measurements lose or hide fidelity. There are components we don’t understand or are black boxes (see any ad tech platform) or have secondary or tertiary level interconnectedness we cannot see. Plus we don’t fully understand causal relationships between people (their intentions and behaviors) and the interactions with systems. Yet, we are required to make decisions based on the ambiguity. We align incentives with the direction we want the metric to go.
As a slight tangent, there is the McNamara Fallacy, where one makes decisions based on easily identifiable numbers while disregarding what cannot be easily measured. That is all of digital marketing. Digital-only marketing proponents keep pushing the idea that it is better because it is measurable. Not true. Easy measurement of digital does not mean it is more efficient. Not being easily measurable does not mean something is not relevant. In online advertising, all we hear about is CTR, CRO, CPM, CPC, CPA, confidence levels, etc., creating an industry of spreadsheet junkies.
The danger is in choosing the wrong metric.
Let’s take Facebook as an example. Their measurement of success is making sure Facebook stock price goes up constantly. This is driven by increasing the number of clicks (oh oops, “engagement”) with ads or publisher content. The platform needs to predict what content a user will click on. Humans are tribal creatures. We look for ways to re-affirm our beliefs and social structures. The success of Facebook inherently generates and amplifies our reptilian instincts. Is it possible for any social platform to not become a channel for culture wars when the ultimate metric is clicks?
Similarly, if you are YouTube, and creators have to play the SEO game for revenue, you end up with garbage. (We will ignore the fact if PBS had done this, they would have been de-funded already. The consequences for Google? Nada. But, you know, Facebook and Google aren’t media companies, they are technology companies.)
Another tangent: the Streetlight Effect, where people look where it is easiest to look. The joke:
A police officer sees a drunken man intently searching the ground near a lamppost and asks him the goal of his quest. The inebriate replies that he is looking for his car keys, and the officer helps for a few minutes without success. Then he asks whether the man is certain that he dropped the keys near the lamppost.
“No,” is the reply, “I lost the keys somewhere across the street.” “Why look here?” asks the surprised and irritated officer. “The light is much better here,” the intoxicated man responds with aplomb.
From François Chollet’s The impossibility of intelligence explosion:
“And what is the end result of this recursively self-improving process? Can you do 2x more with your software on your computer than you could last year? Will you be able to do 2x more next year? Arguably, the usefulness of software has been improving at a measurably linear pace, while we have invested exponential efforts into producing it. The number of software developers has been booming exponentially for decades, and the number of transistors on which we are running our software has been exploding as well, following Moore’s law. Yet, our computers are only incrementally more useful to us than they were in 2012, or 2002, or 1992.”
Who is this François guy? An AI researcher at Google who created one of the most popular Deep Learning frameworks.
In the context of digital, it is fascinating to realize that digital hasn’t changed how we live our lives. We buy/rent a house or apartment. We drive our cars or ride the train to work. We push digital paper around at work. Netflix made TV a better experience, but it is still watching TV. Airbnb made it easier to book an exciting home somewhere, but it hasn’t changed travel. The channels and tools have changed, lessened the friction we have to deal with, maybe made us more productive, but at a fundamental level, our activities have not changed.
And that’s ok. The problem is that our expectations are out of whack: we think every new piece of technology is life-changing. All information gets broadcast at hyper-speed and hyper-volume now. We have no practical way to filter and process it. We end up thinking of and treating everything as equally important, all the time.
Of course, we can see this in its full glory with bitcoin and blockchain technology today. I ran into this visual earlier in the week:
From an AngelList newsletter:
“Key takeaway: Blockchains are the biggest technological breakthrough since the Internet.”
Now, I have nothing against blockchain and bitcoin. There are extremely promising uses for it, but every time I see a ‘once in a generation’ type statement, I’m reminded of Roy Amara’s quote:
“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
Very few companies have gotten big data right. Most current applications of AI, more specifically supervised learning, require large amounts of clean data. So that is going to be a challenge. And instead of figuring all that out, we are moving on to the blockchain. Funny. Or not.
I read about the types of discussions McKinsey is having with clients about digital and marketing: Is your data in silos? Do you have to re-orient employee mindsets to put customers at the center?
It is almost 2018. Why are we still having these types of discussions?
Back to AI. Like most executions of digital, which have focused on process, efficiency, and cost instead of being transformational, AI might turn out the same. Check out Andreessen Horowitz’s State of AI video. Lots of examples of reducing friction and doing existing activities better and faster, but nothing new.
It is not to say that old economy jobs won’t be destroyed and supplanted with new ones. Or that existing companies won’t get wiped out and new ones created. As an example, listen to episode 27 of the Rad Awakenings podcast, where they discuss a new company that arbitrages interest rates, other macroeconomic information, and payment terms to elicit discounts from vendors. This is possible now that we have the computational power and the ability to aggregate massive amounts of data.
AI is not as earth-shattering as harnessing electricity or practicing agriculture for the first time. We expected flying cars by now. Instead, we got electric cars.
Social networks facilitate and magnify identity politics.
Once a user follows more than Dunbar’s number, algorithms have to kick in for platforms to be usable and not overwhelming. Generating engagement and stickiness requires displaying content the platform thinks you will like. So the algorithms look at what a user’s first-, second-, etc., level connections like, and what similar profiles like.
Objective, rational content is boring and gets little engagement. Publishers push content that is emotionally driven. Content that requires us to pick a side.
Combine both and you have a filter bubble that amplifies tribal tendencies. A filter bubble based on identity politics.
This isn’t to say social media creates identity politics. We all want to be part of our own tribes. The point is that the natural output of social media will always be based on identity politics.
It is not unlike other social interactions. We hang out with family, neighbors, coworkers, church members, alums, etc. The problem is our expectations. For some reason we thought global social platforms would connect us all, expose us to new and different ideas, and then we would all become enlightened.
That’s not how our biology works.