Clear Sky Science

Quantifying customer sentiment for automobile brand perception analysis using machine learning on Twitter


Why Social Media Feelings Matter to Car Makers

Every day, millions of people talk about brands on social media, often more honestly than they would in a formal survey. For car companies, these casual posts reveal what drivers really think about their vehicles and service. This article explores how tweets about five major car brands can be turned into a single, easy-to-read score that shows whether public mood is leaning positive or negative—and how that mood shifts over time.

Figure 1.

From Online Chatter to Measurable Mood

The researchers start from a simple idea: instead of asking people what they think in slow, expensive surveys, listen to what they are already saying online. They collect nearly 16,000 English-language tweets that mention BMW, Mercedes-Benz, Porsche, Tesla, or Toyota, carefully excluding posts from the brands’ own accounts to focus on ordinary users. Using an advanced language model trained specifically on tweets, each message is labeled as positive or negative. Neutral, purely factual posts are set aside, because they do not clearly show how people feel.
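The collection step above can be sketched as a simple filter. This is a minimal illustration, not the authors' pipeline: the brand handle list and tweet field names are assumptions for the example.

```python
# Sketch of the corpus filter described above: keep English-language
# tweets that mention a tracked brand, and drop posts published by the
# brands' own accounts. Handle names below are illustrative only.
BRANDS = {"BMW", "Mercedes-Benz", "Porsche", "Tesla", "Toyota"}
BRAND_HANDLES = {"bmw", "mercedesbenz", "porsche", "tesla", "toyota"}

def keep_tweet(tweet: dict) -> bool:
    """Return True if the tweet qualifies for the sentiment corpus."""
    if tweet.get("lang") != "en":
        return False  # English-language tweets only
    if tweet.get("author", "").lower() in BRAND_HANDLES:
        return False  # exclude the brands' official accounts
    text = tweet.get("text", "").lower()
    return any(brand.lower() in text for brand in BRANDS)
```

Labeling each surviving tweet as positive or negative would then be delegated to a tweet-specific language model, with neutral posts discarded as the article describes.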

A Single Score for Brand Goodwill

With positive and negative tweets in hand, the team builds a Brand Polarity Score, or BPS. This number ranges from -1 to +1 and compares how many favorable mentions a brand receives against how many complaints it gets. A value above zero means more praise than criticism; below zero would signal a brand in trouble. For the month they studied, all five car makers landed in positive territory, with Porsche and BMW leading the pack and Tesla showing the most mixed mood. Unlike raw counts of positive tweets, BPS weighs praise and criticism together, giving a clearer picture of overall goodwill.
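A score with these properties can be written as the normalized difference between positive and negative mention counts. The exact formula below is our assumption, chosen to be consistent with the stated range of -1 to +1:

```python
# Sketch of a Brand Polarity Score: positive minus negative mentions,
# normalized by their sum, so the result always lies in [-1, +1].
def brand_polarity_score(positive: int, negative: int) -> float:
    total = positive + negative
    if total == 0:
        return 0.0  # no opinionated mentions: treat as neutral
    return (positive - negative) / total
```

For example, a brand with 300 favorable and 100 critical mentions scores 0.5, while a brand with the reverse mix scores -0.5.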

Following Mood Swings Over Time

Public opinion rarely moves in a straight line. A viral praise post, a recall notice, or a big product announcement can quickly tilt sentiment for a few days. To capture these swings, the authors track the Brand Polarity Score day by day for each car maker. They then introduce a second measure, the Brand Polarity Position Indicator (BPPI), which acts like a running average: it accumulates past days and smooths out noise. Spikes that appear in the daily score become gentle bends in the BPPI curve, highlighting slower, more meaningful shifts in reputation rather than short-lived outbursts.
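Reading the BPPI as a running mean of the daily scores, the smoothing can be sketched as follows; the running-mean interpretation is our assumption about how past days are accumulated:

```python
# Sketch of a cumulative BPPI: each day's value is the mean of all
# daily Brand Polarity Scores observed so far, so one-day spikes are
# damped while sustained shifts gradually bend the curve.
def bppi_series(daily_bps: list[float]) -> list[float]:
    out, total = [], 0.0
    for day, score in enumerate(daily_bps, start=1):
        total += score
        out.append(total / day)
    return out
```

A single extreme day moves the cumulative curve only slightly: `bppi_series([1.0, 0.0, -1.0])` yields `[1.0, 0.5, 0.0]`, with the day-three crash softened to a gentle decline.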

Figure 2.

Whose Voice Counts More?

Not all tweets are created equal. A happy comment from a highly followed account, or a widely shared complaint, can influence many more people than a lone remark with no engagement. To reflect this, the study creates an Influence-weighted Brand Polarity Score (IwBPS). Each tweet is given a weight based on how much attention it received and how prominent its author is, adjusted for the age of both the tweet and the account. The researchers also define a cumulative version of this score, IwBPPI, to track the longer-term impact of influential voices. These measures highlight which brands are being lifted—or dragged down—by posts that actually travel far across the platform.
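The idea of weighting each tweet by reach can be sketched as below. The weight function (engagement plus a log-scaled follower count) is purely illustrative; the paper's exact weighting scheme and its tweet-age and account-age adjustments are not reproduced here.

```python
import math

# Illustrative weight: base of 1 so every tweet counts, plus raw
# engagement, plus a log-compressed follower count so huge accounts
# do not dominate linearly.
def tweet_weight(retweets: int, likes: int, followers: int) -> float:
    return 1.0 + retweets + likes + math.log1p(followers)

# Sketch of an influence-weighted polarity score: like BPS, but each
# tweet contributes its weight instead of a count of 1.
def iw_bps(tweets: list[dict]) -> float:
    pos = sum(tweet_weight(t["retweets"], t["likes"], t["followers"])
              for t in tweets if t["label"] == "positive")
    neg = sum(tweet_weight(t["retweets"], t["likes"], t["followers"])
              for t in tweets if t["label"] == "negative")
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total
```

Under this scheme a single widely retweeted complaint can outweigh several low-engagement compliments, which is exactly the behavior the influence-weighted measures are meant to capture.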

Putting the Numbers to the Test

To check that their measures are trustworthy, the authors run several reality checks. They benchmark their chosen tweet-specific sentiment model against other popular tools and find it the most accurate on a large, labeled dataset. They show that sudden jumps in their scores line up with real news events, such as safety scandals or new technology announcements. They also compare results from their chosen model with a commercial system from a major cloud provider and find the patterns strongly agree. Finally, they test how sensitive the scores are to sampling quirks and random errors, showing that the daily and cumulative indicators remain stable even when some labels are deliberately scrambled.
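The label-scrambling check can be simulated in a few lines. This is our reconstruction of that kind of test, not the paper's exact protocol: flip a small fraction of labels at random and confirm the aggregate score barely moves.

```python
import random

def bps_from_labels(labels: list[str]) -> float:
    """Brand Polarity Score from a list of 'positive'/'negative' labels."""
    pos = labels.count("positive")
    neg = labels.count("negative")
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def scramble(labels: list[str], fraction: float, seed: int = 0) -> list[str]:
    """Flip a random fraction of the labels to simulate annotation noise."""
    rng = random.Random(seed)
    flipped = list(labels)
    for i in rng.sample(range(len(labels)), int(fraction * len(labels))):
        flipped[i] = "negative" if flipped[i] == "positive" else "positive"
    return flipped
```

Because each flipped label shifts the positive-minus-negative difference by 2 while the total stays fixed, flipping 5% of labels can move the score by at most 0.1, which is why the aggregate indicators stay stable under modest labeling noise.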

What This Means for Everyday Understanding

In plain terms, the study shows that it is possible to turn messy, fast-moving social media chatter into a small set of clear, reliable numbers that track how people feel about car brands. The basic score says whether the conversation is mostly upbeat or sour, the cumulative indicators reveal longer-term reputation trends, and the influence-weighted versions show whether big shifts are being driven by loud, widely heard voices. For non-specialists, the takeaway is that brands no longer have to guess how the online crowd feels or wait months for survey results: by carefully reading public tweets with modern language tools, they can monitor their standing almost in real time and respond before minor grumbles grow into lasting damage.

Citation: Mathew, S.S., Hayawi, K., Venugopal, N. et al. Quantifying customer sentiment for automobile brand perception analysis using machine learning on Twitter. Sci Rep 16, 5703 (2026). https://doi.org/10.1038/s41598-026-35637-9

Keywords: social media sentiment, automobile brands, Twitter analysis, brand reputation, machine learning