We're (a)live!
That’s right, you read that correctly. We’re finally live after our long beta testing phase.
This has only been possible thanks to the support of our incredible, small group of closed beta testers. Without all of you, Trusted Humans wouldn’t have been possible.
Now, we’re finally launching Trusted Humans as a final, complete product, and with that comes a complete reset of the content.
Nevertheless, we will continue testing functionality in a separate environment. If you would like to participate alongside the previous testers and other new ones, please contact us and we will add you to the waitlist for the next testing phase.
What we have learnt
This closed beta has given us some really important insights into how we (humans) trust online content. Here are some of the key takeaways we’ve extracted.
Content origin
We’ve discovered that we tend to trust online content regardless of its origin. This could be interpreted as the origin not mattering at all but, during the beta, we also discovered that once a person knew the origin of the content was AI, their trust dropped drastically, and in most cases they preferred similar brands with a higher human score.
This means the origin of the content does in fact matter, but people are often unaware of it. Not everyone can recognize AI content on their own.
Knowing this, we made our first refactor of the profile and scan pages to display the human score prominently, so visitors can tell at a single glance how human the content is.
This key finding proved that our platform needed to be crystal clear about what is human and what is not, as well as in what percentage.
Company checks
We initially didn’t plan on adding any company checks (ethical AI usage, etc.), but during the beta we discovered a key factor that differentiates companies’ reputations in terms of AI usage.
People don’t care so much about whether content is AI-generated if they know it was done ethically, that people’s jobs have been kept in place, and so on.
Basically, we’ve created our checks because people care about how companies use AI, not only whether they use it, and the overall company score now improves depending on which checks are enabled.
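To make the idea concrete, here is a minimal sketch of how enabled checks could lift a company’s score. The check names, bonus values, and cap are illustrative assumptions, not our real scoring formula:

```python
# Hypothetical sketch: enabled company checks adding bonuses to a base score.
# Check names and weights are illustrative assumptions, not the real formula.

CHECK_BONUSES = {
    "ethical_ai_usage": 5.0,   # company discloses its AI usage and policies
    "jobs_preserved": 4.0,     # no roles were replaced by AI
    "human_review": 3.0,       # AI output is reviewed by a person
}

def company_score(base_score: float, enabled_checks: set[str]) -> float:
    """Add a bonus for each enabled check, capping the result at 100."""
    bonus = sum(CHECK_BONUSES.get(check, 0.0) for check in enabled_checks)
    return min(100.0, base_score + bonus)
```

The point of the cap is that checks can only improve a score up to the maximum; they never replace the underlying content scan.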
Imagery scans
During this beta, we received feedback that we should cover more than just textual content, so we got hands-on and implemented our Image AI detector model. From now on, when an image is detected in a website’s content, we also scan it and factor it into the global score.
We also received feedback to add more types of content (more information in the following sections), but we thought images were a great start: every day, AI-generated images become more similar to human-made ones, and it’s getting harder for the human eye to tell whether something was made by a person or by AI.
Our testers immediately loved this feature. We heard that, especially for e-commerce, knowing that the product they were purchasing was real was a game changer; some of them had already received a product whose image was AI-generated and which looked completely different in reality, or turned out to be dropshipped.
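As a rough illustration of how per-image scan results could be folded into a page’s global score, here is a sketch using a simple weighted blend. The 40% image weight is purely an assumption for the example, not our actual computation:

```python
# Hypothetical sketch: blending a text score with per-image scan scores
# into one global human score (all scores on a 0-100 scale).
# The image_weight value is an illustrative assumption.

def global_score(text_score: float, image_scores: list[float],
                 image_weight: float = 0.4) -> float:
    """Weighted blend of the text score and the mean image score."""
    if not image_scores:          # text-only pages keep their text score
        return text_score
    mean_image = sum(image_scores) / len(image_scores)
    return (1 - image_weight) * text_score + image_weight * mean_image
```

A page with fully human text but AI-looking images would see its global score pulled down accordingly.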
Work in progress mode
Some of the companies testing Trusted Humans during the beta phase commented that they were unaware of how much AI they were using, as they had been adopting it over the years without realizing what it would add up to.
We all initially thought, “Wow, it’s so incredible that AI can write this text or generate this image for my company,” but we didn’t think about the consequences over the years. That’s one of the main reasons Trusted Humans was created: to mitigate this effect and bring back user trust.
These companies now wanted to revert these changes, but were afraid that it would take time and damage their public image, so they asked us for a solution.
We thought the best option was to create a profile flag: the work in progress flag. Once it’s enabled, the company profile displays that they’re working on improving their human content.
This prevents users from performing scans. We know that, during that time, users won’t see any result; that’s why, by enabling this flag, the company is required to prove that they’re working on it, and if we detect that they’re somehow gaming it, we will ban them from controlling their profile.
We know this might affect users, but we all deserve the chance to do the right thing. This AI world is new, we are all inexperienced, and everyone deserves a second chance; that’s our opinion on the topic. As long as a company works towards improving its human rate, it can be temporarily flagged as work in progress.
Upcoming features
Now, let’s look at the feedback that remains and will be included in the following releases.
It was quite a lot, and we ran out of time and of year (literally, as the live release happened in 2026, when we expected it in 2025).
Here are some of the stronger points that we think will be game changers:
Videos
As with the imagery, videos are also a hot topic right now. The latest AI models make content that is incredibly hard to differentiate from reality.
This needs to be addressed as soon as possible, as the amount of deepfakes and false advertising is increasing at a fast pace.
The principal concern is that you might purchase a product or service thinking that someone is promoting it when they are not, and that’s really dangerous. It can even happen that a product is shown performing an action or offering a feature that it doesn’t actually have.
We’re really concerned about this point, so we will work on it as soon as possible.
Vibe coded content
We, the Trusted Humans team, come from a technological background, so we know that (at least at the moment) vibe-coded content is dangerous.
It’s not so much that it might create bugs or an infinite loop in the code; that’s not the worst part. The real problem is cybersecurity.
We’ve seen first-hand that certain vibe-coded pages or functionalities are insecure. Here’s a real example:
We came across an AI startup that had a newsletter. All the public pages were vibe-coded. Fine up to there, but here’s the problem: when you subscribed to the newsletter, the complete email list of everyone who had already subscribed was returned, meaning it was public! Imagine subscribing to a newsletter and receiving the list of thousands of participants. That’s crazy.
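The bug pattern looks roughly like this. This is a hypothetical reconstruction (the startup’s actual code wasn’t published, and all names here are made up), but it shows both the leak and the fix:

```python
# Hypothetical reconstruction of the bug described above: a subscribe
# handler that echoes the whole mailing list back to the caller.
# Names and data are illustrative; the original code wasn't published.

subscribers: list[str] = ["alice@example.com", "bob@example.com"]

def subscribe_insecure(email: str) -> list[str]:
    """The broken behavior: returns every subscriber's address."""
    subscribers.append(email)
    return subscribers            # leaks the full email list to the public

def subscribe_fixed(email: str) -> dict:
    """Return only a confirmation, never other users' data."""
    subscribers.append(email)
    return {"status": "subscribed"}
```

The fix is trivial once you see it: a subscribe endpoint should confirm the action, never return data about other users. The danger of vibe coding is that nobody looked.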
This is just one simple example. The point is: would you trust an e-commerce store with your credit card number knowing that it’s vibe-coded? Where is that number going?
Our check will verify whether the company you’re looking at is vibe-coded and, if so, to what percentage. This feature will also be a top priority, as we consider it quite important.
Social media
Other feedback we received from our beta testers is that they were seeing profiles that reported being human-made while their social media wasn’t, using AI imagery, videos, etc.
There are simply too many social media platforms to check them all, but we’re going to add a new feature to address this.
We will start with the main platforms, as they’re the most used ones, and we will expand the list until we cover most of them.
Our idea is that if a company uses AI on one platform, it probably does on others as well, so we will try to compute a global overview from the company’s social media profiles.
This feature will be implemented after the video one, as social media usually contains an intensive amount of video, so we need that working properly before taking on this quantity of content.
Offline content
Now it’s time to address the elephant in the room. Offline content from companies.
For example, a flyer, a booklet, a book, the image of an advertisement campaign, etc.
Basically, any kind of content that is offline.
We know this is a must for knowing a company’s real overall score, but it’s a hard task, which is why it’s being left for the following releases.
We want to give it high priority, as it’s crucial for knowing the real score, so we will focus on it as much as possible. Our main problem is how to obtain that content, and we think we will need you, the Trusted Humans user, for that goal. We will test an idea we have with a closed testing group; if it’s successful, we will implement it that way, otherwise we will look for another approach.
But you can take for granted that, sooner rather than later, we will have it implemented on the site, so you can be sure that the company score reflects all possible channels. We just need a bit of time. In the end, we’re also humans :)
Closing this testing phase
We want to thank our first closed beta testers once again; without you, none of this would have been possible. You’re all the best. Thank you for helping us push humanity and transparency forward.
We’ve prepared a small surprise for all of you who have helped us, so check our upcoming communications for more information about it; we’re sure you will love it.
For new companies that want to claim their profile, we’re offering a 12% discount code on all annual plans during the first year, valid until the first of March (1/3/2026). This is a thank-you present for your support of humanity.
Don’t forget to use the discount code LNCH12 at checkout!
Remember to keep human,
The Trusted Humans team