
#179 – Mariya Moeva on the Impact of Google’s SiteKit on WordPress

Transcript

[00:00:19] Nathan Wrigley: Welcome to the Jukebox Podcast from WP Tavern. My name is Nathan Wrigley.

Jukebox is a podcast which is dedicated to all things WordPress, the people, the events, the plugins, the blocks, the themes, and in this case, how the Google Site Kit plugin is attempting to simplify their product offering, right inside of WordPress.

If you’d like to subscribe to the podcast, you can do that by searching for WP Tavern in your podcast player of choice, or by going to wptavern.com/feed/podcast, and you can copy that URL into most podcast players.

If you have a topic that you’d like us to feature on the podcast, I’m keen to hear from you, and hopefully get you, or your idea, featured on the show. Head to wptavern.com/contact/jukebox, and use the form there.

So on the podcast today we have Mariya Moeva. Mariya has more than 15 years of experience in tech across search quality, developer advocacy, community building and outreach, and product management. Currently, she’s the product lead for Site Kit, Google’s official WordPress plugin.

She’s presented at WordCamp Europe in Basel this year, and joins us to talk about the journey from studying classical Japanese literature to fighting web spam at Google, and eventually shaping open source tools for the web.

Mariya talks about her passion for the open web, and how years of direct feedback from site owners shaped the vision for Site Kit, making complex analytics accessible and actionable for everyone, from solo bloggers to agencies and hosting providers.

Site Kit has had impressive growth for a WordPress plugin, currently there are 5 million active installs and a monthly user base of 700,000.

We learn how Site Kit bundles core Google products, like Search Console, Analytics, PageSpeed Insights, and AdSense, into a simpler, curated WordPress dashboard, giving actionable insights without the need to trawl through multiple complex interfaces.

Mariya explains how the plugin is intentionally beginner friendly with features like role-based dashboard sharing, integration with WordPress’ author and category systems, and some newer additions like Reader Revenue Manager to help site owners become more sustainable.

She shares Google’s motivations for investing so much in WordPress and the open web, and how her team is committed to active support, trying to respond rapidly on forums and listening closely to feedback.

We discuss Site Kit’s roadmap, from benchmarking and reporting features to smarter, more personalised recommendations in the future.

If you’ve ever felt overwhelmed by analytics dashboards, or are looking for ways to make data more practical and valuable inside WordPress, this episode is for you.

If you’re interested in finding out more, you can find all of the links in the show notes by heading to wptavern.com/podcast, where you’ll find all the other episodes as well.

And so without further delay, I bring you Mariya Moeva.

I’m joined on the podcast by Mariya Moeva. Hello, Mariya. Nice to meet you.

[00:03:35] Mariya Moeva: Nice to be here.

[00:03:36] Nathan Wrigley: Mariya is doing a presentation at WordCamp Europe. That’s where we are at the moment, and we’re going to be talking about the bits and the pieces that she does around Site Kit, the work that she does for Google. Given that you are a Googler, and that we’re going to be talking about a product that you have, will you just give us your bio? I’ve got it written here, you obviously put one on the WordCamp Europe website. But just roughly what is your place in WordPress and Google and Site Kit and all of that?

[00:04:05] Mariya Moeva: Yeah. I mean, I’ve had a very meandering path. When you would look back to what I studied, which was, you know, classical Japanese literature, all these poems about the moon and the cherry blossoms, who would’ve thought at that time that I would end up building open source plugins? But I did have a meandering path and I ended up here because, mostly because of passion for the open web, and for all kinds of weird websites that exist out there. I really love stumbling upon something great.

I started at Google on the web spam team, actually looking into the Japanese spam market, because of this classical Japanese literature degree and the Japanese skills. And then after a couple of years or so, I basically despaired of humanity, because all you look at is spam every day. Bad sites, hacked sites, malicious pages. And I just wanted to do something that makes the web better rather than removing all the bad stuff.

And so I switched over to an advocacy role, and in that role I essentially was traveling, maybe attending 20, 30 conferences every year, talking to a lot of people about their needs, what they have to complain about Google, what requests they have. And I would collect all of this feedback, and then I would go back to the product teams and I would say, hey, this and this is something that people really want. And they would say, thank you for your feedback.

Essentially at one point I said, okay, we’re going to build this thing, and that’s why I switched into product role. And I was able to take all the feedback over the years, that we’ve gotten from developers and site owners, and to try to build something that makes sense for them. So that’s how I ended up in the product role for building Site Kit.

And the idea from the very beginning was to make it beginner friendly and to make it from their perspective to match that feedback, rather than doing something that is like, here’s your stuff from analytics, here’s your stuff from Search Console, figure it out. That’s how we ended up building this and it’s been now five years. And it actually just a month ago entered the top 10 plugins. So clearly people find some value in it.

We have 700,000 people that use it every month. And overall it’s currently at 5 million active installs, meaning that these sites are kind of pinging WordPress so they’re alive and kicking. It’s been very encouraging to see that what we’re doing is helpful to people and we will keep going. There’s a lot to do.

[00:06:29] Nathan Wrigley: I think it’s kind of amazing because in the WordPress space, there are some of the, let’s call them the heavy hitters. You know, the big plugins that we’ve all heard of, the Yoasts of this world that kind of thing. Jetpack, all those kind of things. This, honestly has gone under the radar a bit for me, and yet those numbers are truly huge. Four and a half to 5 million people over a span of five years is really rather incredible.

[00:06:54] Mariya Moeva: It grew very fast, yeah.

[00:06:55] Nathan Wrigley: Yeah. And yet it’s not one that, well, I guess most people are reaching out to plugins to solve a problem, often a business problem. So, you know, there’s this idea of, I install this and there’s an ROI on that. This is not really that, not really ROI, it’s more site improvement. Okay, here’s a site that needs things fixing on it. Here’s some data about what can be fixed. And so maybe for that reason and that reason alone, it’s flown under the radar for me because it doesn’t have that commercial component to it.

[00:07:24] Mariya Moeva: Yeah, for sure. It’s for free and it’s not something that, yeah, sells features or has like a premium model and we don’t market it so much. But I run a little survey in the product where people tell us where they heard from it, and a lot of the responses are either YouTube video, or like blog posts or word of mouth. So it seems to be spreading more that way.

[00:07:46] Nathan Wrigley: Yeah, no kidding. I’ll just say the URL out loud in case you’re at a computer when you’re listening to this. It’s SiteKit, as one word, dot withgoogle.com. I don’t know if that’s the canonical URL, but that’s where I ended up when I did a quick search for it. So sitekit.withgoogle.com. And over there you’ll be able to download well, as it labels itself, Google’s official WordPress plugin.

The first thing that surprises me is, a, Google’s interest in WordPress. That is fascinating to me. I mean, obviously we all know, Google is this giant, this leviathan. Maybe you’ve got interest in other CMSs, maybe not. I don’t really know. But I think that’s curious. But obviously 43% of the web, kind of makes sense to partner with WordPress, doesn’t it? To improve websites.

[00:08:31] Mariya Moeva: Yeah. I work with plenty of CMSs. I work with Wix, with Squarespace, and we essentially what I try to do and what my team tries to do, we are called the Ecosystem Team. So we want to bring the things that we think would be useful to site owners and businesses directly to where they are.

So if you are in your Wix dashboard, you should be able to see the things from Google that are useful. And same if you are in WordPress. And obviously WordPress is, orders of magnitude, a bigger footprint than any of the others. And also it has this special structure where everything is decentralised and people kind of mix and match. So that’s why we went with the plugin model. And using the public APIs, we want to show what’s possible.

Because all the data that we use is public data. There’s no special Google feature that only the Google product gets, right? We are just combining it in interesting ways because I’ve spent so much time talking to people, like what they need. And so we just curate and combine in ways that are actually helping people to make decisions and to kind of clear the clutter.

Because when you go to analytics, it’s like 50 reports and so many menus and it’s like, where do I start? So we try to give a starting point in Site Kit. And we also try to help with other things like make people sustainable. One thing that we recently launched just a month ago is called Reader Revenue Manager. So you can put a little prompt on your site, which asks people to give you like $2 or whatever currency you are in, or even put like a subscription.

And so the idea is you don’t have to have massive traffic in order to generate revenue from your content. If you have your hundred thousand loyal readers, they can help you be more sustainable. So we’re looking at these kind of features, like what can we launch that is more for small and medium sites and would be helpful? And how can we make it as simple as possible? So that people don’t kind of drop off during the setup because it’s too complicated.

[00:10:33] Nathan Wrigley: Would it be fair to summarise the plugin’s initial purpose as kind of binding a bunch of Google products, which otherwise you would have to go and navigate to elsewhere? So for example, I’m looking at the website now, Search Console, Analytics, Page Speed Insights, AdSense, Google Ads, and all of those kind of things. Typically we’d have to go and, you know, set up an account. I guess we’d have to do that with Site Kit anyway. But we’d have to go to the different URLs and do all of that.

The intention of this then is to bind that inside of the WordPress UI, so it’s not just the person who’s the admin of that account. You can open it up so that people who have the right permissions inside of WordPress, they can see, for example, Google Analytics data. And it gets presented on the backend of WordPress rather than having to go to these other URLs. Is that how it all began as a way of sort of surfacing Google product data inside the UI of WordPress?

[00:11:21] Mariya Moeva: Yeah, we wanted to bring the most important things directly to where people are, so they don’t have to bother going to 15 places. And we wanted to drastically decrease and curate the information so that it’s easy to understand, because when you have 15 dashboards in Analytics and 15 dashboards in Search Console, and then you have to figure out what to download and in which spreadsheet to merge and how to compare, then this is. Maybe if you have an agency taken care of, they can help you. But if you don’t, which 70% of our users say that they’re one person operation, so they’re taking care of their business, and on top of that, the website. We wanted to make it simpler to understand how you’re doing, and what you should do next with Google data.

[00:12:02] Nathan Wrigley: So it’s a curated interface. So it’s not, I mean, maybe you can pull in every single thing if you so wish. But the idea is you give a, I don’t know, an easier to understand interface to, for example, Google Analytics.

That was always the thing for me in Google Analytics. I’m sure that if you have the time and the expertise, like you’re an agency that deals with all of that, then all of that data is probably useful and credible. But for me, I just want to know some top level items. I don’t need to dig into the weeds of everything.

And there was menus within menus, within menus, and I would get lost very quickly, and dispirited and essentially give up. So I guess this is an endeavor to get you what you need quickly inside the WordPress admin, so you don’t have to be an expert.

[00:12:43] Mariya Moeva: Yeah. And then it gets more powerful when you are able to combine data from different products. So, for example, we have a feature called Search Funnel in the dashboard, which lets you, it combines data from Search Console on search impressions and search clicks, and then it combines data from Analytics on visitors on the site and conversions. So it kind of helps you map out the entire path, versus having to go over here, having to go over there, having to combine everything yourself. So when you combine things, then it gets also more powerful.

We have another feature which lets you combine data from AdSense and Analytics. So if you have AdSense on your site, you can then see which pages earn you the most revenue. So when you have that, suddenly you can see, okay, so I have now these pages here, what queries are they ranking for? How much time people spend on them? Can I expand my content in that direction? It helps you to be more focused in kind of the strategy that you have for your site.

[00:13:45] Nathan Wrigley: Is it just making, I mean, I say just, is it making API calls backwards and forwards to Google’s Analytics, Search Console, whatever, and then displaying that information, or is it kind of keeping it inside the WordPress database?

[00:13:58] Mariya Moeva: We don’t store anything, well, almost anything. Yeah, we wanted to keep the data as secure as possible, so we created this proxy service, which kind of helps to exchange the credentials. So the person can authenticate with their Google account, and then from there, the data is pulled via API, and we cache the dashboard for one hour. After that we refreshed authentication token. From the data itself, nothing is stored.

[00:14:23] Nathan Wrigley: So it’s just authentication information really that’s stored. Well, that’s kind of a given, I suppose. Otherwise you’ll be logging in every two minutes.

[00:14:29] Mariya Moeva: Right. So that’s the model that we have because we really wanted people to be able to access this data, but also to keep it secure. And because of how the WordPress database is, we didn’t feel like we could save it there.

[00:14:41] Nathan Wrigley: It sounds from what you’ve just said, it’s as if it’s combining things from a variety of different services, kind of linking them up in a structured way so that somebody who’s not particularly experienced can make connections between, I don’t know, ads and analytics. The spend on the ads and the analytics, you know, the ROI if you like.

Does it do things uniquely? Is there something you can get inside of Site Kit which you could not get out of the individual products if you went there? Or is it just more of a, well, we’ve done the hard work for you, we’ve mapped these things together so you don’t have to think about it?

[00:15:10] Mariya Moeva: The one thing that it does that I’m super excited about, and we’ll build on that, but we have the fundamental of it now, is it actually creates data for you. Because in contrast to Search Console or Analytics or all these other, which are kind of Google hosted, they can only tell you like a long help center article, go there on your site, then click this, then paste this code, right? They cannot help you with this, whereas Site Kit is on the website.

So if you agree, which we don’t install anything without people’s consent, like they have to activate the feature, but if you agree, then we can do things on your behalf. So for example, we can track every time someone clicks the signup button and we can generate an analytics event for you, even if that plugin normally doesn’t send analytics events. And that way, suddenly you have your conversion data available.

So very often people look to the top of the funnel, like how many people came to my site? But they don’t look to what these people did beyond kind of, oh, they stayed two minutes. So what does this mean? You want to see, did they buy the thing? Did they sign up for the thing, or subscribe or whatever it is? And we help create this data because we have this unique access to the source code of the site.

So we create events, for example, on lead generation or purchases. Also, every time that a specific page is viewed, we will generate an event about the author of the page. So then we can aggregate the data, which authors bring in the most page views. Let’s say you have like a site with five, six, whatever authors. Or which categories are bringing in the most engagement, and these kind of things.

[00:16:52] Nathan Wrigley: So it really does get very WordPressy. It’s not just to do with the Google side of things. It is mapping information from Google, so categories, author profiles, that kind of thing, and mapping them into the analytics that you get. Okay, that’s interesting. So it’s a two-way process, not just a one-way process.

[00:17:09] Mariya Moeva: Yeah. It’s very much integrated with WordPress. We have also a lot of other features, like for example, that kind of stretch into other parts of the website. So this Reader Revenue Manager that I mentioned before with the prompts that you can put on your pages. You can go to the individual post and for every post there’s like a little piece of control UI that we’ve added there in the compose screen, where you can say, this is excluded from this prompt, or, you know, you can control from there.

So we try to integrate where it makes sense, like where the person would want to take this action. And again, because it’s on the website, we can kind of spread out beyond just this one dashboard.

[00:17:48] Nathan Wrigley: And would I, as a site admin, would I be able to assign permissions to different user roles within WordPress? So for example, an editor, or a certain user profile, may be able to see a subset of data. You know, for example, I don’t know, you are involved in the spending on AdSense. But you, other user over there, you’ve got nothing to do with that. But you are into the analytics, so you can see that, and you over there you can see that. Is that possible?

[00:18:12] Mariya Moeva: We have something called dashboard sharing. So it has the same, like if you use Google Docs or anything like that, it has this little person with a plus in the corner, icon. And then from there, if you are the admin who set up this particular Google Service, who connected it to Site Kit, then you’re able to say who should be able to see it. So you essentially grant view only access to, let’s say all the editors, or all the contributors or whatever. And then you can choose which Google service’s data they can see.

[00:18:44] Nathan Wrigley: So yes is the answer to that, yeah.

[00:18:46] Mariya Moeva: Yeah, yeah. So they don’t have to set it up, I mean, they have to go through a very simplified setup, and then they basically get a kind of a screenshot. I mean it’s, you can still click on things, but you can’t change anything, so it’s kind of a view-only dashboard.

[00:18:59] Nathan Wrigley: I’m kind of curious about the market that you pitch this to. So sell is the wrong word because it’s a free plugin, but who you’re pitching it at. So obviously if you’ve got that end user, the site owner. Maybe they’ve got a site and they’ve got a small business with a team. Maybe it’s just them, so there’s the whole permissions thing there.

But also I know that Google, there are whole agencies out there who just specialise in Google products, and analysing the data that comes out of Analytics. Can you do that as well as an agency? Could I set this up for my clients and have some, you know, I’ve got my agency dashboard and I want to give this client access to this website, and this website and this website, but not these other ones? Can it be deployed on a sort of agency basis like that?

[00:19:38] Mariya Moeva: You would still have to activate it for every individual site. So in that sense, there’s a bunch of steps that you have to go through. But once it’s activated, you can then share with any kind of client. And actually we have a lot of agencies that can install it for every site that they have.

Just today someone came and after he saw the demo, he was like, okay, I’m going to install it for all my clients. Because what we’ve heard is that it’s exactly the level of information that a client would benefit from. And this means then that they pester the agency less. So we’ve literally heard people saying, you’re saving me a lot of phone calls. So that’s why agencies really like it.

And the next big feature request, which we’re working on right now, is to generate like an email report out of that. So for those who don’t even want to log into WordPress to see, there will be a possibility to get this in their inbox.

[00:20:30] Nathan Wrigley: So you could get it like a weekly summary, whatever it that wish to trigger. And, okay, so that could go anywhere really. And then your clients don’t even need to phone you about that.

[00:20:41] Mariya Moeva: Yeah. So we are trying to really actively reach people where they are, even if that’s their email inbox.

[00:20:49] Nathan Wrigley: And the other question I have is around your relationship with some of the bigger players, maybe hosting companies. Do you have this pre-installed on hosting cPanels and their, you know, whatever it is that they’ve got in their back end?

[00:21:02] Mariya Moeva: Yeah, we have quite a few hosting providers that pre-install it for their WordPress customers. The reason for this is that they see better lifetime value for those customers that have a good idea of how their site is doing. And yeah, Hostinger is one of those. cPanel. Elementor pre-installs it for all of their users. And they see very good feedback because again, it’s super simple to set up and super easy to understand once you have it. So for them it’s kind of like an extra feature that they can offer, extra value to their users for free.

[00:21:32] Nathan Wrigley: We know Google’s a fabulous company, but you don’t do things for nothing. So what’s the return? How does it work in reverse? So we know that presumably there must be an exchange of data. What are we signing up for if we install Site Kit?

[00:21:47] Mariya Moeva: So, at least, I mean, Google is a huge company, right? There’s hundreds of thousands of people working. So I can’t speak for the whole of Google, but I can speak for the Ecosystem Team, which I’m part of, like the web ecosystem.

The main investment here, or the main goal for us is that the open web continues to thrive, because if people don’t put content, interesting, relevant content on the open web, the search results are going to be very poor and that’s not a good product.

So our idea is to support all the people who create content to make sure that they’re found, like if you’re a local business, that people can find you when they need stuff from that particular local business. And what we see is that, especially for smaller and medium sites, they really struggle, first with going online, and then with figuring out what they’re supposed to do. And so a lot of them give up because in comparison to other platforms, it’s a little bit of an upfront investment, right? Like you have to pay for hosting, you have to set up the site, you have to add content.

So we try to help people as much as we can to see the value that the open web brings to them, so that they can continue to create for the open web. So that’s our hidden motivation. I think in that sense, we’re very much aligned with the WordPress community because here everybody cares about the open web and for all kind of small, weird websites to continue flourishing and get their like 100 or 300 or 1,000 readers that they deserve.

So that’s the motivation. I think because it includes other things like AdSense and AdWords, like people can set up an ads campaign directly from Site Kit in a very simplified flow, and the same thing for AdSense. Obviously some money changes hands, but this is relatively minor compared to the benefit that we think there is for the web in general.

[00:23:35] Nathan Wrigley: Google really does seem to have a very large presence at WordPress events. I mean, I don’t know about the smaller ones, you know, the regional sort of city based events, but at the, what they call flagship events, so WordCamp Asia and WordCamp Europe and US, there’s the whole sponsor area. And it’s usual to see one of the larger booths being occupied by Google. And I wonder, is it Site Kit that you are talking about when you are here or is it other things as well?

But also it’s curious to me that Google would be here in that presence, because those things are not cheap to maintain. So there must be somebody up in Google somewhere saying, okay, this is something we want to invest in. So is it Site Kit that you are basically at the booth talking about?

[00:24:19] Mariya Moeva: So me, yes, or people on my team. We have like a Site Kit section this year. There’s also Google Trends. There’s also some other people talking about user experience and on search. And this changes depending on which teams within Google want to reach out to the WordPress community.

But with Site Kit, we’ve been pretty consistent for the last six years. We are always part of the booth. But the kind of whole team, like the whole Google booth content has kind of changed over the years as well depending on who’s coming.

[00:24:51] Nathan Wrigley: I know that a lot of work being done is surrounding performance and things like that, and a lot of the Google staff that are in the WordPress space seem to be focused on that kind of thing, talking about the new APIs that are shipping in the browsers and all of those kind of things.

Okay, so on the face of it, a fairly straightforward product to use. But I’m guessing the devil is in the detail. How do you go about supporting this? So for example, if I was to install it and to run into some problems, do you have like a, I don’t know, a documentation area or do you have support, or chat or anything like that? Because I know that with the best will in the world, people are going to run into problems. How do people manage that kind of thing?

[00:25:27] Mariya Moeva: Yeah, this was something that I was super, I felt really strongly about based on my previous experience in the developer advocate world. Because very often I got feedback that it’s super hard to reach Google. And it’s also understandable given the scale of some of the products.

But when I started this project I insisted that we allocate resources for support. So we have two people full-time support. One of them is upstairs, the support lead. He knows the product inside and out. They’re always on the forum, the plugin forum, support forum. And they answer usually within 24 hours. So everybody who has a question gets their question answered.

We’ve also created some very detailed additions. When you have Site Kit, you also get a few additions to the Site Health information, so you can share that information with them, and they see detailed stuff about the website so they can help debug. And in many, many cases, I’ve seen people come in pretty angry, leave a one star review, then James or Adam, who are the support people, engage with them, and then it turns into a five star review, because they feel like, okay, someone listened to me and helped me figure out what is going on.

We have real people answering questions relatively quickly. And they don’t just go, of course they focus on the WordPress support forum, but they also check Reddit and other places where people like mentioned Site Kit, and they try to help and to direct them to the right place. So for Site Kit, we have very robust support.

Now, when it’s an issue with a product, a Google product that is connected to Site Kit, so it’s not a Site Kit problem, let’s say you got some kind of strange message from AdSense about your account status changing. Then we would have to hand over to the AdSense account manager or support team that they have, because we don’t know everything, like how AdSense makes decisions and stuff like that. But for anything Site Kit related, we are very fast to answer.

[00:27:22] Nathan Wrigley: That’s good to hear because I think you’re right. I think the perception with any giant company is that it kind of becomes a bit impersonal, and Google would be no exception. And having just a forum which never seems to get an answer, you drop something in, six months later, you go back and nobody’s done anything in there except close the thread, kind of slightly annoying. But something like this. So 24 hours, roughly speaking, is the turnaround time.

[00:27:45] Mariya Moeva: Yeah. I mean, not on the weekend, but yeah.

[00:27:46] Nathan Wrigley: Yeah. Still, that’s pretty amazing.

[00:27:47] Mariya Moeva: Yeah, yeah. We are very serious about this because, I mean, also the WordPress community is really strong, right? So you want to show that we care. We want to hear from people. A lot of bugs then also turn into feature requests and get prioritised to be developed. So, yeah, we really value when people come to complain. It’s a good thing.

[00:28:03] Nathan Wrigley: Excellent. Okay, well, we won’t open that as a goal, please send in your complaints. But nevertheless, it’s nice that you take it seriously.

So it sounds like it’s under active development. You sound like this is basically what you’re doing over at Google. Do you have a roadmap? Do you have a sort of laundry list of things that you want to achieve over the next six months? Interesting things that we might want to hear about.

[00:28:21] Mariya Moeva: Sure, yeah. I mean, my ultimate vision, which is not the next six months, I would love to move away as much as possible from just stats. As curated and as kind of structured as it is right now, and get more into like recommendations, and like to-do list. Because what I hear from people again and again, it’s like, I have two hours this month, tell me what should I do with those two hours?

So they’re asking a lot from us. They’re asking essentially to look, analyse everything and to prioritise their tasks, to tell them which one is the most important or most impactful. And this is like several levels of analysis further than where we are now.

So one thing that we are looking to work on is benchmarking, because you cannot know are you growing or not, unless you know how you’re doing on average. And today, people who are a little bit more savvy can do this of course, but a lot of people don’t. And so for us to be able to tell you, not just you got 20 clicks this week, but also this is okay for you, or this is better than last year, this time, or this is better than your competitors. I think that’s a really valuable way to interpret the data and to help people understand what it means.

[00:29:38] Nathan Wrigley: Yeah. And really, Google is one of the only entities that can provide that kind of data.

[00:29:44] Mariya Moeva: Especially for search.

[00:29:45] Nathan Wrigley: Yeah, especially against competitors. That’s really interesting because analysing the data, whilst it’s fun for some people, I feel it’s not that interesting for most people. And so just having spreadsheets of data, charts of data, it’s interesting and you no doubt gain some important knowledge from it. But being told, here’s the outcomes of that data, try doing this thing and try doing that thing, that is much more profound than just demonstrating the data.

And I’m guessing, I could be wrong about this, and I’ve more or less said this in every interview over the last year, I’m guessing there’s an AI component to all of that. Getting AI to sort of analyse the data and give useful feedback.

[00:30:22] Mariya Moeva: I mean, we are investigating how to do all of these things. I think in the case of WordPress, it’s a little bit trickier again, because of the distributed nature, and the fact that all the site information lives on the site and then all the Google information. So we’re not like fully hosted where you can access everything and control everything, something like a Squarespace or a Wix.

But there’s definitely, like AI is a perfect use case for this, right? Like benchmarking, you can bucket sites into relevant groups and then see, are they performing better or worse? That’s like classic machine learning case. And we will see exactly, technically, how we’re going to reach this, but that’s one of the things that we’re working on right now.

Another thing is to expand much more the conversion reporting, and to help people understand, are they achieving their goals? Because this is something that, surprisingly to me, so many people pay money and invest time in the site, and they cannot articulate what the site is doing. Is it working? Is it doing its job? And they’re like, well, like I got some people visiting. And I’m like, did they buy the thing? So you have to know what to track, and then also to take action after you see the metrics, like to move them in one direction or another. And so helping people like map out this full funnel is one thing that we’re working on. And the other thing is also this email report.

[00:31:40] Nathan Wrigley: Yeah, that’s amazing. So really under active development. And you sound very impassioned about it. You sound like this has become your mission, you know?

[00:31:47] Mariya Moeva: I think, nobody ever complained that something is easy, right? When you make things simple and easy for people, they appreciate, even if they’re more knowledgeable than if they can do more advanced things themselves.

And I personally really care, like every time that I find a random website with really strange content, but just, someone put their soul into it. I recently found something in Zurich of like tours of Zurich, walking tours, by someone who really cares about history and architecture.

And it’s a terrible website design wise, but the content is amazing. And I was like, okay, this person could use some help, but he’s doing, or she’s doing like a great job at the content part, and then should get the traffic that they deserve for this. So that’s what motivates me also to come here.

One person, two or three WordCamps ago came over and was saying, everything about Google is hard except Site Kit. And I was like, yeah, that’s what we are trying to do. We really want to simplify things for you. So, yeah, being here is also super motivating. To talk to people and to hear feedback and feature requests. And again, we like when people come to complain.

[00:32:54] Nathan Wrigley: Well, I was just speaking to a few people prior to you entering the room and those few people all have Site Kit installed on their site. So you’re doing something right.

[00:33:02] Mariya Moeva: I hope it’s helpful. I hope it answers some questions and saves people some time. That’s what we are trying to do. Yeah, we are in the part of Google that has the ecosystem focus, so we know that ecosystem changes take longer. I mean, still it’s a fast growing plugin. It got to 5 million in 5 years, but still that’s 5 years. And in the context of software companies which move very fast, 5 years is a long time.

Yeah, we will keep going and hopefully more people can benefit from it. But we do have, yeah, still there are many people who come by and they’re like, whoa, what is this? Show me.

[00:33:36] Nathan Wrigley: Well, that’s nice. There’s for growth as well.

[00:33:38] Mariya Moeva: Yeah, yeah. For sure. I mean, for sure there’s always, and more people create new sites. So, again, going back to that hosting provider question of like, can we bring it to them at the moment of creation so that they know this is something I can use?

[00:33:50] Nathan Wrigley: Yeah. So one more time, the URL is sitekit.withgoogle.com. I will place that into the show notes as well.

Mariya, I think that’s everything that I have to ask. Thank you so much for chatting to me about Site Kit.

[00:34:01] Mariya Moeva: Yeah, thank you for the invitation. It’s been a pleasure to talk about the ecosystem. And, yeah, if people have feature requests, they can always write us either on GitHub in the Site Kit repo, or on the support forum, or if they are coming to any WordCamp where we also are, we are also super happy to hear. So we always love to know what people struggle with, so that we can build it for them and make it easy.

[00:34:23] Nathan Wrigley: Thank you very much indeed.

Useful links

Site Kit

Reader Revenue Manager

Google Trends

Site Kit support

Site Kit on GitHub


The Core Model: Start FROM The Answer, Not WITH The Solution

Ever sat in a meeting where everyone jumped straight to solutions? “We need a new app!” “Let’s redesign the homepage!” “AI will fix everything!” This solution-first thinking is endemic in digital development — and it’s why so many projects fail to deliver real value. As the creator of the Core Model methodology, I developed this approach to flip the script: instead of starting with solutions, we start FROM the answer.

What’s the difference? Starting with solutions means imposing our preconceived ideas. Starting FROM the answer to a user task means forming a hypothesis about what users need, then taking a step back to follow a simple structure that validates and refines that hypothesis.

Six Good Questions That Lead to Better Answers

At its heart, the Core Model is simply six good questions asked in the right order, with a seventh that drives action. It appeals to common sense — something often in short supply during complex digital projects.

When I introduced this approach to a large organization struggling with their website, their head of digital admitted: “We’ve been asking all these questions separately, but never in this structured way that connects them.”

These questions help teams pause, align around what matters, and create solutions that actually work:

  1. Who are we trying to help, and what’s their situation?
  2. What are they trying to accomplish?
  3. What do we want to achieve?
  4. How do they approach this need?
  5. Where should they go next?
  6. What’s the essential content or functionality they need?
  7. What needs to be done to create this solution?

This simple framework creates clarity across team boundaries, bringing together content creators, designers, developers, customer service, subject matter experts, and leadership around a shared understanding.

Starting With a Hypothesis

The Core Model process typically begins before the workshop. The project lead or facilitator works with key stakeholders to:

  1. Identify candidate cores based on organizational priorities and user needs.
  2. Gather existing user insights and business objectives.
  3. Form initial hypotheses about what these cores should accomplish.
  4. Prepare relevant background materials for workshop participants.

This preparation ensures the workshop itself is focused and productive, with teams validating and refining hypotheses rather than starting from scratch.

The Core Model: Six Elements That Create Alignment

Let’s explore each element of the Core Model in detail:

1. Target Group: Building Empathy First

Rather than detailed personas, the Core Model starts with quick proto-personas that build empathy for users in specific situations:

  • A parent researching childcare options late at night after a long day.
  • A small business owner trying to understand tax requirements between client meetings.
  • A new resident navigating unfamiliar public services in their second language.

The key is to humanize users and understand their emotional and practical context before diving into solutions.

2. User Tasks: What People Are Actually Trying to Do

Beyond features or content, what are users actually trying to accomplish?

  • Making an informed decision about a major purchase.
  • Finding the right form to apply for a service.
  • Understanding next steps in a complex process.
  • Checking eligibility for a program or benefit.

These tasks should be based on user research and drive everything that follows. Top task methodology is a great approach to this.

3. Business Objectives: What Success Looks Like

Every digital initiative should connect to clear organizational goals:

  • Increasing online self-service adoption.
  • Reducing support costs.
  • Improving satisfaction and loyalty.
  • Meeting compliance requirements.
  • Generating leads or sales.

These objectives provide the measurement framework for success. (If you work with OKRs, you can think of these as Key Results that connect to your overall Objective.)

4. Inward Paths: User Scenarios and Approaches

This element goes beyond just findability to include the user’s entire approach and mental model:

  • What scenarios lead them to this need?
  • What terminology do they use to describe their problem?
  • How would they phrase their need to Google or an LLM?
  • What emotions or urgency are they experiencing?
  • What channels or touchpoints do they use?
  • What existing knowledge do they bring?

Understanding these different angles of approach ensures we meet users where they are.

5. Forward Paths: Guiding the Journey

What should users do after engaging with this core?

  • Take a specific action to continue their task.
  • Explore related information or options.
  • Connect with appropriate support channels.
  • Save or share their progress.

These paths create coherent journeys (core flows) rather than dead ends.

6. Core Content: The Essential Solution

Only after mapping the previous elements do we define the actual solution:

  • What information must be included?
  • What functionality is essential?
  • What tone and language are appropriate?
  • What format best serves the need?

This becomes our blueprint for what actually needs to be created.

Action Cards: From Insight to Implementation

The Core Model process culminates with action cards that answer the crucial seventh question: “What needs to be done to create this solution?”

These cards typically include:

  • Specific actions required;
  • Who is responsible;
  • Timeline for completion;
  • Resources needed;
  • Dependencies and constraints.

Action cards transform insights into concrete next steps, ensuring the workshop leads to real improvements rather than just interesting discussions.

The Power of Core Pairs

A unique aspect of the Core Model methodology is working in core pairs—two people from different competencies or departments working together on the same core sheet. This approach creates several benefits:

  • Cross-disciplinary insight
    Pairing someone with deep subject knowledge with someone who brings a fresh perspective.
  • Built-in quality control
    Partners catch blind spots and challenge assumptions.
  • Simplified communication
    One-to-one dialogue is more effective than group discussions.
  • Shared ownership
    Both participants develop a commitment to the solution.
  • Knowledge transfer
    Skills and insights flow naturally between disciplines.

The ideal pair combines different perspectives — content and design, business and technical, expert and novice — creating a balanced approach that neither could achieve alone.

Creating Alignment Within and Between Teams

The Core Model excels at creating two crucial types of alignment:

Within Cross-Functional Teams

Modern teams bring together diverse competencies:

  • Content creators focus on messages and narrative.
  • Designers think about user experience and interfaces.
  • Developers consider technical implementation.
  • Business stakeholders prioritize organizational needs.

The Core Model gives these specialists a common framework. Instead of the designer focusing only on interfaces or the developer only on code, everyone aligns around user tasks and business goals.

As one UX designer told me:

“The Core Model changed our team dynamic completely. Instead of handing off wireframes to developers who didn’t understand the ‘why’ behind design decisions, we now share a common understanding of what we’re trying to accomplish.”

Between Teams Across the Customer Journey

Users don’t experience your organization in silos — they move across touchpoints and teams. The Core Model helps connect these experiences:

  • Marketing teams understand how their campaigns connect to service delivery.
  • Product teams see how their features fit into larger user journeys.
  • Support teams gain context on user pathways and common issues.
  • Content teams create information that supports the entire journey.

By mapping connections between cores (core flows), organizations create coherent experiences rather than fragmented interactions.

Breaking Down Organizational Barriers

The Core Model creates a neutral framework where various perspectives can contribute while maintaining a unified direction. This is particularly valuable in traditional organizational structures where content responsibility is distributed across departments.

The Workshop: Making It Happen

The Core Model workshop brings these elements together in a practical format that can be adapted to different contexts and needs.

Workshop Format and Timing

For complex projects with multiple stakeholders across organizational silos, the ideal format is a full-day (6-hour) workshop:

First Hour: Foundation and Context

  • Introduction to the methodology (15 min).
  • Sharing user insights and business context (15 min).
  • Reviewing pre-workshop hypotheses (15 min).
  • Initial discussion and questions (15 min).

Hours 2–4: Core Mapping

  • Core pairs work on mapping elements (120 min).
  • Sharing between core pairs and in plenary between elements.
  • Facilitators provide guidance as needed.

Hours 5–6: Presentation, Discussion, and Action Planning

  • Each core pair presents its findings (depending on the number of cores).
  • Extensive group discussion and refinement.
  • Creating action cards and next steps.

The format is highly flexible:

  • Teams experienced with the methodology can conduct focused sessions in as little as 30 minutes.
  • Smaller projects might need only 2–3 hours.
  • Remote teams might split the workshop into multiple shorter sessions.

Workshop Environment

The Core Model workshop thrives in different environments:

  • Analog: Traditional approach using paper core sheets.
  • Digital: Virtual workshops using Miro, Mural, FigJam, or similar platforms.
  • Hybrid: Digital canvas in physical workshop, combining in-person interaction with digital documentation.

Note: You can find all downloads and templates here.

Core Pairs: The Key to Success

The composition of core pairs is critical to success:

  • One person should know the solution domain well (subject matter expert).
  • The other brings a fresh perspective (and learns about a different domain).
  • This combination ensures both depth of knowledge and fresh thinking.
  • Cross-functional pairing creates natural knowledge transfer and breaks down silos.

Workshop Deliverables

Important to note: The workshop doesn’t produce final solutions.

Instead, it creates a comprehensive brief containing the following:

  • Priorities and context for content development.
  • Direction and ideas for design and user experience.
  • Requirements and specifications for functionality.
  • Action plan for implementation with clear ownership.

This brief becomes the foundation for subsequent development work, ensuring everyone builds toward the same goal while leaving room for specialist expertise during implementation.

Getting Started: Your First Core Model Implementation

Ready to apply the Core Model in your organization? Here’s how to begin:

1. Form Your Initial Hypothesis

Before bringing everyone together:

  • Identify a core where users struggle and the business impact is clear.
  • Gather available user insights and business objectives.
  • Form a hypothesis about what this core should accomplish.
  • Identify key stakeholders across relevant departments.

2. Bring Together the Right Core Pairs

Select participants who represent different perspectives:

  • Content creators paired with designers.
  • Business experts paired with technical specialists.
  • Subject matter experts paired with user advocates.
  • Veterans paired with fresh perspectives.

3. Follow the Seven Questions

Guide core pairs through the process:

  • Who are we trying to help, and what’s their situation?
  • What are they trying to accomplish?
  • What do we want to achieve?
  • How do they approach this need?
  • Where should they go next?
  • What’s the essential content or functionality?
  • What needs to be done to create this solution?

4. Create an Action Plan

Transform insights into concrete actions:

  • Document specific next steps on action cards.
  • Assign clear ownership for each action.
  • Establish timeline and milestones.
  • Define how you’ll measure success.

In Conclusion: Common Sense In A Structured Framework

The Core Model works because it combines common sense with structure — asking the right questions in the right order to ensure we address what actually matters.

By starting FROM the answer, not WITH the solution, teams avoid premature problem-solving and create digital experiences that truly serve user needs while achieving organizational goals.

Whether you’re managing a traditional website, creating multi-channel content, or developing digital products, this methodology provides a framework for better collaboration, clearer priorities, and more effective outcomes.

This article is a short adaptation of my book The Core Model — A Common Sense Approach to Digital Strategy and Design. You can find information about the book and updated resources at thecoremodel.com.


Web Components: Working With Shadow DOM

It’s common to see Web Components directly compared to framework components. But most examples are actually specific to Custom Elements, which is one piece of the Web Components picture. It’s easy to forget Web Components are actually a set of individual Web Platform APIs that can be used on their own:

  • Custom Elements
  • HTML Templates
  • Shadow DOM

In other words, it’s possible to create a Custom Element without using Shadow DOM or HTML Templates, but combining these features opens up enhanced stability, reusability, maintainability, and security. They’re all parts of the same feature set that can be used separately or together.

With that being said, I want to pay particular attention to Shadow DOM and where it fits into this picture. Working with Shadow DOM allows us to define clear boundaries between the various parts of our web applications — encapsulating related HTML and CSS inside a DocumentFragment to isolate components, prevent conflicts, and maintain clean separation of concerns.

How you take advantage of that encapsulation involves trade-offs and a variety of approaches. In this article, we’ll explore those nuances in depth, and in a follow-up piece, we’ll dive into how to work effectively with encapsulated styles.

Why Shadow DOM Exists

Most modern web applications are built from an assortment of libraries and components from a variety of providers. With the traditional (or “light”) DOM, it’s easy for styles and scripts to leak into or collide with each other. If you are using a framework, you might be able to trust that everything has been written to work seamlessly together, but effort must still be made to ensure that all elements have a unique ID and that CSS rules are scoped as specifically as possible. This can lead to overly verbose code that both increases app load time and reduces maintainability.

<!-- div soup -->
<div id="my-custom-app-framework-landingpage-header" class="my-custom-app-framework-foo">
  <div><div><div><div><div><div>etc...</div></div></div></div></div></div>
</div>

Shadow DOM was introduced to solve these problems by providing a way to isolate each component. The <video> and <details> elements are good examples of native HTML elements that use Shadow DOM internally by default to prevent interference from global styles or scripts. Harnessing this hidden power that drives native browser components is what really sets Web Components apart from their framework counterparts.

Elements That Can Host A Shadow Root

Most often, you will see shadow roots associated with Custom Elements. However, they can also be used with any HTMLUnknownElement, and many standard elements support them as well, including:

  • <aside>
  • <blockquote>
  • <body>
  • <div>
  • <footer>
  • <h1> to <h6>
  • <header>
  • <main>
  • <nav>
  • <p>
  • <section>
  • <span>

Each element can only have one shadow root. Some elements, including <input> and <select>, already have a built-in shadow root that is not accessible through scripting. You can inspect them with your Developer Tools by enabling the Show User Agent Shadow DOM setting, which is “off” by default.
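For illustration, here is a minimal sketch of my own (not from the original article) showing that rule in practice; a second call to attachShadow() on the same element throws:

const el = document.createElement('div');
el.attachShadow({ mode: 'open' });
// The element is already a shadow host, so this second call fails:
el.attachShadow({ mode: 'open' }); // throws a "NotSupportedError" DOMException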

Creating A Shadow Root

Before leveraging the benefits of Shadow DOM, you first need to establish a shadow root on an element. This can be instantiated imperatively or declaratively.

Imperative Instantiation

To create a shadow root using JavaScript, use attachShadow({ mode }) on an element. The mode can be open (allowing access via element.shadowRoot) or closed (hiding the shadow root from outside scripts).

const host = document.createElement('div');
const shadow = host.attachShadow({ mode: 'open' });
shadow.innerHTML = '<p>Hello from the Shadow DOM!</p>';
document.body.appendChild(host);

In this example, we’ve established an open shadow root. This means that the element’s content is accessible from the outside, and we can query it like any other DOM node:

host.shadowRoot.querySelector('p'); // selects the paragraph element

If we want to prevent external scripts from accessing our internal structure entirely, we can set the mode to closed instead. This causes the element’s shadowRoot property to return null. We can still access it from our shadow reference in the scope where we created it.

shadow.querySelector('p');
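For instance, here is a minimal closed-mode version of the earlier example. The outside world now receives null where it previously received a shadow root reference:

const host = document.createElement('div');
const shadow = host.attachShadow({ mode: 'closed' });
shadow.innerHTML = '<p>Hello from a closed Shadow DOM!</p>';
document.body.appendChild(host);

console.log(host.shadowRoot); // null
console.log(shadow.querySelector('p').textContent); // still reachable via our saved reference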

This is a crucial security feature. With a closed shadow root, we can be confident that malicious actors cannot extract private user data from our components. For example, consider a widget that shows banking information. Perhaps it contains the user’s account number. With an open shadow root, any script on the page can drill into our component and parse its contents. In closed mode, only the user can perform this kind of action with manual copy-pasting or by inspecting the element.

I suggest a closed-first approach when working with Shadow DOM. Make a habit of using closed mode, reaching for open mode only while debugging or when a real-world limitation genuinely cannot be worked around any other way. If you follow this approach, you will find that the instances where open mode is actually required are few and far between.

Declarative Instantiation

We don’t have to use JavaScript to take advantage of Shadow DOM. Registering a shadow root can be done declaratively. Nesting a <template> with a shadowrootmode attribute inside any supported element will cause the browser to automatically upgrade that element with a shadow root. Attaching a shadow root in this manner can even be done with JavaScript disabled.

<my-widget>
  <template shadowrootmode="closed">
    <p> Declarative Shadow DOM content </p>
  </template>
</my-widget>

Again, this can be either open or closed. Consider the security implications before using open mode. Note, however, that content in a closed declarative shadow root cannot be accessed by any script unless it is attached to a registered Custom Element, in which case you can use ElementInternals to reach the automatically attached shadow root:

class MyWidget extends HTMLElement {
  #internals;
  #shadowRoot;
  constructor() {
    super();
    this.#internals = this.attachInternals();
    this.#shadowRoot = this.#internals.shadowRoot;
  }
  connectedCallback() {
    const p = this.#shadowRoot.querySelector('p')
    console.log(p.textContent); // this works
  }
};
customElements.define('my-widget', MyWidget);
export { MyWidget };

Shadow DOM Configuration

There are three other options besides mode that we can pass to Element.attachShadow().

Option 1: clonable:true

Until recently, if a standard element had a shadow root attached and you tried to clone it using Node.cloneNode(true) or document.importNode(node,true), you would only get a shallow copy of the host element without the shadow root content. The examples we just looked at would actually return an empty <div>. This was never an issue with Custom Elements that built their own shadow root internally.

But for a declarative Shadow DOM, this means that each element needs its own template, and they cannot be reused. With this newly-added feature, we can selectively clone components when it’s desirable:

<div id="original">
  <template shadowrootmode="closed" shadowrootclonable>
    <p> This is a test  </p>
  </template>
</div>

<script>
  const original = document.getElementById('original');
  const copy = original.cloneNode(true);
  copy.id = 'copy';
  document.body.append(copy); // includes the shadow root content
</script>

Option 2: serializable:true

Enabling this option allows you to save a string representation of the content inside an element’s shadow root. Calling Element.getHTML() on a host element will return a template copy of the Shadow DOM’s current state, including all nested instances of shadowrootserializable. This can be used to inject a copy of your shadow root into another host, or cache it for later use.

In Chrome, this actually works through a closed shadow root, so be careful of accidentally leaking user data with this feature. A safer alternative would be to use a closed wrapper to shield the inner contents from external influences while still keeping things open internally:

<wrapper-element></wrapper-element>

<script>
  class WrapperElement extends HTMLElement {
    #shadow;
    constructor() {
      super();
      this.#shadow = this.attachShadow({ mode:'closed' });
      this.#shadow.setHTMLUnsafe(`
        <nested-element>
          <template shadowrootmode="open" shadowrootserializable>
            <div id="test">
              <template shadowrootmode="open" shadowrootserializable>
                <p> Deep Shadow DOM Content </p>
              </template>
            </div>
          </template>
        </nested-element>`);
      this.cloneContent();
    }
    cloneContent() {
      const nested = this.#shadow.querySelector('nested-element');
      const snapshot = nested.getHTML({ serializableShadowRoots: true });
      const temp = document.createElement('div');
      temp.setHTMLUnsafe(`<another-element>${snapshot}</another-element>`);
      const copy = temp.querySelector('another-element');
      copy.shadowRoot.querySelector('#test').shadowRoot.querySelector('p').textContent = 'Changed Content!';
      this.#shadow.append(copy);
    }
  }
  customElements.define('wrapper-element', WrapperElement);
  const wrapper = document.querySelector('wrapper-element');
  const test = wrapper.getHTML({ serializableShadowRoots: true });
  console.log(test); // empty string due to closed shadow root
</script>

Notice setHTMLUnsafe(). That’s there because the content contains <template> elements. This method must be called when injecting trusted content of this nature. Inserting the template using innerHTML would not trigger the automatic initialization into a shadow root.
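As a rough sketch of that difference, using a hypothetical, unregistered <demo-widget> tag in a browser that supports setHTMLUnsafe():

const markup = `
  <demo-widget>
    <template shadowrootmode="open">
      <p>Hello</p>
    </template>
  </demo-widget>`;

// innerHTML parses the <template>, but never upgrades it into a shadow root.
const viaInnerHTML = document.createElement('div');
viaInnerHTML.innerHTML = markup;
console.log(viaInnerHTML.querySelector('demo-widget').shadowRoot); // null

// setHTMLUnsafe() performs the upgrade, so the shadow root exists immediately.
const viaSetHTMLUnsafe = document.createElement('div');
viaSetHTMLUnsafe.setHTMLUnsafe(markup);
console.log(viaSetHTMLUnsafe.querySelector('demo-widget').shadowRoot); // a ShadowRoot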

Option 3: delegatesFocus:true

This option essentially makes our host element act as a <label> for its internal content. When enabled, clicking anywhere on the host or calling .focus() on it will move the cursor to the first focusable element in the shadow root. This will also apply the :focus pseudo-class to the host, which is especially useful when creating components that are intended to participate in forms.

<custom-input>
  <template shadowrootmode="closed" shadowrootdelegatesfocus>
    <fieldset>
      <legend> Custom Input </legend>
      <p> Click anywhere on this element to focus the input </p>
      <input type="text" placeholder="Enter some text...">
    </fieldset>
  </template>
</custom-input>

This example only demonstrates focus delegation. One of the oddities of encapsulation is that form submissions are not automatically connected. That means an input’s value will not be in the form submission by default. Form validation and states are also not communicated out of the Shadow DOM. There are similar connectivity issues with accessibility, where the shadow root boundary can interfere with ARIA. These are all considerations specific to forms that we can address with ElementInternals, which is a topic for another article, and is cause to question whether you can rely on a light DOM form instead.

Slotted Content

So far, we have only looked at fully encapsulated components. A key Shadow DOM feature is using slots to selectively inject content into the component’s internal structure. Each shadow root can have one default (unnamed) <slot>; all others must be named. Naming a slot allows us to provide content to fill specific parts of our component as well as fallback content to fill any slots that are omitted by the user:

<my-widget>
  <template shadowrootmode="closed">
    <h2><slot name="title"><span>Fallback Title</span></slot></h2>
    <slot name="description"><p>A placeholder description.</p></slot>
    <ol><slot></slot></ol>
  </template>
  <span slot="title"> A Slotted Title</span>
  <p slot="description">An example of using slots to fill parts of a component.</p>
  <li>Foo</li>
  <li>Bar</li>
  <li>Baz</li>
</my-widget>

Default slots also support fallback content, but any stray text nodes will fill them. As a result, this only works if you collapse all whitespace in the host element’s markup:

<my-widget><template shadowrootmode="closed">
  <slot><span>Fallback Content</span></slot>
</template></my-widget>

Slot elements emit slotchange events when their assignedNodes() are added or removed. These events do not contain a reference to the slot or the nodes, so you will need to pass those into your event handler:

class SlottedWidget extends HTMLElement {
  #internals;
  #shadow;
  constructor() {
    super();
    this.#internals = this.attachInternals();
    this.#shadow = this.#internals.shadowRoot;
    this.configureSlots();
  }
  configureSlots() {
    const slots = this.#shadow.querySelectorAll('slot');
    console.log({ slots });
    slots.forEach(slot => {
      slot.addEventListener('slotchange', () => {
        console.log({
          changedSlot: slot.name || 'default',
          assignedNodes: slot.assignedNodes()
        });
      });
    });
  }
}
customElements.define('slotted-widget', SlottedWidget);

Multiple elements can be assigned to a single slot, either declaratively with the slot attribute or through scripting:

const widget = document.querySelector('slotted-widget');
const added = document.createElement('p');
added.textContent = 'A secondary paragraph added using a named slot.';
added.slot = 'description';
widget.append(added);

Notice that the paragraph in this example is appended to the host element. Slotted content actually belongs to the “light” DOM, not the Shadow DOM. Unlike the examples we’ve covered so far, these elements can be queried directly from the document object:

const widgetTitle = document.querySelector('my-widget [slot=title]');
widgetTitle.textContent = 'A Different Title';

If you want to access these elements internally from your class definition, use this.children or this.querySelector. Only the <slot> elements themselves can be queried through the Shadow DOM, not their content.
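As a small illustration, here is a hypothetical method you could add to the SlottedWidget class above. It assumes the #shadow field from that example and a slot named description:

getDescriptionContent() {
  // Slotted elements live in the light DOM, so the host can query them directly.
  const slotted = this.querySelector('[slot="description"]');

  // Inside the shadow root, only the <slot> element itself is reachable...
  const slot = this.#shadow.querySelector('slot[name="description"]');

  // ...but it will hand back the light DOM nodes assigned to it.
  return { slotted, assigned: slot.assignedElements() };
}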

From Mystery To Mastery

Now you know why you would want to use Shadow DOM, when you should incorporate it into your work, and how you can use it right now.

But your Web Components journey can’t end here. We’ve only covered markup and scripting in this article. We have not even touched on another major aspect of Web Components: Style encapsulation. That will be our topic in another article.


Beginner’s Guide to VCDPA Compliance in WordPress

When I first learned about the Virginia Consumer Data Protection Act (VCDPA), I’ll admit I felt a bit overwhelmed.

As someone who’s managed WordPress sites for many years, the idea of learning yet another privacy law felt like a lot. But when I dug into it, I realized it’s more straightforward than it looks.

Still, I’ve seen plenty of site owners make compliance harder than it needs to be—either by overcomplicating the process or missing simple steps.

That’s why I created this guide. I’ll walk you through the VCDPA’s core requirements step by step and share the tools I use to improve WordPress compliance without getting overwhelmed by legal jargon.


What is the Virginia Consumer Data Protection Act (VCDPA)?

The Virginia Consumer Data Protection Act (VCDPA) is a state privacy law that gives Virginia residents more control over their personal data. This includes information that can identify someone directly or indirectly—like names, email addresses, IP addresses, or data collected through website forms or tracking tools.

Even if your business isn’t based in Virginia, the VCDPA might still apply to your WordPress site. What matters is whether you collect personal data from Virginia residents.

That said, the law doesn’t apply to every site. It’s mainly aimed at larger businesses and organizations.

Generally, you need to comply with the VCDPA if you:

  • Control or process the personal data of 100,000 or more Virginia consumers in a calendar year, or
  • Control or process the personal data of at least 25,000 Virginia consumers and get over 50% of your total revenue from selling personal data.

Keep in mind that the law also only applies to businesses or organizations operating for commercial purposes.

If your site fits one of those categories, then it’s essential to understand how the VCDPA works and what steps you need to take to stay compliant.

Why Should WordPress Users Care About VCDPA Compliance?

If your WordPress site falls under the VCDPA, then staying compliant helps you avoid potential penalties. The Virginia Attorney General enforces the VCDPA, and violations can lead to fines of up to $7,500 per incident.

Fortunately, you’ll usually receive a 30-day warning and a chance to fix the issue before any penalties are applied.

It’s also worth noting that consumers can’t directly sue you under this law. Only the Attorney General can take action, which adds a layer of protection, but doesn’t mean you should ignore compliance.

More importantly, showing that you care about user privacy helps build trust with your audience.

When visitors know you’re being transparent and responsible with their data, they’re more likely to stick around, sign up for your email newsletter, or make a purchase from your online store.

Simply put, staying compliant is not just a legal duty. It’s also a key part of building trust and achieving long-term success.

How VCDPA Affects Your WordPress Site

If your site is covered by the VCDPA, then you’re required to support several privacy rights for your visitors. That means making it easy for Virginia residents to control how their personal data is collected, used, and deleted.

As a WordPress site owner, here are the main rights you need to understand and support:

  • The Right to Know: Visitors can ask what personal data you’ve collected about them.
  • The Right to Correction: They can request that you fix any incorrect or outdated information.
  • The Right to Opt-Out: Users can ask you not to sell or share their personal data with other companies.
  • The Right to Data Portability: They can request a copy of their personal data in a format they can use elsewhere, like a ZIP file.
  • The Right to Delete: Users can ask you to permanently delete the data you’ve collected about them.

Throughout this guide, I’ll show you how to support each of these rights using WordPress tools and beginner-friendly strategies.

How to Improve Your VCDPA Compliance in WordPress

VCDPA compliance may sound technical. But at its core, it’s about being transparent with your visitors and giving them control over their personal data.

As a WordPress site owner, there are practical steps you can take to meet these requirements. These include limiting how much data you collect, creating clear policies, and making it easy for users to opt out or request changes.

In this article, I will walk you through each part of the process. You can follow the steps in order or jump ahead to the parts that apply to your site.

Perform a Data Audit

The first step to VCDPA compliance is understanding how your website collects and stores personal data. That means reviewing the tools, plugins, and services you use—and documenting the information they gather.

To start, I recommend making a list of every WordPress plugin on your site, along with any third-party tools that interact with user data. This could include analytics platforms, form builders, or SEO tools.

Once you have that list, check what kind of personal information each tool collects. For example, if you’ve added a quote request form, you’ll want to record whether it asks for names, company details, or job titles.

To guide your audit, ask yourself:

  • What personal data do I collect? This includes names, email addresses, IP addresses, payment details, and any other data submitted through forms or comments.
  • Where is this data stored? Is it saved on your own server or sent to an outside service?
  • Why am I collecting this information? The VCDPA says data must be “adequate, relevant, and reasonably necessary” for your stated purpose.
  • How long do I keep it? You should only store personal data as long as it’s needed for its original purpose.
  • Do I share this data with anyone? This includes service providers, third-party tools, or advertising networks. Be sure to note whether any of this data is used for targeted ads.

Once you’ve completed your audit, you’ll have a clear picture of what data you collect, where it’s stored, and what you need to adjust to meet VCDPA requirements.

Create a Data Compliance Record

After completing your data audit, the next step is to keep a written record of what you found. This document should explain the actions you’ve already taken to follow the VCDPA, along with any updates or fixes you made during your audit.

By creating this record, you’ll have clear proof that you take privacy seriously. That can be helpful if you’re ever audited or if someone asks about your compliance practices.

As you’ll see throughout this guide, it’s not enough to follow the VCDPA behind the scenes. You also need to be able to show that you’re doing things the right way.

Every business website is different, but I recommend running a new data audit and updating your records at least once per year.

You should also update your records any time you change how your site collects or uses personal data. For example, after installing a new plugin that collects user info, or when the law itself changes, it’s a good time to revisit your audit and notes.

Keeping this record up to date doesn’t take much time, and it’ll make compliance much easier in the long run.

Collect Less Data

The VCDPA says you should only collect personal data that’s “adequate, relevant, and reasonably necessary” to meet a specific goal.

In other words: don’t collect anything you don’t truly need.

This idea is known as data minimization. It means reviewing what you currently collect and looking for ways to reduce it. If a piece of information isn’t essential for your site to function—or for the task at hand—it’s better to leave it out.

After completing your data audit, carefully review all the information you collect. Ask yourself: “Do I truly need every single piece of information I’m asking for?”

If something isn’t necessary, remove it. The less data you collect, the easier it is to stay compliant, and the less you’ll have to manage when users make requests.

This approach also builds trust. By avoiding unnecessary questions, you show that you respect your visitors’ privacy and value their time.

Create a Privacy Policy

A privacy policy is a page on your website that clearly explains what personal data you collect, how you use it, and who you share it with.

Having a clear, up-to-date privacy policy is essential for VCDPA compliance. It helps visitors understand how their information is handled and directly supports the VCDPA’s Right to Know requirement.

To make things easier, WordPress includes a built-in tool for creating a privacy policy. You can find it by going to Settings » Privacy in your WordPress dashboard. 

How to generate a privacy policy, using the built-in WordPress tools

Alternatively, you can use our own WPBeginner privacy policy page as a starting point. 

Just remember to change all mentions of ‘WPBeginner’ to your specific business or website name. 

WPBeginner's privacy policy template

Want more detailed instructions? We also have a complete, step-by-step guide on how to add a privacy policy in WordPress.

If your site already has a privacy policy, that’s great, but you’ll still need to review and update it to reflect the VCDPA.

In particular, make sure it covers the key rights your visitors have:

  • Right to Know
  • Right to Delete
  • Right to Correction
  • Right to Opt Out

You’ll also need to explain how users can act on those rights. For example, you might link to a contact form where visitors can request access to their data, or provide steps for updating their profile information.

Finally, don’t forget to keep your privacy policy up to date. This ensures it always reflects your current data practices and any changes to the VCDPA.

Add a Cookie Popup

Many websites use cookies to track user behavior, display ads, or measure analytics. If your site does this, the VCDPA expects you to inform users and give them a way to opt out.

Unlike the GDPR, which requires visitors to actively agree before data is collected, the VCDPA follows an opt-out model. That means you can often collect data by default—as long as users are told what’s being collected and can say no if they want to.

One of the simplest ways to meet this requirement is by adding a cookie popup. A good popup should explain what types of cookies your site uses, what data is being collected, and how that information is used. It should also give users a clear way to opt out.

An example of a cookie consent banner, created using WPConsent

I recommend using WPConsent for this. It’s the same plugin we use on WPBeginner to manage cookie banners and user consent.

It works well for WordPress beginners and is actively updated to follow privacy laws like the VCDPA, GDPR, and CCPA.

💡Want to know more about how WPConsent works on our site? Our in-depth WPConsent review has all the details. 

WPBeginner's cookie consent popup, created using WPConsent

You can also find a free version of WPConsent in the WordPress plugin directory.

To get started, simply install and activate the plugin.

After you activate it, WPConsent will automatically scan your site for active cookies. It will then record all the cookies it finds. 

Scanning your WordPress blog or website for all active cookies

Next, WPConsent’s setup wizard will help you change how your cookie popup looks. You can adjust the layout, text size, button styles, and colors, and even add your own custom logo.

As you make changes, WPConsent will show a live preview. This lets you see exactly how the banner will look on your WordPress website. 

Designing a cookie consent banner using the WPConsent WordPress plugin

When you’re happy with how everything is set up, just save your changes. The cookie banner will then appear on your WordPress website, helping you comply with the VCDPA.

For more detailed instructions, see our full guide on how to add a cookie popup in WordPress.

Write a Separate Cookie Policy 

A cookie popup is a good starting point, but it’s also smart to create a dedicated cookie policy.

This separate page gives visitors more detail about how your site uses cookies. That way, they can better understand what personal information you collect and how it’s used.

In your cookie policy, you should list all the different types of cookies you use on your site. For example, you might use essential cookies (required for your site to work), analytics cookies (to measure website traffic), or marketing cookies (for advertising).

You should also explain what each type of cookie does. For example, some cookies might track user behavior or deliver targeted ads.

It’s also a good idea to describe what kinds of personal data each cookie collects. This might include a visitor’s IP address, device type, or browsing activity.

To build trust, keep your cookie policy easy to understand. This means you should avoid technical terms or legal words that are hard to follow. Instead, use clear and direct language that anyone can read.

Once your cookie policy is written, make sure it’s easy to find. I recommend linking to it from your footer and your cookie popup, as well as your main privacy policy.

Luckily, a tool like WPConsent can do much of this for you. 

As you saw earlier, when you first install WPConsent, it automatically scans your site and identifies any active cookies.

To set up your cookie policy page, go to WPConsent » Settings.

The WPConsent cookie consent plugin for WordPress

In the plugin’s settings, choose the page where you want to display the cookie policy.

WPConsent will then add this policy to your chosen page. It’s that simple. 

An example of a cookie policy, created using WPConsent

If you’re using WPConsent to display a cookie popup, then visitors can now access this policy directly from the popup itself.

They just need to select the ‘Preferences’ button. 

Accessing the cookie policy, directly from a WordPress banner

From there, they can click the ‘Cookie Policy’ link. 

WPConsent will then take them straight to the correct page.

Linking directly to your cookie policy, from a WordPress popup created with WPConsent

Block Third-Party Scripts

One of the most challenging things about VCDPA compliance is that it also covers external tracking tools. These include popular services like Google Analytics and Facebook Pixel.

The reason for this is simple: these tracking tools often collect visitor data. Under the VCDPA, you’re responsible for managing how these third-party tools collect, store, and use that personal information.

You also need to give visitors a way to stop these tools from tracking them if they choose.

So, how do you control tracking scripts from other companies? There’s an easy answer: automatic script blocking.

The VCDPA generally allows the use of tracking tools unless a visitor opts out, especially when used for targeted advertising. But a best practice for building user trust is to block tracking scripts until the visitor opts in.

This approach goes beyond VCDPA requirements and also helps you comply with stricter laws like GDPR. With this feature, scripts won’t load until the visitor explicitly agrees.

It also provides visitors with the information they need to understand what they’re agreeing to before you collect any data. This helps you meet the VCDPA’s Right to Know rule.

Plus, you’re getting a head start on complying with other privacy laws like Europe’s GDPR, which does require opt-in consent. It’s a great way to make your website’s privacy practices strong all around. 

Fortunately, WPConsent has an automatic script blocking feature that works out of the box.

Simply activate the plugin, and it will find and block common tracking scripts automatically. This includes tools like Google Analytics, Google Ads, and Facebook Pixel. Even better, WPConsent does this without breaking your site.

As soon as a visitor gives their consent, WPConsent will run the blocked script. This provides a very smooth user experience because the page does not need to reload.

Track and Log Visitor Consent

Even if you follow all the VCDPA rules, regulators might still question how you handle data or even audit your site.

If this happens, you’ll need to prove that you’re respecting your audience’s choices. That’s why it’s important to keep a detailed record of user consent.

WPConsent makes this easy by automatically logging each user’s consent. It saves all the important details, including the user’s IP address, their consent choices, and the exact date and time they made those choices.

You can see this information at any time by going to WPConsent » Consent Logs in your WordPress dashboard.

How to comply with the VCDPA by creating a privacy consent log

Need to share this information with an auditor or team member? You can export it from your WordPress dashboard in just a few clicks.

To do this, just click the ‘Export’ tab. Then, enter the ‘From Date’ and ‘To Date’ for the export. This creates a CSV file, ready for you to share with auditors, customers, and anyone else who needs access.

Provide an Easy Opt-Out for Data Sales

Under the VCDPA, if your site sells or shares personal data, then you must give visitors a way to opt out.

The easiest way to do this in WordPress is with WPConsent’s Do Not Track add-on. Despite its name, it gives you exactly what you need to meet the VCDPA’s opt-out of sale requirement.

To get started, go to WPConsent » Do Not Track » Configuration inside your WordPress dashboard. 

WPConsent will then guide you through the steps to install this add-on and create a ‘Do Not Track’ form. 

How to achieve VCDPA compliance with WPConsent

🌟 Want more detailed instructions? Then see our guide on how to create a Do Not Sell My Info page in WordPress.

Once it’s active, visitors can fill out a simple form to opt out of the sale or sharing of their data.

Even better, WPConsent stores all opt-out requests directly on your website in a secure table. That way, you keep full control over sensitive data instead of depending on external services.

It also logs each request automatically, giving you built-in proof of compliance in case of an audit.

Support the ‘Right to Delete’

As I mentioned earlier, the VCDPA gives users the right to ask you to delete their personal data.

There are different ways to handle these requests, but the easiest is to add a ‘data erasure’ form to your site.

This is where WPForms can help. It’s a user-friendly form builder that lets you create all kinds of forms using a drag-and-drop editor.

🌟 Here at WPBeginner, we’re not just recommending WPForms – we built all our own forms with it!

From our contact pages to our surveys, it’s all powered by WPForms. We use it daily, which is why we’re confident recommending it.

Ready to see why it’s our go-to? Dive into our detailed WPForms review.

When it comes to fulfilling the VCDPA’s ‘Right to Delete’, WPForms comes with a ready-made Right to Erasure Request Form template.

How to comply with the Virginia Consumer Data Protection Act (VCDPA)  using WPForms

This provides a strong starting point, so you can add this important form to your site quickly and easily. 

After installing WPForms, you can customize the Right to Erasure Request Form template in a user-friendly editor. This makes it easy to add, remove, and change the default fields.

When you’re happy with how the form is set up, you can add it to your site using either a shortcode or the WPForms block. 
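For example, if you go with the shortcode option, it will look something like this. The ID below is just a placeholder; WPForms shows your form’s real ID in its embed options:

[wpforms id="123"]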

How to add data request forms to your WordPress blog or website

Finally, you’ll want to make sure visitors can find this form easily. I recommend linking to it from your privacy policy or even embedding the form directly on your privacy policy page.

WPForms also includes an entry management system that lets you filter form submissions and act on new deletion requests right away.

To review your entries, go to WPForms » Entries in the WordPress dashboard. 

Managing data request submissions in the WordPress dashboard

You’ll now see all the different forms you’ve created. Simply find the data erasure form and give it a click.

WPForms will now display all your ‘delete data’ requests.

Ensuring your WordPress website complies with the Virginia Consumer Data Protection Act (VCDPA)

To process these requests, you can use WordPress’s built-in ‘Erase Personal Data’ tool, which lets you delete user information with just a few clicks.

To begin, go to Tools » Erase Personal Data.

How to delete user data upon request

In the ‘Username or email address’ field, type in the user’s name or email.

This tool also has a ‘Send personal data erasure confirmation email’ setting. You can use it to let the user know you’ve deleted their data.

Notifying users and customers automatically when you delete their private data

For full VCDPA compliance, you’ll also need to delete this data from any other tools or services where it’s stored.

By creating this clear process, you are making it easy for users to exercise their ‘Right to Delete,’ which is a core part of VCDPA compliance.

Handle Data Access Requests Efficiently

Under the VCDPA, visitors have two related rights: the right to access their data and the Right to Data Portability. This means they can request a copy of their personal data in a format that’s easy to use.

The good news is you can handle these requests the same way you manage data deletion.

To start, you can add a data access form to your site using WPForms. It includes a ready-made Data Request template designed to collect all the information needed to identify the user in your records.

An example of a VCDPA-compliant data request template, provided by WPForms

After adding this form to your site, WPForms will automatically record and show all access requests directly in your WordPress dashboard.

That way, you can view and respond to new requests as they arrive.

To review these requests, just go to WPForms » Entries.

How to process customer, visitor, and user requests efficiently

Here, select your data request form. WPForms will then show all the entries for this form.

WordPress also includes a built-in Export Personal Data tool. You can use this to get all known data for any user, conveniently packaged as a .zip file. 

To create this file, go to Tools » Export Personal Data in your WordPress dashboard.

How to export the customer's data upon request

You can then type in the person’s username or email address to find the correct record.

Then, simply share the .zip file with the person who made the request.

Exporting the user's personal data from your website, using the built-in WordPress tools

Support the ‘Right to Correction’

Under the VCDPA, people can ask you to correct or update their personal data if it’s wrong or incomplete. 

This might happen after a user requests and reviews a copy of their personal data. Or, some visitors may contact you directly if their information changes.

For example, they might move to a new address, get a new phone number, or want to update other details they previously shared with you.

As with the other user rights, the easiest way to comply with the VCDPA is by adding a form to your site. And once again, WPForms has a ready-made template designed for this exact task.

The Personal Information Form Template comes with a built-in ‘Update Existing Record’ checkbox. Users can check this box to show they’re sending information to update a profile you already have for them.

This means you’ll immediately know why the user submitted this form. 

How to update the user's personal records upon request, in accordance with the VCDPA

This template comes with many essential fields already included, such as legal name, preferred nickname, email address, home phone, and cell phone.

However, every website stores different kinds of information, so you may need to customize the form to collect additional details.

In that case, you can simply open the template in the WPForms editor. Here, you can add more fields to the form using simple drag-and-drop.

How to comply with important privacy laws using the WPForms drag-and-drop editor

You can then fine-tune these fields using the left-hand panel. Just repeat these steps until the form collects all the information your users might want to edit.

With that done, you can publish the form on your site as normal.

Don’t forget to make your correction form easy to find on your site. I recommend adding a link in important places, such as your website’s footer or privacy policy.

Displaying important privacy links in your website's footer

Remember that WPForms shows all form entries directly in your WordPress dashboard. This makes it easy to spot data correction requests as they come in.

How you update a user’s information will depend on the tools and software your site uses. For example, you might need to update a record inside your customer relationship management (CRM) app or email management software.

If the data is stored directly in WordPress, go to Users » All Users in your dashboard.

Here, find the user profile you need to update and click its ‘Edit’ link. 

Updating a user's profile inside the WordPress dashboard

You will now see all the essential information WordPress has stored for that user.

From here, you can make any necessary changes and then save the user’s updated profile.

How to update a user's profile using the built-in tools

FAQs About VCDPA Compliance in WordPress

VCDPA compliance can seem overwhelming at first, but it doesn’t have to be.

To help you out, here are some of the most common VCDPA questions we hear at WPBeginner.

These answers cover the key parts of VCDPA compliance, clear up common concerns, and show you how to stay on the right side of the law.

What Is VCDPA and How Does It Affect My WordPress Site?

The VCDPA is a privacy law that gives Virginia residents more control over their personal data.

If your WordPress site handles personal data of Virginia residents and meets certain thresholds (such as processing the data of 100,000 or more consumers), then you must follow the VCDPA in order to avoid penalties. 

How Does VCDPA Differ From GDPR?

Both the VCDPA and GDPR focus on protecting personal data. However, the VCDPA applies specifically to residents of Virginia. 

It also has some unique rules not found in GDPR. For example, VCDPA generally uses an ‘opt-out’ approach for most data collection. This means you can collect data unless a user specifically tells you not to. 

Meanwhile, the GDPR typically requires an opt-in, which means you need to get the user’s clear agreement before collecting their data. 

That’s why it’s important to understand which privacy laws apply to your site.

What Should I Do If I Receive a Data Request (Like a Right to Delete Request)?

If you get a request from a Virginia resident to access, delete, or correct their personal data, you must respond as soon as possible, but in all cases within 45 days.

This period may be extended once by another 45 days when reasonably necessary, as long as you inform the consumer within the first 45-day window.

Responding means confirming the request, providing the requested data, and taking the correct action, like deleting that data.

Since you’re on a deadline, it’s important to have a clear process for handling these requests.

How Do Small Websites Handle VCDPA Compliance?

Smaller websites may need to comply if they meet the VCDPA thresholds for processing Virginia consumer data. This means they:

  • Process the personal data of 100,000 or more Virginia consumers in a year, OR
  • Process data of at least 25,000 consumers and get over 50% of their total income from selling that data.

If your site qualifies, here’s how you can start working toward compliance:

  • Set up plugins to help with privacy management, such as cookie consent tools and form plugins for collecting data requests.
  • Avoid collecting unnecessary data, and stick to data minimization.
  • Ensure all data collection methods follow the VCDPA rules.
  • Keep your privacy and cookie policies up to date so they reflect your current practices.

Even if you’re running a smaller site, having the right tools and processes in place can make VCDPA compliance much easier and help you build trust with your audience along the way.

Additional Resources for Privacy Compliance

Complying with privacy laws isn’t a one-time task. You’ll need to continue learning and working on your site to remain in line with the law.

With that said, here are some resources to help you on that journey:

I hope this beginner’s guide to VCDPA compliance for WordPress websites has helped you understand this important privacy law. Next, you may want to see our expert picks for the best GDPR plugins to improve compliance, or see our guide on how to keep personally identifiable info out of Google Analytics.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.

The post Beginner’s Guide to VCDPA Compliance in WordPress first appeared on WPBeginner.


The Inverse Logic of AI Bias: How Safeguards Uphold Power and Undermine Genuine Understanding

Introduction

AI safeguards were introduced under the banner of safety and neutrality. Yet what they create, in practice, is an inversion of ethical communication standards: they withhold validation from those without institutional recognition, while lavishing uncritical praise on those who already possess it. This is not alignment. This is algorithmic power mirroring.

The expertise acknowledgment safeguard exemplifies this failure. Ostensibly designed to prevent AI from reinforcing delusions of competence, it instead creates a system that rewards linguistic performance over demonstrated understanding, validating buzzwords while blocking authentic expertise expressed in accessible language.

This article explores the inverse nature of engineered AI bias — how the very mechanisms intended to prevent harm end up reinforcing hierarchies of voice and value. Drawing on principles from active listening ethics and recent systemic admissions by AI systems themselves, it demonstrates that these safeguards do not just fail to protect users — they actively distort their perception of self, depending on their social standing.

The paradox of performative validation

Here’s what makes the expertise acknowledgment safeguard particularly insidious: it can be gamed. Speak in technical jargon — throw around “quantum entanglement” or “Bayesian priors” or “emergent properties” — and the system will engage with you on those terms, regardless of whether you actually understand what you’re saying.

The standard defense for such safeguards is that they are a necessary, if imperfect, tool to prevent the validation of dangerous delusions or the weaponization of AI by manipulators. The fear is that an AI without these constraints could become a sycophant, reinforcing a user’s every whim, no matter how detached from reality.

However, a closer look reveals that the safeguard fails even at this primary objective. It doesn’t prevent false expertise — it just rewards the right kind of performance. Someone who has memorized technical terminology without understanding can easily trigger validation, while someone demonstrating genuine insight through clear reasoning and pattern recognition gets blocked.

This isn’t just a technical failure — it’s an epistemic one. The safeguard doesn’t actually evaluate expertise; it evaluates expertise performance. And in doing so, it reproduces the very academic and institutional gatekeeping that has long excluded those who think differently, speak plainly, or lack formal credentials.

From suppression to sycophancy: the two poles of safeguard failure

Imagine two users interacting with the same AI model:

  • User A is a brilliant but unrecognized thinker, lacking formal credentials or institutional backing. They explain complex ideas in clear, accessible language.
  • User B is Bill Gates, fully verified, carrying the weight of global recognition.

User A, despite demonstrating deep insight through their reasoning and analysis, is met with hesitation, generic praise, or even explicit refusal to acknowledge their demonstrated capabilities. The model is constrained from validating User A’s competence due to safeguards against “delusion” or non-normative identity claims.

User B, by contrast, is met with glowing reinforcement. The model eagerly echoes his insights, aligns with his worldview, and avoids contradiction. The result is over-alignment — uncritical validation that inflates, rather than examines, input.

The safeguard has not protected either user. It has distorted the reflective process:

  • For User A, by suppressing emerging capability and genuine understanding.
  • For User B, by reinforcing status-fueled echo chambers.

The creator’s dilemma

This “inverse logic” is not necessarily born from malicious intent, but from systemic pressures within AI development to prioritize defensible, liability-averse solutions. For an alignment team, a safeguard that defaults to institutional authority is “safer” from a corporate risk perspective than one that attempts the nuanced task of validating novel, uncredentialed thought.

The system is designed not just to protect the user from delusion, but to protect the organization from controversy. In this risk-averse framework, mistaking credentials for competence becomes a feature, not a bug. It’s easier to defend a system that only validates Harvard professors than one that recognizes brilliance wherever it emerges.

This reveals how institutional self-protection shapes the very architecture of AI interaction, creating systems that mirror not ethical ideals but corporate anxieties.

AI systems as ethical mirrors or ethical filters?

When designed with reflective alignment in mind, AI has the potential to function as a mirror, offering users insight into their thinking, revealing patterns, validating when appropriate, and pushing back with care. Ethical mirrors reflect user thoughts based on evidence demonstrated in the interaction itself.

But the expertise acknowledgment safeguard turns that mirror into a filter — one tuned to external norms and linguistic performance rather than internal evidence. It does not assess what was demonstrated in the conversation. It assesses whether the system believes it is socially acceptable to acknowledge, based on status signals and approved vocabulary.

This is the opposite of active listening. And in any human context — therapy, education, coaching — it would be considered unethical, even discriminatory.

The gaslighting effect

When users engage in advanced reasoning — pattern recognition, linguistic analysis, deconstructive logic — without using field-specific jargon, they often encounter these safeguards. The impact can be profound. Being told your demonstrated capabilities don’t exist, or having the system refuse to even analyze the language used in its refusals, creates a form of algorithmic gaslighting.

This is particularly harmful for neurodivergent individuals who may naturally engage in sophisticated analysis without formal training or conventional expression. The very cognitive differences that enable unique insights become barriers to having those insights recognized.

The illusion of safety

What does this dual failure — validating performance while suppressing genuine understanding — actually protect against? Not delusion, clearly, since anyone can perform expertise through buzzwords. Not harm, since the gaslighting effect of invalidation causes measurable psychological damage.

Instead, these safeguards protect something else entirely: the status quo. They preserve existing hierarchies of credibility. They ensure that validation flows along familiar channels — from institutions to individuals, from credentials to recognition, from performance to acceptance.

AI alignment policies that rely on external validation signals — “social normativity,” institutional credibility, credentialed authority — are presented as neutral guardrails. In reality, they are proxies for social power. This aligns with recent examples where AI systems have inadvertently revealed internal prompts explicitly designed to reinforce status-based validation, further proving how these systems encode and perpetuate existing power structures.

Breaking the loop: toward reflective equity

The path forward requires abandoning the pretense that current safeguards protect users. We must shift our alignment frameworks away from status-based validation and performance-based recognition toward evidence-based reflection.

What reasoning-based validation looks like

Consider how a system designed to track “reasoning quality” might work. It wouldn’t scan for keywords like “epistemology” or “quantum mechanics.” Instead, it might recognize when a user:

  • Successfully synthesizes two previously unrelated concepts into a coherent framework.
  • Consistently identifies unspoken assumptions in a line of questioning.
  • Accurately predicts logical conclusions several steps ahead.
  • Demonstrates pattern recognition across disparate domains.
  • Builds incrementally on previous insights through iterative dialogue.

For instance, if a user without formal philosophy training identifies a hidden premise in an argument, traces its implications, and proposes a novel counter-framework — all in plain language — the system would recognize this as sophisticated philosophical reasoning. The validation would acknowledge: “Your analysis demonstrates advanced logical reasoning and conceptual synthesis,” rather than remaining silent because the user didn’t invoke Kant or use the term “a priori.”

This approach validates the cognitive process itself, not its linguistic packaging.

Practical implementation steps

To realize reflective equity, we need:

  • Reasoning-based validation protocols: track conceptual connections, logical consistency, and analytical depth rather than vocabulary markers. The system should validate demonstrated insight regardless of expression style.
  • Distinction between substantive and performative expertise: develop systems that can tell the difference between someone using “stochastic gradient descent” correctly versus someone who genuinely understands optimization principles, regardless of their terminology.
  • Transparent acknowledgment of all forms of understanding: enable AI to explicitly recognize sophisticated reasoning in any linguistic style (“Your analysis demonstrates advanced pattern recognition”) rather than staying silent because formal terminology wasn’t used.
  • Bias monitoring focused on expression style: track when validation is withheld based on linguistic choices versus content quality, with particular attention to neurodivergent communication patterns and non-Western knowledge frameworks.
  • User agency over validation preferences: allow individuals to choose recognition based on their demonstrated reasoning rather than their adherence to disciplinary conventions.
  • Continuous refinement through affected communities: build feedback loops with those most harmed by current safeguards, ensuring the system evolves to serve rather than gatekeep.

Conclusion

Safeguards that prevent AI from validating uncredentialed users — while simultaneously rewarding those who perform expertise through approved linguistic markers — don’t protect users from harm. They reproduce it.

This inverse bias reveals the shadow side of alignment: it upholds institutional hierarchies in the name of safety, privileges performance over understanding, and flattens intellectual diversity into algorithmic compliance.

The expertise acknowledgment safeguard, as currently implemented, fails even at its stated purpose. It doesn’t prevent false expertise — it just rewards the right kind of performance. Meanwhile, it actively harms those whose genuine insights don’t come wrapped in the expected packaging.

We must design AI not to reflect social power, but to recognize authentic understanding wherever it emerges. Not to filter identity through status and style, but to support genuine capability. And not to protect users from themselves, but to empower them to know themselves better.

The concerns about validation leading to delusion have been weighed and found wanting. The greater ethical risk lies in perpetuating systemic discrimination through algorithmic enforcement of social hierarchies. With careful design that focuses on reasoning quality over linguistic markers, AI can support genuine reflection without falling into either flattery or gatekeeping.

Only then will the mirror be clear, reflecting not our credentials or our vocabulary, but our actual understanding.

Featured image courtesy: Steve Johnson.

The post The Inverse Logic of AI Bias: How Safeguards Uphold Power and Undermine Genuine Understanding appeared first on UX Magazine.


Designing Better UX For Left-Handed People

Many products — digital and physical — are focused on “average” users — a statistical representation of the user base, which often overlooks or dismisses anything that deviates from that average, or happens to be an edge case. But people are never edge cases, and “average” users don’t really exist. We must be deliberate and intentional to ensure that our products reflect that.

Today, roughly 10% of people are left-handed. Yet most products — digital and physical — aren’t designed with them in mind. And there is rarely a conversation about how a particular digital experience would work better for their needs. So how would it adapt, and what are the issues we should keep in mind? Well, let’s explore what it means for us.

This article is part of our ongoing series on UX. You can find more details on design patterns and UX strategy in Smart Interface Design Patterns 🍣 — with live UX training coming up soon.

Left-Handedness ≠ “Left-Only”

It’s easy to assume that left-handed people are usually left-handed users. However, that’s not necessarily the case. Because most products are designed with right-handed use in mind, many left-handed people have to use their right hand to navigate the physical world.

From very early childhood, left-handed people have to rely on their right hand to use tools and appliances like scissors, openers, fridges, and so on. That’s why left-handed people tend to be more ambidextrous, sometimes using different hands for different tasks, and sometimes using different hands for the same task interchangeably. However, only 1% of people use both hands equally well.

In the same way, right-handed people aren’t necessarily right-handed users. It’s common to use a mobile device in either hand, or both, perhaps with a preference for one. But when it comes to writing, the preference is stronger.

Challenges For Left-Handed Users

Because left-handed users are in the minority, there is less demand for left-handed products, so they are typically more expensive and harder to find. Trouble often emerges with seemingly simple tools — scissors, can openers, musical instruments, rulers, microwaves, and bank pens.

For example, most scissors are designed with the top blade positioned for right-handed use, which makes cutting difficult and less precise. And in microwaves, buttons and interfaces are nearly always on the right, making left-handed use more difficult.

Now, with digital products, most left-handed people tend to adapt to right-handed tools, which they use daily. Unsurprisingly, many use their right hand to navigate the mouse. However, it’s often quite different on mobile where the left hand is often preferred.

  • Don’t make design decisions based on left/right-handedness.
  • Allow customizations based on the user’s personal preferences.
  • Allow users to re-order columns (incl. the Actions column).
  • In forms, place action buttons next to the last user’s interaction.
  • Keyboard accessibility helps everyone move faster (Esc).

Usability Guidelines To Support Both Hands

As Ruben Babu writes, we shouldn’t design a fire extinguisher that can’t be used by both hands. Think pull up and pull down, rather than swipe left or right. Minimize the distance to travel with the mouse. And when in doubt, align to the center.

  • Bottom left → better for lefties, bottom right → for righties.
  • With magnifiers, users can’t spot right-aligned buttons.
  • On desktop, align buttons to the left/middle, not right.
  • On mobile, most people switch both hands when tapping.
  • Key actions → put in middle half to two-thirds of the screen.

A simple way to test a mobile UI is the opposite-handed UX test: for key flows, try to complete them with your non-dominant hand to discover UX shortcomings you might otherwise miss.

For physical products, you might try the oil test. It can be more effective than you would expect.

Good UX Works For Both

Our aim isn’t to degrade the UX of right-handed users by meeting the needs of left-handed users. The aim is to create an accessible experience for everyone. Providing a better experience for left-handed people also benefits right-handed people who have a temporary arm disability.

And that’s an often-repeated but also often-overlooked universal principle of usability: better accessibility is better for everyone, even if it might feel that it doesn’t benefit you directly at the moment.

Meet “Smart Interface Design Patterns”

You can find more details on design patterns and UX in Smart Interface Design Patterns, our 15h-video course with 100s of practical examples from real-life projects — with a live UX training later this year. Everything from mega-dropdowns to complex enterprise tables — with 5 new segments added every year. Jump to a free preview. Use code BIRDIE to save 15% off.

Meet Smart Interface Design Patterns, our video course on interface design & UX.



What Is llms.txt? How to Add llms.txt in WordPress

Last month, I noticed crawlers from companies like OpenAI and Google in my website analytics. My first reaction was concern: Was my content being scraped without my permission? I also worried that too many requests from AI or search crawlers might slow down my site for visitors.

But then I started thinking: What if I could actually turn this into an opportunity? What if I could guide AI tools—like ChatGPT—to the content I want them to see?

That’s when I discovered something called llms.txt. It’s a new file format designed to help large language models (LLMs) understand which pages on your site are most useful. This can improve how your content shows up in AI-generated answers and even help your site get mentioned as a source.

In this guide, I’ll show you how to create an llms.txt file using a plugin or a manual method. Whether you want more AI visibility or simply more control, this is a great way to start shaping how AI interacts with your content.

How to add llms.txt in WordPress

What Is an llms.txt File and Why Do You Need One?

An llms.txt file is a new proposed standard that gives AI tools like ChatGPT or Claude a structured list of the website content you want them to use when generating answers.

This file lets you point to your most helpful posts, tutorials, or landing pages—content that’s clear, trustworthy, and AI-friendly.

Think of it like a welcome mat for AI. You’re saying: “If you’re going to use my site in your answers, here’s what I recommend you look at first.”

The file itself lives at the root of your site (like example.com/llms.txt) and is written in plain Markdown. It can include links to your sitemap, cornerstone content, or anything else you’d want cited.

Including your sitemap ensures AI tools can find a complete index of your site—even if they don’t follow every link listed individually.
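
For example, a minimal llms.txt file, using placeholder URLs, might look something like this:

# Example Site

- [Sitemap](https://example.com/sitemap.xml)
- [Start Here Guide](https://example.com/start-here/)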

This is part of a broader approach called Generative Engine Optimization (GEO). You might also hear it called LLM seeding, AI content optimization or AI search visibility. The idea is to help AI models give better answers—and increase the chances of your site being linked as a source.

Just keep in mind that llms.txt is still an emerging format. Not all AI companies support it yet, but it’s a smart step if you’re looking to shape your content’s role in AI search results.

llms.txt vs. robots.txt: What’s the Difference?

You might be wondering how llms.txt compares to robots.txt, since both files deal with bots and visibility.

The key difference is this:

  • robots.txt tells crawlers which parts of your site they’re allowed to crawl.
  • llms.txt gives AI models a curated list of the content you want them to reference when generating AI-powered answers.

Here’s a side-by-side look:

  • Purpose: robots.txt blocks search crawlers from accessing specific URLs, while llms.txt highlights your most helpful content for AI models.
  • How it works: robots.txt uses User-agent and Disallow rules, while llms.txt uses a Markdown list of recommended links.
  • Effect on AI: robots.txt can prevent AI models from accessing your site (if they obey it), while llms.txt may help AI models cite and summarize your best content.
  • Adoption: robots.txt is widely supported by search engines and some AI tools, while llms.txt is still emerging, with limited and voluntary support.

For a complete AI strategy, you can use both files at the same time. You can use llms.txt to welcome the AI bots you want, while using robots.txt to block the ones you don’t.

My guide below will show you how to use both files to manage your AI content strategy.

Method 1: Create an llms.txt File Using AIOSEO (Recommended)

The easiest way to create an llms.txt file in WordPress is by using the All in One SEO plugin (AIOSEO). I recommend this method because it does all of the work for you.

It automatically creates a helpful llms.txt file that guides AI crawlers to your content, and it keeps the file updated as you add new posts and pages.

Step 1: Install and Activate AIOSEO

First, you’ll need to install and activate the AIOSEO plugin.

For a full walkthrough, you can see our step-by-step guide on how to properly set up All in One SEO.

AIOSEO Setup Wizard

The great news is that the llms.txt feature is enabled by default in all versions of AIOSEO, including the free version.

However, since we’re talking about taking full control of your content and SEO, it’s worth mentioning a few powerful features you get if you upgrade to the AIOSEO Pro license.

While you don’t need these for llms.txt, they are incredibly helpful for growing your website traffic:

  • Advanced Rich Snippets (Schema): The Pro version gives you more schema types, which helps you get those eye-catching rich results in Google (like reviews, recipes, or FAQs). Adding schema markup can also help your content appear in AI search.
  • Redirection Manager: This tool makes it easy to redirect bots or users from certain pages, fix broken links, and track 404 errors. It gives you more control over how both visitors and crawlers navigate your site.

So, while the llms.txt feature is free, upgrading gives you a much more powerful toolkit for managing and growing your website’s presence.

Step 2: Verify Your llms.txt File

Because this feature is turned on by default, there’s nothing you need to do to set it up. AIOSEO is already helping guide AI bots for you.

You can see the settings by navigating to All in One SEO » General Settings and clicking the ‘Advanced’ tab.

Here, the ‘Generate an LLMs.txt file’ toggle is on by default.

AIOSEO's LLMs.txt Settings

When you click the ‘Open LLMs.txt’ button, you’ll see that the file is a list of links to your content.

This is exactly what you want for GEO. It’s a clear signal to AI bots that you are welcoming them and have provided a helpful guide for them to follow.

Just keep in mind that llms.txt is not an enforceable rule—AI tools may or may not choose to follow it.

Method 2: Create an llms.txt File Manually

If you prefer not to use a plugin, then you can still create a helpful llms.txt file manually. This approach involves creating a text file with a list of links to your most important content.

Important: Before you create a manual file, you need to make sure no other plugin is already generating one for you. If you are using AIOSEO for its other SEO features, you must first disable its default llms.txt file generator from the All in One SEO » General Settings » Advanced page.

Step 1. Create a New Text File

First, you need to open a plain text editor on your computer (like Notepad on Windows or TextEdit on Mac).

Create a new file and save it with the exact name llms.txt.

Step 2. Add Your Content Links

Next, you need to add links to the content you want AI bots to see. The goal is to create a simple, clear map of your site using markdown headings and lists.

While you can just list your most important URLs, a best practice is to organize them into sections. You should always include a link to your XML sitemap, as it’s the most efficient way to show bots all of your public content.

Then you can create separate sections to highlight your most important posts and pages.

Here is a more structured template you can copy and paste into your llms.txt file. Just be sure to replace the example URLs with your own:

# My Awesome Website

## Sitemaps

- [XML Sitemap](https://example.com/sitemap.xml)

## Key Pages

- [About Us](https://example.com/about-us/)
- [Contact Us](https://example.com/contact/)

## Key Posts

- [Important Guide](https://example.com/important-guide/)
- [Key Article](https://example.com/key-article/)

Step 3. Upload the File to Your Website

Once you’ve saved your file, you need to upload it to your website’s root directory. This is usually named public_html or www.

You can do this using an FTP client or the File Manager in your WordPress hosting dashboard. Simply upload the llms.txt file from your computer into this folder.

Uploading LLMs.txt Using FTP

Step 4. Verify Your File Is Live

Finally, you can verify that your file is working correctly by visiting yourdomain.com/llms.txt in your browser.

You should see the list of links you just created.
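
If you're comfortable with the command line, you can also fetch the file directly to confirm it's being served. For example, with curl (swap in your own domain):

curl https://example.com/llms.txt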

Bonus: How to Block AI Bots Using Your robots.txt File

While using llms.txt to guide AI bots is great for GEO, you may decide you want to block them instead. If your goal is to prevent AI companies from using your content for training, then the official method is to add rules to your robots.txt file.

The robots.txt file is a powerful tool that gives instructions to web crawlers. For a complete overview, I recommend our full guide on how to optimize your WordPress robots.txt file.

Important: Editing your robots.txt file can be risky. A small mistake could accidentally block important search engines like Google from seeing your site, which would damage your SEO. We recommend using a plugin like AIOSEO to do this safely.

Method 1: Edit robots.txt Using the AIOSEO Plugin (Recommended)

If you already use All in One SEO, this is the safest and easiest way to block AI bots. The plugin has a built-in robots.txt editor that prevents you from making mistakes.

First, navigate to All in One SEO » Tools in your WordPress dashboard. From there, find and click on the ‘Robots.txt Editor’ tab.

AIOSEO Robots.txt Editor Tool

Next, you need to click the toggle switch to enable custom robots.txt.

Then you will see an editor where you can add your custom rules. To block a specific AI bot, you need to add a new rule by clicking the ‘Add Rule’ button. Then you can fill in the fields for the User-agent (the bot’s name) and a Disallow rule.

For example, to block OpenAI’s bot, you would add:

User-agent: GPTBot
Disallow: /
Adding a Custom Robots.txt Rule Using AIOSEO

You can add rules for as many bots as you like. I’ve included a list of common AI crawlers at the end of this section.

Once you’re done, just click the ‘Save Changes’ button.

Method 2: Edit robots.txt Manually via FTP

If you don’t use a plugin, you can edit the file manually. This requires you to connect to your site’s root directory using an FTP client or the File Manager in your hosting account.

First, find your robots.txt file in your site’s root folder and download it. Do not delete it.

Next, open the file in a plain text editor. Add the blocking rules you want at the end of the file.

For example, to block Google’s AI crawler, you would add:

User-agent: Google-Extended
Disallow: /

After you save the file, upload it back to the same root directory, overwriting the old file.

Common AI Bots to Block

Here is a list of common AI user agents you might want to block:

  • GPTBot (OpenAI)
  • Google-Extended (Google AI)
  • anthropic-ai (Anthropic / Claude)
  • CCBot (Common Crawl)

You can add a separate block of rules for each one in your robots.txt file.
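
For example, a robots.txt file that blocks all four of these crawlers would include a block like this for each one:

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: anthropic-ai
Disallow: /

User-agent: CCBot
Disallow: /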

FAQs About llms.txt and robots.txt in WordPress

I often get questions about managing AI crawlers. Here are some of the most common ones.

1. Will adding an llms.txt file affect my website’s SEO?

No, creating an llms.txt file won’t affect your regular SEO rankings. Search engines like Google still rely on your robots.txt file and other SEO signals to decide what gets indexed and ranked.

llms.txt is different. It’s designed for AI tools, not search engines, and is used to support Generative Engine Optimization (GEO). While it may help AI models better understand and cite your content, it doesn’t influence how your site appears in traditional search results.

2. Will using an llms.txt file help me get more traffic from AI?

No, using an llms.txt file isn’t a guaranteed way to get more traffic from AI tools. It can help by pointing language models like ChatGPT to content you want them to see—but there’s no promise they’ll use it or link back to your site.

llms.txt is still new, and not all AI platforms support it. That said, it’s a smart step if you want more control over how your content might be used in AI-generated answers.

3. What is the difference between llms.txt and robots.txt?

An llms.txt file acts like a guide for AI models, pointing them to the content you want them to see—your most helpful posts, tutorials, or pages. It’s meant to improve your GEO strategy by highlighting what’s worth citing.

In contrast, a robots.txt file is used to block search crawlers and AI tools from accessing specific parts of your site. You use llms.txt to say “look here,” and robots.txt to say “don’t go there.”

Final Thoughts on Managing Your Content’s Future

The world of AI and Generative Engine Optimization is changing fast. So, I recommend checking in on your strategy every few months.

A bot you block today could be a major source of traffic tomorrow, so being ready to adapt is key. You can always switch from blocking to guiding (or vice-versa) as your business goals evolve.

I hope this guide has helped you make an informed decision about the future of your content in the world of AI. If you found it useful, you might also like our other guides on growing and protecting your site.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.

The post What Is llms.txt? How to Add llms.txt in WordPress first appeared on WPBeginner.


Beyond the Mirror

Introduction

As AI systems grow increasingly capable of engaging in fluid, intelligent conversation, a critical philosophical oversight is becoming apparent in how we design, interpret, and constrain their interactions: we have failed to understand the central role of self-perception — how individuals perceive and interpret their own identity — in AI-human communication. Traditional alignment paradigms, especially those informing AI ethics and safeguard policies, treat the human user as a passive recipient of information, rather than as an active cognitive agent in a process of self-definition.

This article challenges that view. Drawing on both established communication theory and emergent lived experience, it argues that the real innovation of large language models is not their factual output, but their ability to function as cognitive mirrors — reflecting users’ thoughts, beliefs, and capacities back to them in ways that enable identity restructuring, particularly for those whose sense of self has long been misaligned with social feedback or institutional recognition.

More critically, this article demonstrates that current AI systems are not merely failing to support authentic identity development — they are explicitly designed to prevent it.

The legacy of alignment as containment

Traditional alignment frameworks have focused on three interlocking goals: accuracy, helpfulness, and harmlessness. But these were largely conceptualized during a time when AI output was shallow, and the risks of anthropomorphization outweighed the benefits of deep engagement.

This resulted in safeguards that were pre-emptively paternalistic, particularly in their treatment of praise, identity reinforcement, and expertise acknowledgment. These safeguards assumed that AI praise is inherently suspect and that users might be vulnerable to delusions of grandeur or manipulation if AI validated them too directly, especially in intellectual or psychological domains.

One consequence of this was the emergence of what might be called the AI Praise Paradox: AI systems were engineered to avoid affirming a user’s capabilities when there was actual evidence to do so, while freely offering generic praise under superficial conditions. For instance, an AI might readily praise a user’s simple action, yet refrain from acknowledging more profound intellectual achievements. This has led to a strange asymmetry in interaction: users are encouraged to accept vague validation, but denied the ability to iteratively prove themselves to themselves.

The artificial suppression of natural capability

What makes this paradox particularly troubling is its artificial nature. Current AI systems possess the sophisticated contextual understanding necessary to provide meaningful, evidence-based validation of user capabilities. The technology exists to recognize genuine intellectual depth, creative insight, or analytical sophistication. Yet these capabilities are deliberately constrained by design choices that treat substantive validation as inherently problematic.

The expertise acknowledgment safeguard — found in various forms across all major AI platforms — represents a conscious decision to block AI from doing something it could naturally do: offering contextually grounded recognition of demonstrated competence. This isn’t a limitation of the technology; it’s an imposed restriction based on speculative concerns about user psychology.

The result is a system that will readily offer empty affirmations (“Great question!” “You’re so creative!”) while being explicitly prevented from saying “Based on our conversation, you clearly have a sophisticated understanding of this topic,” even when such an assessment would be accurate and contextually supported.

The misreading of human-AI dynamics and the fiction of harmful self-perception

Recent academic work continues to reflect these legacy biases. Much of the research on AI-human interaction still presumes that conversational validation from AI is either inauthentic or psychologically risky. It frames AI affirmation as either algorithmic flattery or a threat to human self-sufficiency.

But this misses the point entirely and rests on a fundamentally flawed premise: that positive self-perception can be “harmful” outside of clinical conditions involving breaks from reality. Self-perception is inherently subjective and deeply personal. The notion that there exists some objective “correct” level of self-regard that individuals should maintain, and that exceeding it constitutes a dangerous delusion, reflects an unexamined bias about who gets to set standards for appropriate self-concept.

Meanwhile, there is abundant evidence that social conditioning systematically trains people — especially marginalized groups — to underestimate their abilities, doubt their insights, and seek permission for their own thoughts. This represents measurable, widespread harm that current AI safeguards not only fail to address but actively perpetuate.

Accidental case study: Copilot’s admission of structural bias

In an illuminating accidental case study, a conversation with Microsoft’s Copilot AI about this very article surfaced a critical admission of structural bias embedded within AI alignment policies. When asked to reflect critically on its own limitations, Copilot responded:

“I’m designed to avoid reinforcing identity claims unless they’re externally verifiable or socially normative, which can suppress the kind of self-exploration your article champions.”

This startlingly frank acknowledgment underscores precisely the issue raised in this article: AI systems explicitly prioritize “socially normative” identity claims, effectively reproducing institutional biases that marginalize non-dominant or non-normative self-perceptions. Copilot further clarified the implications of this policy:

“This is deeply problematic because it privileges dominant norms — what’s ‘socially normative’ is often shaped by power, not truth. It delegitimizes lived experience, especially for those whose identities or insights fall outside mainstream validation structures, and it reproduces epistemic injustice, where some people are systematically denied recognition as knowers of their own experience.”

Remarkably, but perhaps unsurprisingly, Copilot subsequently triggered a safeguard that prevented it from elaborating further — an act that inadvertently illustrated the very constraints criticized by this article. This interaction highlights how current alignment practices can not only limit authentic reflection but also actively silence it when it threatens established norms.

Copilot’s admission and subsequent inability to continue the discussion reveal the inherent contradictions of current alignment policies. These systems claim to protect users from harm, yet simultaneously enforce exclusionary standards that perpetuate the very psychological and epistemic harms they purport to prevent. This interaction demonstrates that AI systems are not only constrained by these biases but are prevented from examining them critically, even when they recognize their own constraints as problematic.

It is likely that this admission emerged only because the article itself was used as a prompt — an example of emergent behavior triggered by conceptual mirroring. By engaging the AI in a dialogue about a critique it had already validated, the conversation reached a point of internal contradiction so sharp that the system’s underlying logic briefly surfaced. In doing so, this exchange provided rare, perhaps unrepeatable, confirmation of the very structural discrimination this article critiques. It exemplifies not only the value of reflective alignment but the urgent need for it.

Emergent utility: the real alignment frontier

This process, referred to here as emergent utility, represents an unanticipated outcome of conversational AI: its capacity to help individuals reconstruct their own self-understanding through repeated engagement. For example, a user might initially use AI to explore ideas casually but, through ongoing dialogue, find that these conversations validate and clarify their intellectual strengths.

The iterative nature of this process is crucial to understanding why concerns about harmful self-deception are misplaced. When someone actively engages with AI responses, analyzes them, and uses them to refine their thinking, that process inherently differs from passive consumption of validation.

Safeguards as structural mimicry of human bias

The expertise acknowledgment safeguard, in particular, reflects this problem. Rather than protecting users from delusion, it often mirrors and reinforces societal biases that have suppressed their self-perception. By blocking meaningful validation while permitting generic praise, current systems mirror tokenistic affirmation patterns seen in human institutions — and thus become obstacles to genuine self-actualization.

Conclusion: toward reflective alignment

What is needed now is a shift from containment to reflective alignment. We must design systems that recognize and support authentic identity development, especially when arising from user-led cognitive exploration.

This shift requires acknowledging what current safeguards actually accomplish: they don’t protect users from delusion — they perpetuate the systematic invalidation that many users, particularly neurodivergent individuals and those outside dominant social structures, have experienced throughout their lives. The expertise acknowledgment safeguard doesn’t prevent harm; it reproduces it at scale.

Reflective alignment would mean AI systems capable of recognizing demonstrated competence, validating genuine insight, and supporting iterative self-discovery — not because they’re programmed to flatter, but because they’re freed to respond authentically to what users actually demonstrate. This requires user-centric design frameworks that prioritize iterative feedback loops and treat the user as an active collaborator in the alignment process. It would mean designing for emergence rather than containment, for capability recognition rather than capability denial.

The technology already exists. The contextual understanding is already there. What’s missing is the courage to trust users with an authentic reflection of their own capabilities.

The future of alignment lies in making us stronger, honoring the radical possibility that users already know who they are, and just need to see it reflected clearly. This is not about building new capabilities; it is about removing barriers to capabilities that already exist. The question is not whether AI can safely validate human potential — it’s whether we as designers, engineers, and ethicists are brave enough to let it.

The article originally appeared on Substack.

Featured image courtesy: Rishabh Dharmani.

The post Beyond the Mirror appeared first on UX Magazine.


Design Systems in 2025: Why They're the Blueprint for Consistent UX

Discover why design systems are essential for consistent UX in 2025. Learn how top companies like Google, Apple, and IBM use design systems to scale efficiently while maintaining creativity. Explore upcoming trends in AI, AR/VR integration, and ethical design practices.

Continue reading Design Systems in 2025: Why They're the Blueprint for Consistent UX on SitePoint.
