Tech News
Form Automation Tips for Happier Users and Clients
I deployed a contact form last month that, in my opinion, was well executed. It had all the right semantics, seamless validation, and great keyboard support. You know, all of the features you’d want in your portfolio.
But… a mere two weeks after deployment, my client called: “We lost a referral because it was sitting in your inbox over the weekend.”
The form worked perfectly. The workflow didn’t.
The Problem Nobody Talks About
That gap between “the form works” and “the business works” is something we don’t really tend to discuss much as front-enders. We focus a great deal on user experience, validation methods, and accessibility, yet we overlook what the data does once it leaves our control. That is exactly where things start to fall apart in the real world.
Here’s what I learned from that experience that would have made for a much better form component.
Why “Send Email on Submit” Fails
The pattern we all use looks something like this:
```javascript
fetch('/api/contact', {
  method: 'POST',
  body: JSON.stringify(formData)
});
// Email gets sent and we call it done
```

I have seen duplicate submissions cause confusion, specifically when working with CRM systems like Salesforce. I have encountered inconsistent formatting that hinders automated imports, and weekend inquiries that sat unnoticed until Monday morning. I have debugged quotes where copying and pasting lost decimal places. There have also been “required” fields for which “required” was simply a misleading label.
I had an epiphany: the reality was that having a working form was just the starting line, not the end. The fact is that the email is not a notification; rather, it’s a handoff. If it’s treated merely as a notification, our own inbox becomes the bottleneck. In fact, Litmus’s 2025 State of Email Marketing Report (sign-up required) found that inbox-based workflows result in lagging follow-ups, particularly for sales teams that rely on lead generation.
Designing Forms for Automation
The bottom line is that front-end decisions directly influence back-end automation. According to recent research from HubSpot, data at the front-end stage (i.e., the user interaction) makes or breaks what comes next.
These are the practical design decisions that changed how I build forms:
Required vs. Optional Fields
Ask yourself: What does the business rely on the data for? Are phone calls the primary method for following up with a new lead? Then make that field required. Is the lead’s professional title crucial context for following up? If not, make it optional. This takes some interpersonal collaboration before we even begin marking up code.
For example, I made an incorrect assumption that a phone number field was an optional piece of information, but the CRM required it. The result? My submissions were invalidated and the CRM flat-out rejected them.
Now I know to drive my coding decisions from a business process perspective, not just my assumptions about what the user experience ought to be.
Normalize Data Early
Does the data need to be formatted in a specific way once it’s submitted? It’s a good idea to ensure that some data, like phone numbers, is formatted consistently so that the person on the receiving end has an easier time scanning the information. The same goes for trimming whitespace and title casing.
Why? Downstream tools are dumb. They are utterly unable to make the correlation that “John Wick” and “john wick” are related submissions. I once watched a client manually clean up 200 CRM entries because inconsistent casing had created duplicate records. That’s the kind of pain that five minutes of front-end code prevents.
Prevent Duplicate Entries From the Front End
Something as simple as disabling the Submit button on click can save the headache of sifting through duplicate submissions. Show clear submission states, like a loading indicator, so the user knows an action is being processed. Store a flag that a submission is in progress.
Why? Duplicate CRM entries cost real money to clean up. Impatient users on slow networks will absolutely click that button multiple times if you let them.
Success and Error States That Matter
What should the user know once the form is submitted? I think it’s super common to do some sort of default “Thanks!” on a successful submission, but how much context does that really provide? Where did the submission go? When will the team follow up? Are there resources to check out in the meantime? That’s all valuable context that not only sets expectations for the lead, but gives the team a leg up when following up.
Error messages should help the business, too. Like, if we’re dealing with a duplicate submission, it’s way more helpful to say something like, “This email is already in our system” than some generic “Something went wrong” message.
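As a sketch of that idea (the error codes and message copy here are hypothetical, not from any particular CRM or API), a small lookup keeps those business-aware messages in one place:

```javascript
// Hypothetical mapping of server error codes to user-facing messages.
// The codes and copy are illustrative, not tied to a specific backend.
const ERROR_MESSAGES = {
  duplicate_email: "This email is already in our system; we'll follow up on your earlier message.",
  invalid_phone: "That phone number doesn't look right. Please use at least 10 digits.",
  default: "Something went wrong. Please try again or email us directly."
};

function messageForError(code) {
  return ERROR_MESSAGES[code] || ERROR_MESSAGES.default;
}
```

The win is that a duplicate submission now tells the lead something actionable instead of a dead-end generic error.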
A Better Workflow
So, how exactly would I approach form automation next time? Here are the crucial things I missed last time that I’ll be sure to hit in the future.
Better Validation Before Submission
Instead of simply checking if fields exist:
```javascript
const isValid = email && name && message;
```

Check if they’re actually usable:
```javascript
function validateForAutomation(data) {
  return {
    email: /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(data.email),
    name: data.name.trim().length >= 2,
    phone: !data.phone || /^\d{10,}$/.test(data.phone.replace(/\D/g, ''))
  };
}
```

Why this matters: CRMs will reject malformed emails. Your error handling should catch this before the user clicks submit, not after they’ve waited two seconds for a server response.
At the same time, it’s worth noting that the phone validation here covers common cases, but is not bulletproof for things like international formats. For production use, consider a library like libphonenumber for comprehensive validation.
Consistent Formatting
Format things before the form sends them rather than assuming they’ll be handled on the back end:
```javascript
function normalizeFormData(data) {
  return {
    name: data.name.trim()
      .split(' ')
      .map(word => word.charAt(0).toUpperCase() + word.slice(1).toLowerCase())
      .join(' '),
    email: data.email.trim().toLowerCase(),
    phone: data.phone.replace(/\D/g, ''), // Strip to digits
    message: data.message.trim()
  };
}
```

Why I do this: Again, I’ve seen a client manually fix over 200 CRM entries because “JOHN SMITH” and “john smith” created duplicate records. This takes five minutes to write and saves hours downstream.
There’s a caveat to this specific approach. This name-splitting logic will trip up on single names, hyphenated surnames, and edge cases like “McDonald” or names with multiple spaces. If you need rock-solid name handling, consider asking for separate first name and last name fields instead.
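Another middle ground, sketched below under the assumption that mixed-case input was typed intentionally: only normalize casing when the name arrives in all caps or all lowercase. The function name is my own, not from the demo above:

```javascript
// Only normalize casing when the input looks unintentional
// (all caps or all lowercase); leave mixed case like "McDonald" alone.
function safeTitleCase(name) {
  const trimmed = name.trim().replace(/\s+/g, ' '); // collapse extra spaces
  const isAllCaps = trimmed === trimmed.toUpperCase();
  const isAllLower = trimmed === trimmed.toLowerCase();
  if (!isAllCaps && !isAllLower) return trimmed; // respect "Ronald McDonald"
  return trimmed
    .split(' ')
    .map(word => word.charAt(0).toUpperCase() + word.slice(1).toLowerCase())
    .join(' ');
}
```

It still won’t cover every naming convention in the world, which is why separate first/last fields remain the safer bet when the CRM needs them.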
Prevent Double Submissions
We can do that by disabling the Submit button on click:
```javascript
let submitting = false;

async function handleSubmit(e) {
  e.preventDefault();
  if (submitting) return;
  submitting = true;

  const button = e.target.querySelector('button[type="submit"]');
  button.disabled = true;
  button.textContent = 'Sending...';

  try {
    await sendFormData();
    // Success handling
  } catch (error) {
    submitting = false; // Allow retry on error
    button.disabled = false;
    button.textContent = 'Send Message';
  }
}
```

Why this pattern works: Impatient users double-click. Slow networks make them click again. Without this guard, you’re creating duplicate leads that cost real money to clean up.
Structuring Data for Automation
Instead of this:
```javascript
const formData = new FormData(form);
```
```javascript
const structuredData = {
  contact: {
    firstName: formData.get('name').split(' ')[0],
    lastName: formData.get('name').split(' ').slice(1).join(' '),
    email: formData.get('email'),
    phone: formData.get('phone')
  },
  inquiry: {
    message: formData.get('message'),
    source: 'website_contact_form',
    timestamp: new Date().toISOString(),
    urgency: formData.get('urgent') ? 'high' : 'normal'
  }
};
```

Why structured data matters: Tools like Zapier, Make, and even custom webhooks expect it. When you send a flat object, someone has to write logic to parse it. When you send it pre-structured, automation “just works.” This mirrors Zapier’s own recommendations for building more reliable, maintainable workflows rather than fragile single-step “simple zaps.”
Watch How Zapier Works (YouTube) to see what happens after your form submits.
Care About What Happens After Submit
An ideal flow would be:
- User submits form
- Data arrives at your endpoint (or form service)
- A CRM contact is created automatically
- A Slack/Discord notification is sent to the sales team
- A follow-up sequence is triggered
- Data is logged in a spreadsheet for reporting
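The flow above could be sketched as a single fan-out handler. Everything here is hypothetical (the endpoint URLs, the payload shapes, even which tools you notify); the point is that one submission triggers several downstream steps, and failures get logged instead of disappearing:

```javascript
// Hypothetical fan-out handler. The URLs and payload shapes are
// placeholders, not a real CRM or Slack API.
async function handleSubmission(structuredData) {
  const results = await Promise.allSettled([
    // Create the CRM contact
    fetch('https://example.com/crm/contacts', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(structuredData.contact),
    }),
    // Notify the sales team
    fetch('https://example.com/slack-webhook', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ text: `New lead: ${structuredData.contact.email}` }),
    }),
  ]);

  // Log any step that failed so a silent drop doesn't go unnoticed
  results.forEach((result, i) => {
    if (result.status === 'rejected') console.error(`Step ${i} failed:`, result.reason);
  });

  return results.every((result) => result.status === 'fulfilled');
}
```

`Promise.allSettled()` (rather than `Promise.all()`) means one failing step never blocks the others, which matters when the Slack notification is nice-to-have but the CRM record is essential.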
Your choices for the front end make this possible:
- Consistency in formatting = Successful imports in CRM
- Structured data = Can be automatically populated using automation tools
- De-duplication = No messy cleanup tasks required
- Validation = Fewer “invalid entry” errors
Actual experience from my own work: After re-structuring a lead quote form, my client’s automated quote success rate increased from 60% to 98%. The change? Instead of sending { "amount": "$1,500.00"}, I now send { "amount": 1500}. Their Zapier integration couldn’t parse the currency symbol.
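A tiny helper along those lines might look like this. It assumes US-style currency strings; internationalized input would need more care:

```javascript
// Strip currency formatting client-side so downstream automation
// receives a plain number. Sketch only; assumes input like "$1,500.00".
function toNumericAmount(input) {
  const cleaned = String(input).replace(/[^0-9.-]/g, ''); // drop $, commas, spaces
  const amount = Number.parseFloat(cleaned);
  return Number.isNaN(amount) ? null : amount;
}
```

Returning `null` for unparseable input (instead of `NaN`) gives the automation layer an unambiguous “reject this” signal.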
My Set of Best Practices for Form Submissions
These lessons have taught me the following about form design:
- Ask about the workflow early. “What happens after someone fills this out?” needs to be the very first question to ask. This surfaces exactly what really needs to go where, what data needs to come in a specific format, and which integrations to use.
- Test with real data. I fill out my own forms with extraneous spaces, oddly formatted phone numbers, and badly cased names. You might be surprised by the number of edge cases that surface when you input “JOHN SMITH ” instead of “John Smith.”
- Add a timestamp and source. It makes sense to design these into the system even when they don’t seem necessary. Six months from now, it’s going to be helpful to know when a submission came in and where it came from.
- Make it redundant. Trigger both an email and a webhook. Email often fails silently, and you won’t realize it until someone asks, “Did you get that message we sent you?”
- Over-communicate success. Setting the lead’s expectations is crucial to a more delightful experience. “Your message has been sent. Sarah from sales will answer within 24 hours.” is much better than a plain old “Success!”
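The timestamp-and-source tip above can be sketched in a few lines. The field names are illustrative, not a required schema:

```javascript
// Attach submission metadata so reporting works six months from now.
// Field names here are illustrative, not a required schema.
function withMetadata(data, source = 'website_contact_form') {
  return {
    ...data,
    source,
    submittedAt: new Date().toISOString(), // when the submission came in
  };
}
```

Because the metadata is merged in one place, every form on the site stamps submissions the same way, which keeps reporting queries simple.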
This is what I now advise other developers: “Your job doesn’t stop when a form posts without errors. Your job isn’t done until you have confidence that your business can act upon this form submission.”
That means:
- No “copy paste” allowed
- No “I’ll check my email later”
- No duplicate entries to clean up
- No formatting fixes needed
The code itself is not all that difficult. The switch in attitude comes from understanding that a form is actually part of a larger system and not a standalone object. Once you think about forms this way, you think differently about them in terms of planning, validation, and data.
The next time you’re putting together a form, ask yourself: What happens when this data goes out of my hands? Answering that question makes you a better front-end developer.
The following CodePen demo is a side-by-side comparison of a standard form versus an automation-ready form. Both look identical to users, but the console output shows the dramatic difference in data quality.
CodePen Embed Fallback

References & Further Reading
- “2025 State of Email Marketing Report” (Litmus)
- “Form Design Best Practices for Lead Capture” (HubSpot)
- “How to set custom error messages for your HTML forms” (Kevin Powell, YouTube)
Form Automation Tips for Happier Users and Clients originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Generative UI Notes
I’m really interested in this emerging idea that the future of web design is Generative UI Design. We see hints of this already in products, like Figma Sites, that tout being able to create websites on the fly with prompts.
Putting aside the clear downsides of shipping half-baked technology as a production-ready product (which is hard to do), the angle I’m particularly looking at is research aimed at using Generative AI (or GenAI) to output personalized interfaces. It’s wild because it completely turns the way we think about UI design on its head. Rather than anticipating user needs and designing around them, GenAI sees the user’s needs and produces an interface custom-tailored to them. In a sense, a website becomes a snowflake where no two experiences with it are the same.
Again, it’s wild. I’m not here to speculate, opine, or preach on Generative UI Design (let’s call it GenUI for now). Just loose notes that I’ll update as I continue learning about it.
Defining GenUI
Google Research (PDF):
Generative UI is a new modality where the AI model generates not only content, but the entire user experience. This results in custom interactive experiences, including rich formatting, images, maps, audio and even simulations and games, in response to any prompt (instead of the widely adopted “walls-of-text”).
A generative UI (genUI) is a user interface that is dynamically generated in real time by artificial intelligence to provide an experience customized to fit the user’s needs and context.
A Generative User Interface (GenUI) is an interface that adapts to, or processes, context such as inputs, instructions, behaviors, and preferences through the use of generative AI models (e.g. LLMs) in order to enhance the user experience.
Put simply, a GenUI interface displays different components, information, layouts, or styles, based on who’s using it and what they need at that moment.
Credit: UX Collective

Generative vs. Predictive AI
It’s easy to dump “AI” into one big bucket, but it’s often distinguished as two different types: predictive and generative.
| | Predictive AI | Generative AI |
|---|---|---|
| Inputs | Uses smaller, more targeted datasets as input data. (Smashing Magazine) | Trained on large datasets containing millions of sample content. (U.S. Congress, PDF) |
| Outputs | Forecasts future events and outcomes. (IBM) | New content, including audio, code, images, text, simulations, and videos. (McKinsey) |
| Examples | ChatGPT, Claude | Sora, Suno, Cursor |

So, when we’re talking about GenAI, we’re talking about the ability to create new materials trained on existing materials. And when we’re talking specifically about GenUI, it’s about generating a user interface based on what the AI knows about the user.
AccessibilityAnd I should note that what I’m talking about here is not strictly GenUI in how we’ve defined it so far as UI output that adapts to individual user experiences, but rather “developing” generated interfaces. These so-called AI website builders do not adapt to the individual user, but it’s easy to see it heading in that direction.
The thing I’m most interested in — concerned with, frankly — is to what extent GenUI can reliably output experiences that cater to all users, regardless of impairment, be it aural, visual, physical, etc. There are a lot of different inputs to consider here, and we’ve seen just how awful the early results have been.
That last link is a big poke at Figma Sites. They’re easy to poke because they made the largest commercial push into GenUI-based web development. To their credit (perhaps?), they took the severe pushback to heart and decided to do something about it, announcing updates and publishing a guide for improving accessibility on Figma-generated sites. But even those have their limitations that make the effort and advice seem less useful and more about saving face.
Anyway. There are plenty of other players jumping into the game, notably WordPress, but also others like Vercel, Squarespace, Wix, GoDaddy, Lovable, and Reeady.
Some folks are more optimistic than others that GenUI is not only capable of producing accessible experiences, but will replace accessibility practitioners altogether as the technology evolves. Jakob Nielsen famously made that claim in 2024 which drew fierce criticism from the community. Nielsen walked that back a year later, but not much.
I’m not even remotely qualified to offer best practices, opine on the future of accessibility practice, or speculate on future developments and capabilities. But as I look at Google’s People + AI Guidebook, I see no mention at all of accessibility despite dripping with “human-centered” design principles.
Accessibility is a lagging consideration to the hype, at least to me. That has to change if GenUI is truly the “future” of web design and development.
Examples & Resources
Google has a repository of examples showing how user input can be used to render a variety of interfaces. Going a step further is Google’s Project Genie, which bills itself as creating “interactive worlds” that are “generated in real-time.” I couldn’t get an invite to try it out, but maybe you can.
In addition to that, Google has a GenUI SDK designed to integrate into Flutter apps. So, yeah. Connect to your LLM provider and let it rip to create adaptive interfaces.
Thesys is another one in the adaptive GenUI space. Copilot, too.
References
- Figma Sites
- “Do Not Publish Your Designs on the Web with Figma Sites…” (Adrian Roselli)
- “Generative UI: LLMs are Effective UI Generators” (Google Research, PDF)
- “Generative UI and Outcome-Oriented Design” (NN/Group)
- “An introduction to Generative UIs” (UX Collective)
- “A Simple Guide To Retrieval Augmented Generation Language Models” (Joas Pambou)
- “Generative Artificial Intelligence: Overview, Issues, and Considerations for Congress” (U.S. Congress, PDF)
- “Generative AI vs. predictive AI: What’s the difference?” (IBM)
- “What is generative AI?” (McKinsey & Company)
- “Introducing: Webbed Sites” (Heydon Pickering, Video)
- “Publish your designs on the web with Figma Sites” (Figma)
- “Figma Sites on Starter and Education, with more ways to share, customize, and expand your reach for Sites” (Figma)
- “Improve the accessibility of your site” (Figma Learn)
- “Accessibility Has Failed: Try Generative UI = Individualized UX” (Jakob Nielsen)
- “Hello AI Agents: Goodbye UI Design, RIP Accessibility” (Jakob Nielsen)
- “The People + AI Guidebook” (Google)
- Project Genie (Google Labs)
- “Get started with the GenUI SDK for Flutter” (Flutter Docs)
Generative UI Notes originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Experimenting With Scroll-Driven corner-shape Animations
Over the last few years, there’s been a lot of talk about and experimentation with scroll-driven animations. It’s a very shiny feature for sure, and as soon as it’s supported in Firefox (without a flag), it’ll be baseline. It’s part of Interop 2026, so that should be relatively soon. Essentially, scroll-driven animations tie an animation timeline’s position to a scroll position, so if you were 50% scrolled then you’d also be 50% into the animation, and they’re surprisingly easy to set up too.
I’ve been seeing significant interest in the new CSS corner-shape property as well, even though it only works in Chrome for now. This enables us to create corners that aren’t as rounded, or aren’t even rounded at all, allowing for some intriguing shapes that take little-to-no effort to create. What’s even more intriguing though is that corner-shape is mathematical, so it’s easily animated.
Hence, say hello to scroll-driven corner-shape animations (requires Chrome 139+ to work fully):
CodePen Embed Fallback

corner-shape in a nutshell
Real quick — the different values for corner-shape:
| corner-shape keyword | superellipse() equivalent |
|---|---|
| square | superellipse(infinity) |
| squircle | superellipse(2) |
| round | superellipse(1) |
| bevel | superellipse(0) |
| scoop | superellipse(-1) |
| notch | superellipse(-infinity) |

CodePen Embed Fallback

But what’s this superellipse() function all about? Well, basically, these keyword values are the result of this function. For example, superellipse(2) creates corners that aren’t quite squared but aren’t quite rounded either (the “squircle”). Whether you use a keyword or the superellipse() function directly, a mathematical equation is used either way, which is what makes it animatable. With that in mind, let’s dive into that demo above.
Animating corner-shape
The demo isn’t too complicated, so I’ll start off by dropping the CSS here, and then I’ll explain how it works line-by-line:
```css
@keyframes bend-it-like-beckham {
  from {
    corner-shape: notch;
    /* or */
    corner-shape: superellipse(-infinity);
  }
  to {
    corner-shape: square;
    /* or */
    corner-shape: superellipse(infinity);
  }
}

body::before {
  /* Fill viewport */
  content: "";
  position: fixed;
  inset: 0;

  /* Enable click-through */
  pointer-events: none;

  /* Invert underlying layer */
  mix-blend-mode: difference;
  background: white;

  /* Don’t forget this! */
  border-bottom-left-radius: 100%;

  /* Animation settings */
  animation: bend-it-like-beckham;
  animation-timeline: scroll();
}

/* Added to cards */
.no-filter {
  isolation: isolate;
}
```

CodePen Embed Fallback

In the code snippet above, body::before combined with content: "" creates a pseudo-element of the <body> with no content that is then fixed to every edge of the viewport. Also, since this animating shape will be on top of the content, pointer-events: none ensures that we can still interact with said content.
For the shape’s color I’m using mix-blend-mode: difference with background: white, which inverts the underlying layer, a trendy effect that to some degree only maintains the same level of color contrast. You won’t want to apply this effect to everything, so here’s a utility class to exclude the effect as needed:
```css
/* Added to cards */
.no-filter {
  isolation: isolate;
}
```

A comparison:
Left: Full application of blend mode. Right: Blend mode excluded from cards.

You’ll need to combine corner-shape with border-radius, which applies corner-shape: round by default. Yes, that’s right, border-radius doesn’t actually round corners — corner-shape: round does that under the hood. Rather, border-radius handles the x-axis and y-axis coordinates to draw from:
```css
/* Syntax */
border-bottom-left-radius: <x-axis-coord> <y-axis-coord>;

/* Usage */
border-bottom-left-radius: 50% 50%;
/* Or */
border-bottom-left-radius: 50%;
```

In our case, we’re using border-bottom-left-radius: 100% to slide those coordinates to the opposite end of their respective axes. However, we’ll be overwriting the implied corner-shape: round in our @keyframes animation, so we refer to that with animation: bend-it-like-beckham. There’s no need to specify a duration because it’s a scroll-driven animation, as defined by animation-timeline: scroll().
In the @keyframes animation, we’re animating from corner-shape: notch, which is like an inset square. This is equivalent to corner-shape: superellipse(-infinity), so it’s not actually squared, but it’s so aggressively sharp that it looks squared. This animates to corner-shape: square (an outset square), or corner-shape: superellipse(infinity).
Animating corner-shape… revisited
The demo above is actually a bit different from the one that I originally shared in the intro. It has one minor flaw, and I’ll show you how to fix it, but more importantly, you’ll learn more about an intricate detail of corner-shape.
The flaw: at the beginning and end of the animation, the curvature looks quite harsh because we’re animating between notch and square, right? It also looks like the shape is being sucked into the corners. Finally, the shape being stuck to the sides of the viewport makes the whole thing feel too contained.
The solution is simple:
```css
/* Change this... */
inset: 0;

/* ...to this */
inset: -1rem;
```

This stretches the shape beyond the viewport, and even though this makes the animation appear to start late and finish early, we can fix that by not animating from/to -infinity/infinity:
```css
@keyframes bend-it-like-beckham {
  from { corner-shape: superellipse(-6); }
  to { corner-shape: superellipse(6); }
}
```

Sure, this means that part of the shape is always visible, but we can fiddle with the superellipse() value to ensure that it stays outside of the viewport. Here’s a side-by-side comparison:
And the original demo (which is where we’re at now):
CodePen Embed Fallback

Adding more scroll features
Scroll-driven animations work very well with other scroll features, including scroll snapping, scroll buttons, scroll markers, simple text fragments, and simple JavaScript methods such as scrollTo()/scroll(), scrollBy(), and scrollIntoView().
For example, we only have to add the following CSS snippet to introduce scroll snapping that works right alongside the scroll-driven corner-shape animation that we’ve already set up:
```css
:root {
  /* Snap vertically */
  scroll-snap-type: y;

  section {
    /* Snap to section start */
    scroll-snap-align: start;
  }
}
```

CodePen Embed Fallback

“Masking” with corner-shape
In the example below, I’ve essentially created a border around the viewport and then a notched shape (corner-shape: notch) on top of it that’s the same color as the background (background: inherit). This shape completely covers the border at first, but then animates to reveal it (or in this case, the four corners of it):
CodePen Embed Fallback

If I make the shape a bit more visible, it’s easier to see what’s happening here, which is that I’m rotating this shape as well (rotate: 5deg), making the shape even more interesting.
This time around we’re animating border-radius, not corner-shape. When we animate to border-radius: 20vw / 20vh, 20vw and 20vh refer to the x-axis and y-axis of each corner, respectively, meaning that 20% of the border is revealed as we scroll.
The only other thing worth mentioning here is that we need to mess around with z-index to ensure that the content is higher up in the stacking context than the border and shape. Other than that, this example simply demonstrates another fun way to use corner-shape:
```css
@keyframes tech-corners {
  from { border-radius: 0; }
  to { border-radius: 20vw / 20vh; }
}

/* Border */
body::before {
  /* Fill (- 1rem) */
  content: "";
  position: fixed;
  inset: 1rem;
  border: 1rem solid black;
}

/* Notch */
body::after {
  /* Fill (+ 3rem) */
  content: "";
  position: fixed;
  inset: -3rem;

  /* Rotated shape */
  background: inherit;
  rotate: 5deg;
  corner-shape: notch;

  /* Animation settings */
  animation: tech-corners;
  animation-timeline: scroll();
}

main {
  /* Stacking fix */
  position: relative;
  z-index: 1;
}
```

Animating multiple corner-shape elements
In this example, we have multiple nested diamond shapes thanks to corner-shape: bevel, all leveraging the same scroll-driven animation where the diamonds increase in size, using padding:
CodePen Embed Fallback

```html
<div id="diamonds">
  <div>
    <div>
      <div>
        <div>
          <div>
            <div>
              <div>
                <div>
                  <div>
                    <div></div>
                  </div>
                </div>
              </div>
            </div>
          </div>
        </div>
      </div>
    </div>
  </div>
</div>
<main>
  <!-- Content -->
</main>
```

```css
@keyframes diamonds-are-forever {
  from { padding: 7rem; }
  to { padding: 14rem; }
}

#diamonds {
  /* Center them */
  position: fixed;
  inset: 50% auto auto 50%;
  translate: -50% -50%;

  /* #diamonds, the <div>s within */
  &, div {
    corner-shape: bevel;
    border-radius: 100%;
    animation: diamonds-are-forever;
    animation-timeline: scroll();
    border: 0.0625rem solid #00000030;
  }
}

main {
  /* Stacking fix */
  position: relative;
  z-index: 1;
}
```

That’s a wrap
We just explored animating from one custom superellipse() value to another, using corner-shape as a mask to create new shapes (again, while animating it), and animating multiple corner-shape elements at once. There are so many ways to animate corner-shape other than from one keyword to another, and if we make them scroll-driven animations, we can create some really interesting effects (although, they’d also look awesome if they were static).
Experimenting With Scroll-Driven corner-shape Animations originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
JavaScript for Everyone: Destructuring
Editor’s note: Mat Marquis and Andy Bell have released JavaScript for Everyone, an online course offered exclusively at Piccalilli. This post is an excerpt from the course taken specifically from a chapter all about JavaScript destructuring. We’re publishing it here because we believe in this material and want to encourage folks like yourself to sign up for the course. So, please enjoy this break from our regular broadcasting to get a small taste of what you can expect from enrolling in the full JavaScript for Everyone course.
I’ve been writing about JavaScript for long enough that I wouldn’t rule out a hubris-related curse of some kind. I wrote JavaScript for Web Designers more than a decade ago now, back in the era when packs of feral var still roamed the Earth. The fundamentals are sound, but the advice is a little dated now, for sure. Still, despite being a web development antique, one part of the book has aged particularly well, to my constant frustration.
An entire programming language seemed like too much to ever fully understand, and I was certain that I wasn’t tuned for it. I was a developer, sure, but I wasn’t a developer-developer. I didn’t have the requisite robot brain; I just put borders on things for a living.
— JavaScript for Web Designers

I still hear this sentiment from incredibly talented designers and highly technical CSS experts that somehow can’t fathom calling themselves “JavaScript developers,” as though they were tragically born without whatever gland produces the chemicals that make a person innately understand the concept of variable hoisting and could never possibly qualify — this despite the fact that many of them write JavaScript as part of their day-to-day work. While I may not stand by the use of alert() in some of my examples (again, long time ago), the spirit of JavaScript for Web Designers holds every bit as true today as it did back then: type a semicolon and you’re writing JavaScript. Write JavaScript and you’re a JavaScript developer, full stop.
Now, sooner or later, you do run into the catch: nobody is born thinking like JavaScript, but to get really good at JavaScript, you will need to learn how. In order to know why JavaScript works the way it does, why sometimes things that feel like they should work don’t, and why things that feel like they shouldn’t work sometimes do, you need to go one step beyond the code you’re writing or even the result of running it — you need to get inside JavaScript’s head. You need to learn to interact with the language on its own terms.
That deep-magic knowledge is the goal of JavaScript for Everyone, a course designed to help you get from junior- to senior developer. In JavaScript for Everyone, my aim is to help you make sense of the more arcane rules of JavaScript as-it-is-played — not just teach you the how but the why, using the syntaxes you’re most likely to encounter in your day-to-day work. If you’re brand new to the language, you’ll walk away from this course with a foundational understanding of JavaScript worth hundreds of hours of trial-and-error; if you’re a junior developer, you’ll finish this course with a depth of knowledge to rival any senior.
Thanks to our friends here at CSS-Tricks, I’m able to share the entire lesson on destructuring assignment. These are some of my favorite JavaScript syntaxes, which I’m sure we can all agree are normal and in fact very cool things to have — syntaxes as powerful as they are terse, all of them doing a lot of work with only a few characters. The downside of that terseness is that it makes these syntaxes a little more opaque than most, especially when you’re armed only with a browser tab open to MDN and a gleam in your eye. We got this, though — by the time you’ve reached the end of this lesson, you’ll be unpacking complex nested data structures with the best of them.
And if you missed it before, there’s another excerpt from the JavaScript for Everyone course covering JavaScript Expressions available here on CSS-Tricks.
Destructuring Assignment

When you’re working with a data structure like an array or object literal, you’ll frequently find yourself in a situation where you want to grab some or all of the values that structure contains and use them to initialize discrete variables. That makes those values easier to work with, but historically speaking, it can lead to pretty wordy code:
const theArray = [ false, true, false ];
const firstElement = theArray[0];
const secondElement = theArray[1];
const thirdElement = theArray[2];

This is fine! I mean, it works; it has for thirty years now. But as of 2015’s ES6, we’ve had a much more elegant option: destructuring.
Destructuring allows you to extract individual values from an array or object and assign them to a set of identifiers without needing to access the keys and/or values one at a time. In its most simple form — called binding pattern destructuring — each value is unpacked from the array or object literal and assigned to a corresponding identifier, all of which are declared with a single let or const (or var, technically, yes, fine). Brace yourself, because this is a strange one:
const theArray = [ false, true, false ];
const [ firstElement, secondElement, thirdElement ] = theArray;

console.log( firstElement ); // Result: false
console.log( secondElement ); // Result: true
console.log( thirdElement ); // Result: false

That’s the good stuff, even if it is a little weird to see brackets on that side of an assignment operator. That one binding covers all the same territory as the much more verbose snippet above it.
When working with an array, the individual identifiers are wrapped in a pair of array-style brackets, and each comma-separated identifier you specify within those brackets will be initialized with the value of the corresponding element in the source array. You’ll sometimes see destructuring referred to as unpacking a data structure, but despite how that and “destructuring” both sound, the original array or object isn’t modified by the process.
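If you want to verify the non-destructive part for yourself, a quick check (my own sanity test, not from the lesson) shows the source array comes through untouched:

```javascript
const theArray = [ false, true, false ];
const [ firstElement ] = theArray;

console.log( firstElement );    // Result: false
console.log( theArray.length ); // Result: 3 (the source array is unchanged)
console.log( theArray[0] );     // Result: false
```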
Elements can be skipped over by omitting an identifier between commas, the way you’d leave out a value when creating a sparse array:
const theArray = [ true, false, true ];
const [ firstElement, , thirdElement ] = theArray;

console.log( firstElement ); // Result: true
console.log( thirdElement ); // Result: true

There are a couple of differences in how you destructure an object using binding pattern destructuring. The identifiers are wrapped in a pair of curly braces rather than brackets; sensible enough, considering we’re dealing with objects. In the simplest version of this syntax, the identifiers you use have to correspond to the property keys:
const theObject = { "theProperty" : true, "theOtherProperty" : false };
const { theProperty, theOtherProperty } = theObject;

console.log( theProperty ); // result: true
console.log( theOtherProperty ); // result: false

An array is an indexed collection, and indexed collections are intended to be used in ways where the specific iteration order matters — for example, with destructuring here, where we can assume that the identifiers we specify will correspond to the elements in the array, in sequential order.
That’s not the case with an object, which is a keyed collection — in strict technical terms, just a big ol’ pile of properties that are intended to be defined and accessed in whatever order, based on their keys. No big deal in practice, though; odds are, you’d want to use the property keys’ identifier names (or something very similar) as your identifiers anyway. Simple and effective, but the drawback is that it assumes a given… well, structure to the object being destructured.
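Since matching happens by key rather than by position, the order you list the identifiers in doesn’t matter at all. A quick demonstration (mine, not from the lesson):

```javascript
const theObject = { "theProperty" : true, "theOtherProperty" : false };

// Identifiers listed in the "wrong" order still get the right values:
const { theOtherProperty, theProperty } = theObject;

console.log( theProperty );      // Result: true
console.log( theOtherProperty ); // Result: false
```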
This brings us to the alternate syntax, which looks absolutely wild, at least to me. The syntax is object literal shaped, but very, very different — so before you look at this, briefly forget everything you know about object literals:
const theObject = { "theProperty" : true, "theOtherProperty" : false };
const { theProperty : theIdentifier, theOtherProperty : theOtherIdentifier } = theObject;

console.log( theIdentifier ); // result: true
console.log( theOtherIdentifier ); // result: false

You’re still not thinking about object literal notation, right? Because if you were, wow would that syntax look strange. I mean, a reference to the property to be destructured where a key would be and identifiers where the values would be?
Fortunately, we’re not thinking about object literal notation even a little bit right now, so I don’t have to write that previous paragraph in the first place. Instead, we can frame it like this: within the curly braces, zero or more comma-separated instances of the property key with the value we want, followed by a colon, followed by the identifier we want that property’s value assigned to. After the curly braces, an assignment operator (=) and the object to be destructured. That’s all a lot in print, I know, but you’ll get a feel for it after using it a few times.
The second approach to destructuring is assignment pattern destructuring. With assignment patterns, the value of each destructured property is assigned to a specific target — like a variable we declared with let (or, technically, var), a property of another object, or an element in an array.
When working with arrays and variables declared with let, assignment pattern destructuring really just adds a step where you declare the variables that will end up containing the destructured values:
const theArray = [ true, false ];
let theFirstIdentifier;
let theSecondIdentifier;

[ theFirstIdentifier, theSecondIdentifier ] = theArray;

console.log( theFirstIdentifier ); // true
console.log( theSecondIdentifier ); // false

This gives you the same end result as you’d get using binding pattern destructuring, like so:
const theArray = [ true, false ];
let [ theFirstIdentifier, theSecondIdentifier ] = theArray;

console.log( theFirstIdentifier ); // true
console.log( theSecondIdentifier ); // false

Binding pattern destructuring will allow you to use const from the jump, though:
const theArray = [ true, false ];
const [ theFirstIdentifier, theSecondIdentifier ] = theArray;

console.log( theFirstIdentifier ); // true
console.log( theSecondIdentifier ); // false

Now, if you wanted to use those destructured values to populate another array or the properties of an object, you would hit a predictable double-declaration wall when using binding pattern destructuring:
// Error
const theArray = [ true, false ];
let theResultArray = [];
let [ theResultArray[1], theResultArray[0] ] = theArray;
// Uncaught SyntaxError: redeclaration of let theResultArray

We can’t make let/const/var do anything but create variables; that’s their entire deal. In the example above, the first part of the line is interpreted as let theResultArray, and we get an error: theResultArray was already declared.
No such issue when we’re using assignment pattern destructuring:
const theArray = [ true, false ];
let theResultArray = [];

[ theResultArray[1], theResultArray[0] ] = theArray;

console.log( theResultArray ); // result: Array [ false, true ]

Once again, this syntax applies to objects as well, with a few little catches:
const theObject = { "theProperty" : true, "theOtherProperty" : false };
let theProperty;
let theOtherProperty;

({ theProperty, theOtherProperty } = theObject );

console.log( theProperty ); // true
console.log( theOtherProperty ); // false

You’ll notice a pair of disambiguating parentheses around the line where we’re doing the destructuring. You’ve seen this before: without the grouping operator, a pair of curly braces in a context where a statement is expected is assumed to be a block statement, and you get a syntax error:
// Error
const theObject = { "theProperty" : true, "theOtherProperty" : false };
let theProperty;
let theOtherProperty;
{ theProperty, theOtherProperty } = theObject;
// Uncaught SyntaxError: expected expression, got '='

So far this isn’t doing anything that binding pattern destructuring couldn’t. We’re using identifiers that match the property keys, but any identifier will do, if we use the alternate object destructuring syntax:
const theObject = { "theProperty" : true, "theOtherProperty" : false };
let theFirstIdentifier;
let theSecondIdentifier;

({ theProperty: theFirstIdentifier, theOtherProperty: theSecondIdentifier } = theObject );

console.log( theFirstIdentifier ); // true
console.log( theSecondIdentifier ); // false

Once again, nothing binding pattern destructuring couldn’t do. But unlike binding pattern destructuring, any kind of assignment target will work with assignment pattern destructuring:
const theObject = { "theProperty" : true, "theOtherProperty" : false };
let resultObject = {};

({ theProperty : resultObject.resultProp, theOtherProperty : resultObject.otherResultProp } = theObject );

console.log( resultObject ); // result: Object { resultProp: true, otherResultProp: false }

With either syntax, you can set “default” values that will be used if an element or property isn’t present at all, or it contains an explicit undefined value:
const theArray = [ true, undefined ];
const [ firstElement, secondElement = "A string.", thirdElement = 100 ] = theArray;

console.log( firstElement ); // Result: true
console.log( secondElement ); // Result: A string.
console.log( thirdElement ); // Result: 100

const theObject = { "theProperty" : true, "theOtherProperty" : undefined };
const { theProperty, theOtherProperty = "A string.", aThirdProperty = 100 } = theObject;

console.log( theProperty ); // Result: true
console.log( theOtherProperty ); // Result: A string.
console.log( aThirdProperty ); // Result: 100

Snazzy stuff for sure, but where this syntax really shines is when you’re unpacking nested arrays and objects. Naturally, there’s nothing stopping you from unpacking an object that contains an object as a property value, then unpacking that inner object separately:
const theObject = { "theProperty" : true, "theNestedObject" : { "anotherProperty" : true, "stillOneMoreProp" : "A string." } };
const { theProperty, theNestedObject } = theObject;
const { anotherProperty, stillOneMoreProp = "Default string." } = theNestedObject;

console.log( stillOneMoreProp ); // Result: A string.

But we can make this way more concise. We don’t have to unpack the nested object separately — we can unpack it as part of the same binding:
const theObject = { "theProperty" : true, "theNestedObject" : { "anotherProperty" : true, "stillOneMoreProp" : "A string." } };
const { theProperty, theNestedObject : { anotherProperty, stillOneMoreProp } } = theObject;

console.log( stillOneMoreProp ); // Result: A string.

From an object within an object to three easy-to-use constants in a single line of code.
We can unpack mixed data structures just as succinctly:
const theObject = [{ "aProperty" : true, },{ "anotherProperty" : "A string." }];
const [{ aProperty }, { anotherProperty }] = theObject;

console.log( anotherProperty ); // Result: A string.

A dense syntax, there’s no question of that — bordering on “opaque,” even. It might take a little experimentation to get the hang of this one, but once it clicks, destructuring assignment gives you an incredibly quick and convenient way to break down complex data structures without spinning up a bunch of intermediate data structures and values.
Rest Properties

In all the examples above we’ve been working with known quantities: “turn these X properties or elements into Y variables.” That doesn’t match the reality of breaking down a huge, tangled object, jam-packed array, or both.
In the context of a destructuring assignment, an ellipsis (that’s ..., not …, for my fellow Unicode enthusiasts) followed by an identifier (to the tune of ...theIdentifier) represents a rest property — an identifier that will represent the rest of the array or object being unpacked. This rest property will contain all the remaining elements or properties beyond the ones we’ve explicitly unpacked to their own identifiers, all bundled up in the same kind of data structure as the one we’re unpacking:
const theArray = [ false, true, false, true, true, false ];
const [ firstElement, secondElement, ...remainingElements ] = theArray;

console.log( remainingElements ); // Result: Array(4) [ false, true, true, false ]

Generally I try to avoid examples that veer too close to real-world use on purpose, since they can get a little convoluted and I don’t want to distract from the core ideas — but in this case, “convoluted” is exactly what we’re looking to work around. So let’s use an object near and dear to my heart: (part of) the data representing the very first newsletter I sent out back when I started writing this course.
const firstPost = {
  "id": "mat-update-1.md",
  "slug": "mat-update-1",
  "body": "Hey, great to meet you, everybody. I'm Mat — \"Wilto\" is good too — and I'm here to teach you JavaScript. Not just what JavaScript is or what JavaScript does, but the *how* and the *why* of JavaScript. The weird stuff. The *deep magic_.\n\nWell, okay, I'm not *currently* here to teach you JavaScript, but I will be soon. Right now I'm just getting things in order for the course — planning, outlining, polishing the fancy semicolons that I only take out when I'm having company over, writing like 5,000 words about `this` as a warm-up that completely got away from me, that kind of thing.",
  "collection": "emails",
  "data": {
    "title": "Meet your Instructor",
    "pubDate": "2025-05-08T09:55:00.630Z",
    "headingSize": "large",
    "showUnsubscribeLink": true,
    "stream": "javascript-for-everyone"
  }
};

Quite a bit going on in there. For purposes of this exercise, assume this is coming in from an external API the way it is over on my website — this isn’t an object we control. Sure, we can work with that object directly, but that’s a little unwieldy when all we need is, for example, the newsletter title and body:
const firstPost = {
  "id": "mat-update-1.md",
  "slug": "mat-update-1",
  "body": "Hey, great to meet you, everybody. I'm Mat — \"Wilto\" is good too — and I'm here to teach you JavaScript. Not just what JavaScript is or what JavaScript does, but the *how* and the *why* of JavaScript. The weird stuff. The *deep magic_.\n\nWell, okay, I'm not *currently* here to teach you JavaScript, but I will be soon. Right now I'm just getting things in order for the course — planning, outlining, polishing the fancy semicolons that I only take out when I'm having company over, writing like 5,000 words about `this` as a warm-up that completely got away from me, that kind of thing.",
  "data": {
    "title": "Meet your Instructor",
    "pubDate": "2025-05-08T09:55:00.630Z",
    "headingSize": "large",
    "showUnsubscribeLink": true,
    "stream": "javascript-for-everyone"
  }
};

const { data : { title }, body } = firstPost;

console.log( title );
// Result: Meet your Instructor

console.log( body );
/* Result:
Hey, great to meet you, everybody. I'm Mat — "Wilto" is good too — and I'm here to teach you JavaScript. Not just what JavaScript is or what JavaScript does, but the *how* and the *why* of JavaScript. The weird stuff. The *deep magic_.

Well, okay, I'm not *currently* here to teach you JavaScript, but I will be soon. Right now I'm just getting things in order for the course — planning, outlining, polishing the fancy semicolons that I only take out when I'm having company over, writing like 5,000 words about `this` as a warm-up that completely got away from me, that kind of thing.
*/

That’s tidy; a couple dozen characters and we have exactly what we need from that tangle. I know I’m not going to need those id or slug properties to publish it on my own website, so I omit those altogether — but that inner data object has a conspicuous ring to it, like maybe one could expect it to contain other properties associated with future posts.
I don’t know what those properties will be, but I know I’ll want them all packaged up in a way where I can easily make use of them. I want the firstPost.data.title property in isolation, but I also want an object containing all the rest of the firstPost.data properties, whatever they end up being:
const firstPost = {
  "id": "mat-update-1.md",
  "slug": "mat-update-1",
  "body": "Hey, great to meet you, everybody. I'm Mat — \"Wilto\" is good too — and I'm here to teach you JavaScript. Not just what JavaScript is or what JavaScript does, but the *how* and the *why* of JavaScript. The weird stuff. The *deep magic_.\n\nWell, okay, I'm not *currently* here to teach you JavaScript, but I will be soon. Right now I'm just getting things in order for the course — planning, outlining, polishing the fancy semicolons that I only take out when I'm having company over, writing like 5,000 words about `this` as a warm-up that completely got away from me, that kind of thing.",
  "data": {
    "title": "Meet your Instructor",
    "pubDate": "2025-05-08T09:55:00.630Z",
    "headingSize": "large",
    "showUnsubscribeLink": true,
    "stream": "javascript-for-everyone"
  }
};

const { data : { title, ...metaData }, body } = firstPost;

console.log( title );
// Result: Meet your Instructor

console.log( metaData );
// Result: Object { pubDate: "2025-05-08T09:55:00.630Z", headingSize: "large", showUnsubscribeLink: true, stream: "javascript-for-everyone" }

Now we’re talking. Now we have a metaData object containing anything and everything else in the data property of the object we’ve been handed.
Listen. If you’re anything like me, even if you haven’t quite gotten your head around the syntax itself, you’ll find that there’s something viscerally satisfying about the binding in the snippet above. All that work done in a single line of code. It’s terse, it’s elegant — it takes the complex and makes it simple. That’s the good stuff.
And yet: maybe you can hear it too, ever-so-faintly? A quiet voice, way down in the back of your mind, that asks “I wonder if there’s an even better way.” For what we’re doing here, in isolation, this solution is about as good as it gets — but as far as the wide world of JavaScript goes: there’s always a better way. If you can’t hear it just yet, I bet you will by the end of the course.
Anyone who writes JavaScript is a JavaScript developer; there are no two ways about that. But the satisfaction of creating order from chaos in just a few keystrokes, and the drive to find even better ways to do it? Those are the makings of a JavaScript developer to be reckoned with.
You can do more than just “get by” with JavaScript; I know you can. You can understand JavaScript, all the way down to the mechanisms that power the language — the gears and springs that move the entire “interactive” layer of the web. To really understand JavaScript is to understand the boundaries of how users interact with the things we’re building, and broadening our understanding of the medium we work with every day sharpens all of our skills, from layout to accessibility to front-end performance to typography. Understanding JavaScript means less “I wonder if it’s possible to…” and “I guess we have to…” in your day-to-day decision making, even if you’re not the one tasked with writing it. Expanding our skillsets will always make us better — and more valued, professionally — no matter our roles.
JavaScript is a tricky thing to learn; I know that all too well — that’s why I wrote JavaScript for Everyone. You can do this, and I’m here to help.
I hope to see you there.
Check out the course

JavaScript for Everyone: Destructuring originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Consistent Character Maker Update
A couple months ago, I wrote about how design tools are the new design deliverables and built the LukeW Character Maker to illustrate the idea. Since then, people have made over 4,500 characters and I regularly get asked how it stays consistent. I recently updated the image model, error-checking, and prompts, so here's what changed and why.
New Image Model

Google recently released a new version of their image generation model (Nano Banana 2) and I put it to the test on my Character Maker. The results are noticeably more dynamic and three-dimensional than the previous version. Characters have more depth, better lighting, and more active poses. So I'm now using it as the default model (until Reve 1.5 is available as an API).
One of the ways I originally reinforced consistency in my character maker was by checking whether an image generation model's API returned images with the same dimensions as the reference images I sent it. If the dimensions didn't match, I knew the model had ignored the visual reference so I forced it to try again. In my testing, this was needed about 1 in every 30-40 images. A very simple check, but it worked well.
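The logic behind that verification loop is simple enough to sketch. Here's a rough reconstruction in JavaScript (my own sketch; generateImage, getDimensions-style helpers, and the retry limit are made-up stand-ins, not the actual pipeline code):

```javascript
// Hypothetical sketch of the dimension-based consistency check described
// above: if a generated image doesn't match the reference dimensions,
// assume the model ignored the visual reference and try again.
const MAX_ATTEMPTS = 3; // illustrative limit, not the real value

function matchesReference(generated, reference) {
  return generated.width === reference.width &&
         generated.height === reference.height;
}

async function generateWithCheck(prompt, reference, generateImage) {
  for (let attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
    const image = await generateImage(prompt);
    // Matching dimensions suggest the model honored the reference images.
    if (matchesReference(image, reference)) return image;
  }
  throw new Error('Model kept ignoring the visual reference');
}
```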
A week into using Nano Banana 2, that sizing check started throwing errors. Generated images were no longer coming back with the exact dimensions of my reference images, breaking my verification loop. I had to resize the reference images to match Google's default 1K image size (1365px by 768px). But that took away my consistency check, so I had to reinforce my prompt rewriter to make up for it.
Update: A day after publishing this overview, Google quietly changed the image format their API returns (from PNG to WEBP). This made image dimensions read incorrectly, causing every generation attempt to fail. I had to implement a fix that works regardless of what format Google decides to send back.
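One way to make that kind of fix format-agnostic (my sketch of the general idea, not the actual code) is to sniff the file's magic bytes before parsing dimensions, instead of trusting a Content-Type header or assuming PNG:

```javascript
// Detect an image's format from its first bytes rather than trusting
// the Content-Type header or a file extension. (Illustrative sketch.)
function detectImageFormat(buf) {
  // PNG signature: 89 50 4E 47
  if (buf.length >= 4 &&
      buf[0] === 0x89 && buf[1] === 0x50 &&
      buf[2] === 0x4e && buf[3] === 0x47) return 'png';
  // WEBP: "RIFF" container with "WEBP" at bytes 8-11
  if (buf.length >= 12 &&
      buf[0] === 0x52 && buf[1] === 0x49 && buf[2] === 0x46 && buf[3] === 0x46 &&
      buf[8] === 0x57 && buf[9] === 0x45 && buf[10] === 0x42 && buf[11] === 0x50) return 'webp';
  // JPEG: FF D8 FF
  if (buf.length >= 3 &&
      buf[0] === 0xff && buf[1] === 0xd8 && buf[2] === 0xff) return 'jpeg';
  return 'unknown';
}
```

With the format known up front, the dimension reader can dispatch to the right parser no matter what the API decides to return next.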
Prompt Rewriter Iteration

This is where most of the ongoing work happens. As real people used the tool, edge cases piled up and the first step of my pipeline (prompt rewriting) had to evolve. For example, my character is supposed to be faceless (no eyes, no mouth, no hair). This had to be reinforced progressively over several iterations. Turns out image models really want to put a face on things.
For color accuracy, I shifted from named colors like "lime-green" that relied on the reference images for accuracy to explicitly adding both HEX codes and RGB values. Getting the exact greens to reproduce consistently required that level of specificity. I also added default outfit color rules for when people try to request color changes.
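To give a concrete sense of what shipping both notations into a prompt looks like, here's a small helper (my own illustration; the real prompt rewriter isn't public, and the color below is just limegreen's hex code, not the character's actual palette) that derives the RGB triplet from a hex code:

```javascript
// Turn a hex code into a prompt fragment carrying both notations,
// so the image model gets the color spelled out two ways.
function colorRule(name, hex) {
  const n = parseInt(hex.slice(1), 16);
  const r = (n >> 16) & 0xff;
  const g = (n >> 8) & 0xff;
  const b = n & 0xff;
  return `${name} must be exactly ${hex} (RGB ${r}, ${g}, ${b})`;
}

console.log(colorRule('The jacket', '#32CD32'));
// The jacket must be exactly #32CD32 (RGB 50, 205, 50)
```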
Content moderation expanded steadily as people found creative ways to push boundaries. I blocked categories like gore, inappropriate clothing, and full body color changes, while loosening rejection criteria from blocking any "appearance changes" to only rejecting clearly inappropriate inputs. The goal: allow creative freedom while preventing abuse.
The overall approach was: start broad, then iteratively tighten character consistency while expanding content moderation guardrails as real usage revealed what was needed.
At this point, my character comes back consistent almost every time. About 1 in 50 generations still produces an extra arm or a mouth (he's faceless, remember?). I've tested checking each image with a vision model, then sending it back for regeneration if something is off (examples above). But given how rarely this happens and how much latency and cost it would add to auto-check every image, it's currently not worth the tradeoff for me. For other use cases, it might be?
If you haven't already, try the LukeW Character Maker yourself. Though I might have to revisit the pipeline again if you get too creative.
What’s !important #7: random(), Folded Corners, Anchored Container Queries, and More
For this issue of What’s !important, we have a healthy balance of old CSS that you might’ve missed and new CSS that you don’t want to miss. This includes random(), random-item(), folded corners using clip-path, backdrop-filter, font-variant-numeric: tabular-nums, the Popover API, anchored container queries, anchor positioning in general, DOOM in CSS, customizable <select>, :open, scroll-triggered animations, <toolbar>, and somehow, more.
Let’s dig in.
Understanding random() and random-item()

Alvaro Montoro explains how the random() and random-item() CSS functions work. As it turns out, they’re actually quite complex:
width: random(--w element-shared, 1rem, 2rem);
color: random-item(--c, red, orange, yellow, darkkhaki);

Creating folded corners using clip-path

My first solution to folded corners involved actual images. Not a great solution, but that was the way to do it in the noughties. Since then we’ve been able to do it with box-shadow, but Kitty Giraudel has come up with a CSS clip-path solution that clips a custom shape (hover the kitty to see it in action):
CodePen Embed Fallback

Revisiting backdrop-filter and font-variant-numeric: tabular-nums

Stuart Robson talks about backdrop-filter. It’s not a new CSS property, but it’s very useful and hardly ever talked about. In fact, up until now, I thought that it was for the ::backdrop pseudo-element, but we can actually use it to create all kinds of background effects for all kinds of elements, like this:
CodePen Embed Fallback

font-variant-numeric: tabular-nums is another one. This property and value prevents layout shift when numbers change dynamically, as they do with live clocks, counters, timers, financial tables, and so on. Amit Merchant walks you through it with this demo:
CodePen Embed Fallback

Getting started with the Popover API

Godstime Aburu does a deep dive on the Popover API, a new(ish) but everyday web platform feature that simplifies tooltip and tooltip-like UI patterns, but isn’t without its nuances.
Unraveling yet another anchor positioning quirk

Just another anchor positioning quirk, this time from Chris Coyier. These quirks have been piling up for a while now. We’ve talked about them time and time again, but the thing is, they’re not bugs. Anchor positioning works in a way that isn’t commonly understood, so Chris’ article is definitely worth a read, as are the articles that he references.
Building dynamic toggletips using anchored container queries

In this walkthrough, I demonstrate how to build dynamic toggletips using anchored container queries. Also, I ran into an anchor positioning quirk, so if you’re looking to solidify your understanding of all that, I think the walkthrough will help with that too.
Demo (full effect requires Chrome 143+):
CodePen Embed Fallback

DOOM in CSS

DOOM in CSS. DOOM. In CSS.
DOOM fully rendered in CSS. Every surface is a <div> that has a background image, with a clipping path with 3D transforms applied. Of course CSS does not have a movable camera, so we rotate and translate the scene around the user.
- Safari Technology Preview 238
- Customizable <select>
- :open (to my surprise, as I thought it was Baseline already)
- Chrome 146
In addition, Chrome will ship every two weeks starting September.
From the Quick Hits reel, you might’ve missed that Font Awesome launched a Kickstarter campaign to transform Eleventy into Build Awesome, cancelled it because their emails failed to send (despite meeting their goal!), and vowed to try again. You can subscribe to the relaunch notification.
Also, <toolbar> is coming along according to Luke Warlow. This is akin to <focusgroup>, which we can actually test in Chrome 146 with the “Experimental Web Platform features” flag enabled.
Right, I’m off to slay some demons in DOOM. Until next time!
P.S. Congratulations to Kevin Powell for making it to 1 million YouTube subs!
What’s !important #7: random(), Folded Corners, Anchored Container Queries, and More originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
4 Reasons That Make Tailwind Great for Building Layouts
When I talk about layouts, I’m referring to how you place items on a page. The CSS properties that are widely used here include:
- display — often grid or flex nowadays
- margin
- padding
- width
- height
- position
- top, left, bottom, right
I often include border-width as a minor item in this list as well.
At this point, there’s only one thing I’d like to say.
Tailwind is really great for making layouts.
There are many reasons why.
First: Layout styles are highly dependent on the HTML structure

When we shift layouts into CSS, we lose the mental structure and it takes effort to re-establish it. Imagine the following three-column grid in HTML and CSS:
<div class="grid">
  <div class="grid-item"></div>
  <div class="grid-item"></div>
</div>

.grid {
  display: grid;
  grid-template-columns: repeat(3, 1fr);

  .grid-item:first-child { grid-column: span 2 }
  .grid-item:last-child { grid-column: span 1 }
}

Now cover the HTML structure and just read the CSS. As you do that, notice you need to exert effort to imagine the HTML structure that this applies to.
Now imagine the same, but built with Tailwind utilities:
<div class="grid grid-cols-3">
  <div class="col-span-2"></div>
  <div class="col-span-1"></div>
</div>

You might almost begin to see the layout manifest in your eyes without seeing the actual output. It’s pretty clear: A three-column grid, first item spans two columns while the second one spans one column.
But grid-cols-3 and col-span-2 are kinda weird and foreign-looking because we’re trying to parse Tailwind’s method of writing CSS.
Now, watch what happens when we shift the syntax out of the way and use CSS variables to define the layout instead. The layout becomes crystal clear immediately:
<div class="grid-simple [--cols:3]">
  <div class="[--span:2]"> ... </div>
  <div class="[--span:1]"> ... </div>
</div>

Same three-column layout.
But it makes the layout much easier to write, read, and visualize. It also has other benefits, but I’ll let you explore its documentation instead of explaining it here.
For now, let’s move on.
Why not use 2fr 1fr?

It makes sense to write 2fr 1fr for a three-column grid, doesn’t it?
.grid {
  display: grid;
  grid-template-columns: 2fr 1fr;
}

Unfortunately, it won’t work. This is because fr is calculated based on the available space after subtracting away the grid’s gutters (or gap).
Since 2fr 1fr only contains two columns, the output from 2fr 1fr will be different from a standard three-column grid.
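The arithmetic makes the difference concrete. Suppose the grid is 640px wide with a 20px gap (numbers are mine, purely for illustration). With repeat(3, 1fr) there are two gaps, so each track gets (640 - 40) / 3 = 200px, and an item spanning two columns is 200 + 20 + 200 = 420px wide. With 2fr 1fr there's only one gap, so the tracks come out to roughly 413.33px and 206.67px. A quick script shows the mismatch:

```javascript
// Compare track widths: repeat(3, 1fr) vs. 2fr 1fr, both in a 640px
// grid with a 20px gap. (Illustrative numbers, not from the article.)
function trackWidths(frs, containerWidth, gap) {
  const gaps = (frs.length - 1) * gap;     // fewer tracks means fewer gaps
  const available = containerWidth - gaps; // fr divides what's left over
  const totalFr = frs.reduce((sum, fr) => sum + fr, 0);
  return frs.map(fr => (available * fr) / totalFr);
}

const threeCol = trackWidths([1, 1, 1], 640, 20); // [200, 200, 200]
const twoCol   = trackWidths([2, 1], 640, 20);    // [413.33..., 206.66...]

// An item spanning two of the three 1fr tracks also absorbs the gap
// between them:
const spanTwo = threeCol[0] + 20 + threeCol[1];
console.log(spanTwo);   // 420
console.log(twoCol[0]); // roughly 413.33, not the same width
```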
Alright. Let’s continue with the reasons that make Tailwind great for building layouts.
Second: No need to name layouts

I think layouts are the hardest things to name. I rarely come up with better names than:
- Number + Columns, e.g. .two-columns
- Semantic names, e.g. .content-sidebar
But these names don’t do the layout justice. You can’t really tell what’s going on, even if you see .two-columns, because .two-columns can mean a variety of things:
- Two equal columns
- Two columns with 1fr auto
- Two columns with auto 1fr
- Two columns that span a total of 7 “columns”, where the first object takes up 4 columns while the second takes up 3…
You can already see me tripping up when I try to explain that last one there…
Instead of forcing ourselves to name the layout, we can let the numbers do the talking — then the whole structure becomes very clear.
<div class="grid-simple [--cols:7]">
  <div class="[--span:4]"> ... </div>
  <div class="[--span:3]"> ... </div>
</div>

The variables paint a picture.
Third: Layout requirements can change depending on context

A “two-column” layout might have different properties when used in different contexts. Here’s an example.
In this example, you can see that:
- A larger gap is used between the I and J groups.
- A smaller gap is used within the I and J groups.
The difference in gap sizes is subtle, but used to show that the items are of separate groups.
Here’s an example where this concept is used in a real project. You can see the difference between the gap used within the newsletter container and the gap used between the newsletter and quote containers.
If this sort of layout is only used in one place, we don’t have to create a modifier class just to change the gap value. We can change it directly.
<div class="grid-simple [--cols:2] gap-8">
  <div class="grid-simple gap-4 [--cols:2]"> ... </div>
  <div class="grid-simple gap-4 [--cols:2]"> ... </div>
</div>

Another common example

Let’s say you have a heading for a marketing section. The heading would look nicer if you are able to vary its max-width so the text isn’t orphaned.
text-balance might work here, but this is often nicer with manual positioning.
Without Tailwind, you might write an inline style for it.
<h2 class="h2" style="max-width: 12em;">
  Your subscription has been confirmed
</h2>

With Tailwind, you can specify the max-width in a more terse way:
<h2 class="h2 max-w-[12em]">
  Your subscription has been confirmed
</h2>

Fourth: Responsive variants can be created on the fly
“At which breakpoint would you change your layouts?” is another factor you’d want to consider when designing your layouts. I shall term this the responsive factor for this section.
Most likely, similar layouts should have the same responsive factor. In that case, it makes sense to group the layouts together into a named layout.
.two-column {
  @apply grid-simple; /* --cols: 1 is the default */

  @media (width >= 800px) {
    --cols: 2;
  }
}

However, you may have layouts where you want two-column grids on mobile and a much larger column count on tablets and desktops. This layout style is commonly used in a site footer component.
Since the footer grid is unique, we can add Tailwind’s responsive variants and change the layout on the fly.
<div class="grid-simple [--cols:2] md:[--cols:5]">
  <!-- span set to 1 by default so there's no need to specify them -->
  <div> ... </div>
  <div> ... </div>
  <div> ... </div>
  <div> ... </div>
  <div> ... </div>
  <div> ... </div>
</div>

Again, we get to create a new layout on the fly without creating an additional modifier class — this keeps our CSS clean and focused.
How to best use Tailwind
This article is a sample lesson from my course, Unorthodox Tailwind, where I show you how to use Tailwind and CSS synergistically.
Personally, I think the best way to use Tailwind is not to litter your HTML with Tailwind utilities, but to create utilities that let you create layouts and styles easily.
I cover much more of that in the course if you’re interested in finding out more!
4 Reasons That Make Tailwind Great for Building Layouts originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Durable Patterns in AI Product Design
In my recent Designing AI Products talk, I outlined several of the lessons we've learned building AI-native companies over the past four years: specifically, the patterns that keep proving durable as we speed-run through this evolution of what AI products will ultimately become.
I opened by framing something I think is really important: every time there's a major technology platform shift, almost everything about what an "application" is changes. From mainframes to personal computers, from desktop software to web apps, from web to mobile, the way we build, deliver, and experience software transforms completely each time.
There's always this awkward period where we try to cram the old paradigm into the new one. I dug up an old deck from when we were redesigning Yahoo, and even two years after the iPhone launched, we were still just trying to port the Yahoo webpage into a native iOS app. The same thing is happening now with AI. The difference is this evolution is moving really, really fast.
From there, I walked through the stages of AI product evolution as I've experienced them.
The first stage is AI working behind the scenes. Back in 2016, Google Translate was "completely reinvented," but the interface itself changed not at all. What actually happened was they replaced all these separate translation systems with a single neural network that could translate between language pairs it was never explicitly trained on. YouTube made a similar move with deep learning for video recommendations. The UIs stayed the same; everything transformative was happening under the hood.
I remember being at Google for years where the conversation was always about how to make machine learning more of a core part of the experience, but it never really got to the point where people were explicitly interacting with an AI model.
That changed with the explosion of chat. ChatGPT and everything that looks exactly like it made direct conversation with AI models the dominant pattern, and chat got bolted onto nearly every software product in a very short time. I illustrated this with Ask LukeW, a system I built almost three years ago that lets people talk to my body of work in natural language. It seems pretty simple now, but building and testing it surfaced a few patterns that have carried over into everything we've done since.
One is suggested questions. When you ask something, the system shows follow-up suggestions tied to your question and the broader corpus. When we tested this, we found these did an enormous amount of heavy lifting. They helped people understand what the system could do and how to use it.
A huge percentage of all interactions kicked off from one of these suggestions. And they've only gotten better with stronger models. In our newer products like Rev (for creatives) and Intent (for developers), the suggestions have become so relevant that people often just pick them with keyboard shortcuts instead of typing anything at all.
Another pattern is citation. Even just seeing where information comes from gives people a real trust boost. In Ask LukeW, you could hover over a citation and it would take you to the specific part of a document or video. This was an early example, but as AI systems gain access to more tools and can do much more than look up information, the question of how to represent what they did and why in the interface becomes increasingly important.
And the third is what I call the walls of text problem. Because so much of this is built on large language models, people are often left staring at big blocks of text they have to parse and interpret. We found that bringing back multimedia, like responding with images alongside text, or using diagrams and interactive elements, helped a lot.
Through that walkthrough of what now seems like a pretty simple AI application, I'd actually touched on what I think are the three core issues that remain with us today: capability awareness (what can I do here?), context awareness (what is the system looking at?), and the walls of text problem (too much output to process).
The next major stage is things becoming agentic. When AI models can use tools, make plans, configure those tools, analyze results, think in between steps, and fire off more tools based on what they find, the complexity of what to show in the UI explodes. And this compounds when you remember that most of this is getting bolted into side panels of existing software. I showed a developer tool where a single request to an agent produced this enormous thread of tool calls, model responses, more tool calls, and on and on. It's just a lot to take in.
A common reaction is to just show less of it, collapse it, or hide it entirely. And some AI products do that. But what I've seen consistently is that users fall into two groups. One group really wants to see what the system is thinking and doing and why. The other group just wants to let it rip and see what comes out. I originally thought this was a new-versus-experienced user thing, but it honestly feels more like two distinct mindsets.
We've tried many different approaches. In Bench, a workspace for knowledge work, we showed all tool calls on the left, let you click into each one to see what it did, and expand the thinking steps between them. You could even open individual tool calls and see their internal steps. That was a lot.
As we iterated, we moved from highlighting every tool call to condensing them, surfacing just what they were doing, and eventually showing processes inline as single lines you could expand if you wanted. The pattern we've landed on in Intent is collapsed single-line entries for each action. If you really want to, you can pop one open and see what happened inside, but for the most part, collapsing these things (and even finding ways to collapse collapses of these things) is where we are now.
We also experimented with separating process from results entirely. In ChatDB, when you ask a question, the thinking steps appear on the left while results show up on the right. You can scroll through results independently while keeping the summary visible, or open up the thought process to see why it did what it did. Changing the layout to give actual results more prominence while still making the reasoning accessible has worked well.
On the capability awareness front, I showed several approaches we've explored. One is prompt enhancement, where you type something simple and the model rewrites it into a much more detailed, context-aware instruction. This gets really interesting when the system can automatically search a codebase (like our product Augment does) to find relevant patterns and write better instructions that account for them.
Another approach was Bench's visual task builder, where you compose compound sentences from columns of capabilities: "I want to... search... Notion for... a topic... and create a PowerPoint summarizing the findings." This gives people tremendous visibility into what the system can do while also helping them point it in the right direction.
And then there's onboarding. Designers are familiar with the empty screen problem, and the usual advice is to throw tooltips or tutorials at it. But it turns out we can have the AI model handle all of this instead. In ChatDB, when you drag a spreadsheet onto the page, the system picks a color, picks an icon, names the dashboard, starts running analysis, and generates charts for you. You learn what it does by watching it do things, rather than trying to figure out what you can tell it to do.
For context awareness, I showed how products like Reve let you spatially tell the model what to pay attention to. You can highlight an object in an image, drag in reference art, move elements around, and then apply all those changes. You're being very explicit through the interface about what the model should focus on. I also showed context panels where you can attach files, select text, or point the model at specific folders.
The final stage I explored is agents orchestrating other agents. In Intent, there's an agent orchestration mode where a coordinator agent figures out the plan, shows it to you for review, and then kicks off a bunch of sub-agents to execute different parts of the work in parallel. You can watch each agent working on its piece. I think there's a big open question here about where the line is.
How much can people actually process and manage? If you use the metaphor of being a manager or a CEO, can you be a CEO of CEOs? I don't think we know yet, but this is clearly where the evolution is heading.
The throughline of the whole talk was that while the final form of AI applications hasn't been figured out, certain patterns keep proving their value at each stage. Those durable patterns, the ones that hang around and sometimes become even more important as things evolve, are the ones worth paying close attention to.
Finding the Role of Humans in AI Products
As AI products have evolved from models behind the scenes to chat interfaces to agentic systems to agents coordinating other agents, the design question has begun to shift. It used to be about how people interact with AI. Now it's about where and how people fit in.
The clearest example of this is in software development. In Anthropic's 2025 data, software developers made up 3% of U.S. workers but nearly 40% of all Claude conversations. A year later, their 2026 Measuring Agent Autonomy report showed software engineering accounting for roughly 50% of AI agent deployments. Whatever developers are doing with AI now, other domains are likely to follow suit.
And what developers have been doing is watching their role abstract upward at a pace that's hard to overstate.
- First, humans wrote code. You typed, the computer did what you said.
- Then machines started suggesting. GitHub Copilot's early form was essentially AI behind the scenes, offering inline completions. You picked which suggestions to use. Still very much in the driver's seat.
- Then humans started talking to AI directly. The chat era. You could describe what you wanted in natural language, paste in a broken function, brainstorm architecture. The model became a collaborator.
- Then agents got tools. The model doesn't just respond with text anymore. It searches files, calls APIs, writes code, checks its own work, and decides what to do next based on the results. You're no longer directing each step.
- Then came orchestration. A coordinator agent receives your request, builds a plan, and delegates to specialized sub-agents. You review and approve the plan, but execution fans out across multiple autonomous workers.
To make this more tangible, our developer workspace, Intent, makes use of agent orchestration where a coordinator agent analyzes what needs to happen, searches across relevant resources, and generates a plan. Once you approve that plan, the coordinator kicks off specialized agents to do the work: one handling the design system, another building out navigation, another coordinating their outputs. Your role is to review, approve, and steer.
Stack that one more level and you've got machines running machines running machines. At which point: where exactly does the human sit?
To use a metaphor we're all familiar with: a manager keeps tabs on a handful of direct reports. A director manages managers. A CEO manages directors. At each layer, the person at the top trades direct understanding for leverage. They see less of the actual work and more of the summaries, status updates, and roll-ups.
But being an effective CEO is extraordinarily rare. Not just thinking you can do it, but actually doing it well. And a CEO of CEOs? The number of people who have operated at that scale is vanishingly small.
Which raises two questions. First, how far up the stack can humans actually go? Agent orchestration? Orchestration of orchestration? Where does it break down? Second, at whatever level we land on, what skills do people need to operate there?
The durable skills may turn out to be steering, delegation, and awareness: knowing what to ask for, how much autonomy to grant, and when to look under the hood. These aren't programming skills. They're closer to the skills of a good leader who knows when to let the team run and when to step in.
We used to design how people interact with software. Now we're designing how much they need to.
The Value of z-index
The z-index property is one of the most important tools any UI developer has at their disposal, as it allows you to control the stacking order of elements on a webpage. Modals, toasts, popups, dropdowns, tooltips, and many other common elements rely on it to ensure they appear above other content.
While most resources focus on the technical details or the common pitfalls of the Stacking Context (we’ll get to that in a moment…), I think they miss one of the most important and potentially chaotic aspects of z-index: the value.
In most projects, once you hit a certain size, the z-index values become a mess of “magic numbers”, a chaotic battlefield of values, where every team tries to outdo the others with higher and higher numbers.
How This Idea Started
I saw this line on a pull request a few years ago:
z-index: 10001;

I thought to myself, “Wow, that’s a big number! I wonder why they chose that specific value?” When I asked the author, they said: “Well, I just wanted to make sure it was above all the other elements on the page, so I chose a high number.”
This got me thinking about how we look at the stacking order of our projects, how we choose z-index values, and more importantly, the implications of those choices.
The Fear of Being Hidden
The core issue isn’t a technical one, but a lack of visibility. In a large project with multiple teams, you don’t always know what else is floating on the screen. There might be a toast notification from Team A, a cookie banner from Team B, or a modal from the marketing SDK.
The developer’s logic was simple in this case: “If I use a really high number, surely it will be on top.”
This is how we end up with magic numbers, these arbitrary values that aren’t connected to the rest of the application. They are guesses made in isolation, hoping to win the “arms race” of z-index values.
We’re Not Talking About Stacking Context… But…
As I mentioned at the beginning, there are many resources that cover z-index in the context of the Stacking Context. In this article, we won’t cover that topic. However, it’s impossible to talk about z-index values without at least mentioning it, as it’s a crucial concept to understand.
Essentially, elements with a higher z-index value will be displayed in front of those with a lower value as long as they are in the same Stacking Context.
If they aren’t, then even if you set a massive z-index value on an element in a “lower” stack, elements in a “higher” stack will stay on top of it, even if they have a very low z-index value. This means that sometimes, even if you give an element the maximum possible value, it can still end up being hidden behind something else.
Now let’s get back to the values.
💡 Did you know? The maximum value for z-index is 2147483647. Why this specific number? It’s the maximum value for a 32-bit signed integer. If you try to go any higher, most browsers will simply clamp it to this limit.
The Problem With “Magic Numbers”
Using arbitrary high values for z-index can lead to several issues:
- Lack of maintainability: When you see a z-index value like 10001, it doesn’t tell you anything about its relationship to other elements. It’s just a number that was chosen without any context.
- Potential for conflicts: If multiple teams or developers are using high z-index values, they might end up conflicting with each other, leading to unexpected behavior where some elements are hidden behind others.
- Difficult to debug: When something goes wrong with the stacking order, it can be challenging to figure out why, especially if there are many elements with high z-index values.

A Better Approach
I’ve encountered this “arms race” in almost every large project I’ve been a part of. The moment you have multiple teams working in the same codebase without a standardized system, chaos eventually takes over.
The solution is actually quite simple: tokenization of z-index values.
Now, wait, stay with me! I know that the moment someone mentions “tokens”, some developers might roll their eyes or shake their heads, but this approach actually works. Most of the major (and better-designed) design systems include z-index tokens for a reason. Teams that adopt them swear by them and never look back.
By using tokens, you gain:
- Simple and easy maintenance: You manage values in one place.
- Conflict prevention: No more guessing if 100 is higher than whatever Team B is using.
- Easier debugging: You can see exactly which “layer” an element belongs to.
- Better Stacking Context management: It forces you to think about layers systematically rather than as random numbers.
Let’s look at how this works in practice. I’ve prepared a simple demo where we manage our layers through a central set of tokens in the :root:
:root {
  --z-base: 0;
  --z-toast: 100;
  --z-popup: 200;
  --z-overlay: 300;
}

This setup is incredibly convenient. If you need to add a new popup or a toast, you know exactly which z-index to use. If you want to change the order — for example, to place toasts above the overlay — you don’t need to hunt through dozens of files. You just change the values in the :root, and everything updates accordingly in one place.
Handling New Elements
The real power of this system shines when your requirements change. Suppose you need to add a new sidebar and place it specifically between the base content and the toasts.
In a traditional setup, you’d be checking every existing element to see what numbers they use. With tokens, we simply insert a new token and adjust the scale:
:root {
  --z-base: 0;
  --z-sidebar: 100;
  --z-toast: 200;
  --z-popup: 300;
  --z-overlay: 400;
}

You don’t have to touch a single existing component with this setup. You update the tokens and you’re good to go. The logic of your application remains consistent, and you’re no longer guessing which number is “high enough”.
The Power of Relative Layering
We sometimes want to “lock” specific layers relative to each other. A great example of this is a background element for a modal or an overlay. Instead of creating a separate token for the background, we can calculate its position relative to the main layer.
Using calc() allows us to maintain a strict relationship between elements that always belong together:
.overlay-background {
  z-index: calc(var(--z-overlay) - 1);
}

This ensures that the background will always stay exactly one step behind the overlay, no matter what value we assign to the --z-overlay token.
Managing Internal Layers
Up until now, we’ve focused on the main, global layers of the application. But what happens inside those layers?
The tokens we created for the main layers (like 100, 200, etc.) are not suitable for managing internal elements. This is because most of these main components create their own Stacking Context. Inside a popup that has z-index: 300, a value of 301 is functionally identical to 1. Using large global tokens for internal positioning is confusing and unnecessary.
Note: For these local tokens to work as expected, you must ensure the container creates a Stacking Context. If you’re working on a component that doesn’t already have one (e.g., it doesn’t have a z-index set), you can create one explicitly using isolation: isolate.
To solve this, we can introduce a pair of “local” tokens specifically for internal use:
:root {
  /* ... global tokens ... */
  --z-bottom: -10;
  --z-top: 10;
}

This allows us to handle internal positioning with precision. If you need a floating action button inside a popup to stay on top, or a decorative icon on a toast to sit behind the main content, you can use these local anchors:
.popup-close-button {
  z-index: var(--z-top);
}

.toast-decorative-icon {
  z-index: var(--z-bottom);
}

For even more complex internal layouts, you can still use calc() with these local tokens. If you have multiple elements stacking within a component, calc(var(--z-top) + 1) (or - 1) gives you that extra bit of precision without ever needing to look at global values.
This keeps our logic consistent: we think about layers and positions systematically, rather than throwing random numbers at the problem and hoping for the best.
Versatile Components: The Tooltip Case
One of the biggest headaches in CSS is managing components that can appear anywhere, like a tooltip.
Traditionally, developers give tooltips a massive z-index (like 9999) because they might appear over a modal. But if the tooltip is physically inside the modal’s DOM structure, its z-index is only relative to that modal anyway.
A tooltip simply needs to be above the content it’s attached to. By using our local tokens, we can stop the guessing game:
.tooltip {
  z-index: var(--z-top);
}

Whether the tooltip is on a button in the main content, an icon inside a toast, or a link within a popup, it will always appear correctly above its immediate surroundings. It doesn’t need to know about the global “arms race” because it’s already standing on the “stable floor” provided by its parent layer’s token.
Negative Values Can Be Good
Negative values often scare developers. We worry that an element with z-index: -1 will disappear behind the page background or some distant parent.
However, within our systematic approach, negative values are a powerful tool for internal decorations. When a component creates its own Stacking Context, the z-index is confined to that component. And z-index: var(--z-bottom) simply means “place this behind the default content of this specific container”.
This is perfect for:
- Component backgrounds: Subtle patterns or gradients that shouldn’t interfere with text.
- Shadow simulations: When you need more control than box-shadow provides.
- Inner glows or borders: Elements that should sit “under” the main UI.
With just a few CSS variables, we’ve built a complete management system for z-index. It’s a simple yet powerful way to ensure that managing layers never feels like a guessing game again.
To maintain a clean and scalable codebase, here are the golden rules for working with z-index:
- No magic numbers: Never use arbitrary values like 999 or 10001. If a number isn’t tied to a system, it’s a bug waiting to happen.
- Tokens are mandatory: Every z-index in your CSS should come from a token, either a global layer token or a local positioning token.
- It’s rarely the value: If an element isn’t appearing on top despite a “high” value, the problem is almost certainly its Stacking Context, not the number itself.
- Think in layers: Stop asking “how high should this be?” and start asking “which layer does this belong to?”
- Calc for connection: Use calc() to bind related elements together (like an overlay and its background) rather than giving them separate, unrelated tokens.
- Local contexts for local problems: Use local tokens (--z-top, --z-bottom) and internal stacking contexts to manage complexity within components.
By following these rules, you turn z-index from a chaotic source of bugs into a predictable, manageable part of your design system. The value of z-index isn’t in how high the number is, but in the system that defines it.
Bonus: Enforcing a Clean System
A system is only as good as its enforcement. In a deadline-driven environment, it’s easy for a developer to slip in a quick z-index: 999 to “make it just work”. Without automation, your beautiful token system will eventually erode back into chaos.
To prevent this, I developed a library specifically designed to enforce this exact system: z-index-token-enforcer.
npm install z-index-token-enforcer --save-dev

It provides a unified set of tools to automatically flag any literal z-index values and require developers to use your predefined tokens:
- Stylelint plugin: For standard CSS/SCSS enforcement
- ESLint plugin: To catch literal values in CSS-in-JS and React inline styles
- CLI scanner: A standalone script that can quickly scan files directly or be integrated into your CI/CD pipelines
By using these tools, you turn the “Golden Rules” from a recommendation into a hard requirement, ensuring that your codebase stays clean, scalable, and, most importantly, predictable.
The Value of z-index originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Steven Heller’s Font of the Month: Curve Display
Read the book, Typographic Firsts
What typographers and designers want most in a font is a family that, under the right conditions, exudes a familiar yet distinctive voice. As I was scrolling around for this month’s selection, I found just that very face. It is a distinctive yet suggestive legacy with a “feeling” that is […]
The post Steven Heller’s Font of the Month: Curve Display appeared first on I Love Typography Ltd.
Popover API or Dialog API: Which to Choose?
Choosing between Popover API and Dialog API is difficult because they seem to do the same job, but they don’t!
After lots of research, I discovered that the Popover API and Dialog API are wildly different in terms of accessibility. So, if you’re trying to decide whether to use the Popover API or the Dialog API, I recommend you:
- Use Popover API for most popovers.
- Use the Dialog API only for modal dialogs.
The relationship between popovers and dialogs is confusing to most developers, but it’s actually quite simple.
Dialogs are simply subsets of popovers. And modal dialogs are subsets of dialogs. Read this article if you want to understand the rationale behind this relationship.
This is why you could use the Popover API even on a <dialog> element.
<!-- Using popover on a dialog element -->
<dialog popover>...</dialog>

Stylistically, the difference between popovers and modals is even clearer:
- Modals should show a backdrop.
- Popovers should not.
Therefore, you should never style a popover’s ::backdrop element. Doing so will simply indicate that the popover is a dialog — which opens up a whole can of problems.
You should only style a modal’s ::backdrop element.
Popover API and its accessibility
Building a popover with the Popover API is relatively easy. You specify three things:
- a popovertarget attribute on the popover trigger,
- an id on the popover, and
- a popover attribute on the popover.
The popovertarget must match the id.
<button popovertarget="the-popover"> ... </button>

<dialog popover id="the-popover">
  The Popover Content
</dialog>

Notice that I’m using the <dialog> element to create a dialog role. This is optional, but recommended. I do this because dialog is a great default role, since most popovers are simply dialogs.
These few lines of code come with a ton of accessibility features already built in for you:
- Automatic focus management
- Focus goes to the popover when opening.
- Focus goes back to the trigger when closing.
- Automatic aria connection
- No need to write aria-expanded, aria-haspopup, and aria-controls. Browsers handle those natively. Woo!
- Automatic light dismiss
- Popover closes when user clicks outside.
- Popover closes when they press the Esc key.
Now, without additional styling, the popover looks kinda meh. Styling is a whole ‘nother issue, so we’ll tackle that in a future article. Geoff has a few notes you can review in the meantime.
Dialog API and its accessibility
Unlike the Popover API, the Dialog API doesn’t have many built-in features by default:
- No automatic focus management
- No automatic ARIA connection
- No automatic light dismiss
So, we have to build them ourselves with JavaScript. This is why the Popover API is superior to the Dialog API in almost every aspect — except for one: when modals are involved.
The Dialog API has a showModal method. When showModal is used, the Dialog API creates a modal. It:
- automatically inerts other elements,
- prevents users from tabbing into other elements, and
- prevents screen readers from reaching other elements.
It does this so effectively, we no longer need to trap focus within the modal.
But we gotta take care of the focus and ARIA stuff when we use the Dialog API, so let’s tackle the bare minimum code you need for a functioning dialog.
We’ll begin by building the HTML scaffold:
<button
  class="modal-invoker"
  data-target="the-modal"
  aria-haspopup="dialog"
>...</button>

<dialog id="the-modal">The Popover Content</dialog>

Notice I did not add any aria-expanded in the HTML. I do this for a variety of reasons:
- This reduces the complexity of the HTML.
- We can write aria-expanded, aria-controls, and the focus stuff directly in JavaScript – since these won’t work without JavaScript.
- Doing so makes this HTML very reusable.
I’m going to write about a vanilla JavaScript implementation here. If you’re using a framework, like React or Svelte, you will have to make a couple of changes — but I hope that it’s gonna be straightforward for you.
First thing to do is to loop through all dialog-invokers and set aria-expanded to false. This creates the initial state.
We will also set aria-controls to the <dialog> element. We’ll do this even though aria-controls is poop, ’cause there’s no better way to connect these elements (and there’s no harm connecting them) as far as I know.
const modalInvokers = Array.from(document.querySelectorAll('.modal-invoker'))

modalInvokers.forEach(invoker => {
  const dialogId = invoker.dataset.target
  const dialog = document.querySelector(`#${dialogId}`)
  invoker.setAttribute('aria-expanded', false)
  invoker.setAttribute('aria-controls', dialogId)
})

Opening the modal
When the invoker/trigger is clicked, we gotta:
- change the aria-expanded from false to true to show the modal to assistive tech users, and
- use the showModal function to open the modal.
We don’t have to write any code to hide the modal in this click handler because users will never get to click on the invoker when the dialog is opened.
```js
modalInvokers.forEach(invoker => {
  // ...

  // Opens the modal
  invoker.addEventListener('click', event => {
    invoker.setAttribute('aria-expanded', true)
    dialog.showModal()
  })
})
```

Great. The modal is open. Now we gotta write code to close the modal.
Closing the modal

By default, showModal doesn't give you light dismiss, so users can't close the modal by clicking on the overlay (they can still press the Esc key, though). This means we have to add another button that closes the modal. This must be placed within the modal content.
```html
<dialog id="the-modal">
  <button class="modal-closer">X</button>
  <!-- Other modal content -->
</dialog>
```

When users click the close button, we have to:
- set aria-expanded on the opening invoker to false,
- close the modal with the close method, and
- bring focus back to the opening invoker element.
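Here's a sketch of that close handler as a standalone helper (hypothetical wiring on my part; the .modal-closer class comes from the snippet above):

```javascript
// Hypothetical helper: wires up the close button described above
function wireModalCloser(dialog, invoker) {
  const closer = dialog.querySelector('.modal-closer')

  closer.addEventListener('click', () => {
    invoker.setAttribute('aria-expanded', false) // 1. reset the invoker's state
    dialog.close()                               // 2. close the modal
    invoker.focus()                              // 3. return focus to the invoker
  })
}
```

You'd call this once per invoker/dialog pair, right after setting up the open handler from earlier.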
Phew, with this, we’re done with the basic implementation.
Of course, there’s advanced work like light dismiss and styling… which we can tackle in a future article.
Can you use the Popover API to create modals?

Yeah, you can.
But you will have to handle these on your own:
- Inerting other elements
- Trapping focus
I think what we did earlier (setting aria-expanded, aria-controls, and focus) are easier compared to inerting elements and trapping focus.
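For a taste of the first task, here's a rough, hypothetical helper (the toggle event and its newState property are part of the Popover API; note that inert also removes elements from the tab order, which covers part of the focus work):

```javascript
// Hypothetical helper: inert everything except the popover while it's open
function inertOthersWhileOpen(popover, root) {
  popover.addEventListener('toggle', event => {
    const others = [...root.children].filter(el => el !== popover)
    // Remove the rest of the page from tab order and the
    // accessibility tree while the popover is open
    others.forEach(el => { el.inert = event.newState === 'open' })
  })
}
```

You might call inertOthersWhileOpen(popover, document.body) once, after the popover is set up.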
The Dialog API might become much easier to use in the future

A proposal for invoker commands has been created so that the Dialog API can get a declarative trigger, much like the Popover API’s popovertarget.
This is on the way, so we might be able to make modals even simpler with the Dialog API in the future. In the meantime, we gotta do the necessary work to patch accessibility stuff.
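Based on the current shape of that proposal (attribute names could still change before shipping), opening a modal declaratively might look like this:

```html
<!-- Hypothetical: invoker commands, no JavaScript needed -->
<button commandfor="the-modal" command="show-modal">Open modal</button>
<dialog id="the-modal">...</dialog>
```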
Deep dive into building workable popovers and modals

We’ve only begun to scratch the surface of building working popovers and modals with the code above — they’re barebones versions that are accessible, but they definitely don’t look nice and can’t be used for professional purposes yet.
To make the process of building popovers and modals easier, we will dive deeper into the implementation details for a professional-grade popover and a professional-grade modal in future articles.
In the meantime, I hope these give you some ideas on when to choose the Popover API and the Dialog API!
Remember, there’s no need to use both. One will do.
Popover API or Dialog API: Which to Choose? originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Small Teams Win, Again
I’ve always believed in the power of small teams. The start-ups I co-founded never exceeded five employees, yet achieved a lot. With today's technology, even more companies can remain extremely small and be extremely effective. And that's awesome.
When Twitter acquired Bagcheck in 2011, Sam (CTO) and I were shipping multiple times a day. We started with a command line interface that let us figure out what objects and actions we needed before ever building any UI. When we did, we used logic-less templates so I could iterate on the front-end quickly while Sam managed the back-end code.
The point was to move fast and learn. With just two people building the product, we never got bottlenecked on decision-making or coordination. While conventional wisdom says "add more resources" to go faster, it rarely works out that way. Most companies go slow because of plodding decision making and opaque alignment. Smaller teams naturally don't have this problem.
But small teams can only do so much, right? That's why every team in a big company is always asking for more resources. Not anymore.
Armed with highly capable AI systems, everyone (designer, developer, etc.) on a team can get more done. In big teams, though, these new capabilities smack head first into the decision-making and alignment problems that have always been there. In small teams, they don't.
So how small? Surely we need at least 100? 50? Bagcheck never crossed four employees, and when Google acquired my next company, Polar, in 2014, there were five of us. These companies pre-dated AI coding agents and large language models. With today's AI capabilities, the number of people you need to get a lot done fast is probably a lot smaller than you think.
What’s !important #6: :heading, border-shape, Truncating Text From the Middle, and More
Despite what’s been a sleepy couple of weeks for new Web Platform Features, we have an issue of What’s !important that’s prrrretty jam-packed. The web community had a lot to say, it seems, so fasten your seatbelts!
@keyframes animations can be strings

Peter Kröner shared an interesting fact about @keyframes animations — that they can be strings:

```css
@keyframes "@animation" {
  /* ... */
}

#animate-this {
  animation: "@animation";
}
```

Yo dawg, time for a #CSS fun fact: keyframe names can be strings. Why? Well, in case you want your keyframes to be named “@keyframes,” obviously!

#webdev
I don’t know why you’d want to do that, but it’s certainly an interesting thing to learn about @keyframes after 11 years of cross-browser support!
: vs. = in style queries

Another hidden trick, this one from Temani Afif, has revealed that we can replace the colon in a style query with an equals symbol. Temani does a great job at explaining the difference, but here’s a quick code snippet to sum it up:

```css
.Jay-Z {
  --Problems: calc(98 + 1);

  /* Evaluates as calc(98 + 1), color is blueivy */
  color: if(style(--Problems: 99): red; else: blueivy);

  /* Evaluates as 99, color is red */
  color: if(style(--Problems = 99): red; else: blueivy);
}
```

In short, = evaluates --Problems differently to :, even though Jay-Z undoubtedly has 99 of them (he said so himself).
Declarative <dialog>s (and an updated .visually-hidden)

David Bushell demonstrated how to create <dialog>s declaratively using invoker commands, a useful feature that allows us to skip some J’Script in favor of HTML, and which recently became available in all web browsers.
Also, thanks to an inquisitive question from Ana Tudor, the article spawned a spin-off about the minimum number of styles needed for a visually-hidden utility class. Is it still seven?
Maybe not…
How to truncate text from the middle

Wes Bos shared a clever trick for truncating text from the middle using only CSS:
Someone on reddit posted a demo where CSS truncates text from the middle.
They didn't post the code, so here is my shot at it with Flexbox
Donnie D’Amato attempted a more-native solution using ::highlight(), but ::highlight() has some limitations, unfortunately. As Henry Wilkinson mentioned, Hazel Bachrach’s 2019 call for a native solution is still an open ticket, so fingers crossed!
How to manage color variables with relative color syntax

Theo Soti demonstrated how to manage color variables with relative color syntax. While not a new feature or concept, it’s frankly the best and most comprehensive walkthrough I’ve ever read that addresses these complexities.
How to customize lists (the modern way)

In a similar article for Piccalilli, Richard Rutter comprehensively showed us how to customize lists, although this one has some nuggets of what I can only assume is modern CSS. What’s symbols()? What’s @counter-style and extends? Richard walks you through everything.
Source: Piccalilli.

Can’t get enough of counters? Juan Diego put together a comprehensive guide right here on CSS-Tricks.
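To give a taste of those features, here's a hedged sketch (browser support varies, and the class names are mine; check Richard's article for the details):

```css
/* symbols(): a one-off, inline counter style */
ul.starred {
  list-style: symbols(cyclic "★" "☆");
}

/* @counter-style with extends: tweak a built-in style */
@counter-style dashed {
  system: extends disc;
  suffix: " – ";
}
ul.dashed {
  list-style: dashed;
}
```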
How to create typescales using :heading

Safari Technology Preview 237 recently began trialing :heading/:heading(), as Stuart Robson explains. The follow-up is even better though, as it shows us how pow() can be used to write cleaner typescale logic, although I ultimately settled on the old-school <h1>–<h6> elements with a simpler implementation of :heading and no sibling-index():
```css
:root {
  --font-size-base: 16px;
  --font-size-scale: 1.5;
}

:heading {
  /* Other heading styles */
}

/* Assuming only base/h3/h2/h1 */
body { font-size: var(--font-size-base); }
h3 { font-size: calc(var(--font-size-base) * var(--font-size-scale)); }
h2 { font-size: calc(var(--font-size-base) * pow(var(--font-size-scale), 2)); }
h1 { font-size: calc(var(--font-size-base) * pow(var(--font-size-scale), 3)); }
```

Una Kravets introduced border-shape

Speaking of new features, border-shape came as a surprise to me considering that we already have — or will have — corner-shape. However, border-shape is different, as Una explains. It addresses the issues with borders (because it is the border), allows for more shapes and even the shape() function, and overall it works differently behind the scenes.
Source: Una Kravets.

modern.css wants you to stop writing CSS like it’s 2015

It’s time to start using all of that modern CSS, and that’s exactly what modern.css wants to help you do. All of those awesome features that weren’t supported when you first read about them, that you forgot about? Or the ones that you missed or skipped completely? Well, modern.css has 75 code snippets and counting, and all you have to do is copy ‘em.
Kevin Powell also has some CSS snippets for you

And the commenters? They have some too!

Honestly, Kevin is the only web dev talker that I actually follow on YouTube, and he’s so close to a million followers right now, so make sure to hit ol’ K-Po’s “Subscribe” button.
In case you missed it

Actually, you didn’t miss that much! Firefox 148 released the shape() function, which was being held captive by a flag, but is now a baseline feature. Safari Technology Preview 237 became the first to trial :heading. Those are all we’ve seen from our beloved browsers in the last couple of weeks (not counting the usual flurry of smaller updates, of course).
That being said, Chrome, Safari, and Firefox announced their targets for Interop 2026, revealing which Web Platform Features they intend to make consistent across all web browsers this year, which more than makes up for the lack of shiny features this week.
Also coming up (but testable in Chrome Canary now, just like border-shape) is the scrolled keyword for scroll-state container queries. Bramus talks about scrolled scroll-state queries here.
Remember, if you don’t want to miss anything, you can catch these Quick Hits as the news breaks in the sidebar of css-tricks.com.
See you in a fortnight!
What’s !important #6: :heading, border-shape, Truncating Text From the Middle, and More originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Yet Another Way to Center an (Absolute) Element
TL;DR: We can center absolute-positioned elements in three lines of CSS. And it works on all browsers!
```css
.element {
  position: absolute;
  place-self: center;
  inset: 0;
}
```

Why? Well, that needs a longer answer.
In recent years, CSS has brought a lot of new features that don’t necessarily allow us to do new stuff, but certainly make them easier and simpler. For example, we don’t have to hardcode indexes anymore:
```html
<ul style="--t: 8">
  <li style="--i: 1"></li>
  <li style="--i: 2"></li>
  <!-- ... -->
  <li style="--i: 8"></li>
</ul>
```

Instead, all this is condensed into the sibling-index() and sibling-count() functions. There are lots of recent examples like this.
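As a quick sketch of what that buys us (assuming sibling-index()/sibling-count() support; the specific properties are just examples), the hardcoded custom properties simply disappear:

```css
/* No --i or --t needed: each item knows its own position */
li {
  animation-delay: calc(sibling-index() * 100ms);
  width: calc(100% / sibling-count());
}
```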
Still, there is one little task where it feels like we’ve been doing the same thing for decades: centering an absolutely-positioned element, which we usually achieve like this:
```css
.element {
  position: absolute;
  top: 50%;
  left: 50%;
  translate: -50% -50%;
}
```

We move the element’s top-left corner to the center, then translate it back by 50% so it’s centered.

There is nothing wrong with this way — we’ve been doing it for decades. But still it feels like the old way. Is it the only way? Well, there is another not-so-known cross-browser way to not only center, but also easily place any absolutely-positioned element. And what’s best, it reuses the familiar align-self and justify-self properties.
Turns out that these properties (along with their place-self shorthand) now work on absolutely-positioned elements. However, if we try to use them as is, we’ll notice our element doesn’t even flinch.
```css
/* Doesn't work!! */
.element {
  position: absolute;
  place-self: center;
}
```

So, how do align-self and justify-self work for absolute elements? It may be obvious to say they should align the element, and that’s true, but specifically, they align it within its Inset-Modified Containing Block (IMCB). Okay… But what’s the IMCB?
Imagine we set our absolute element’s width and height to 100%. Even if the element’s position is absolute, it certainly doesn’t grow infinitely; rather, it’s enclosed by what’s known as the containing block.

For absolutely-positioned elements, the containing block is the closest positioned ancestor (or an ancestor that establishes one another way, such as with a transform). By default, that’s the initial containing block, which has the same dimensions as the viewport and sits at the start of the page.
We can modify that containing block using inset properties (specifically top, right, bottom, and left). I used to think that inset properties fixed the element’s corners (I even said it a couple of seconds ago), but under the hood, we are actually fixing the IMCB borders.
By default, the IMCB is the same size as the element’s dimensions. So before, align-self and justify-self were trying to center the element within itself, resulting in nothing. Then, our last step is to set the IMCB so that it is the same as the containing block.
```css
.element {
  position: absolute;
  place-self: center;
  top: 0;
  right: 0;
  bottom: 0;
  left: 0;
}
```

Or, using their inset shorthand:

```css
.element {
  position: absolute;
  place-self: center;
  inset: 0;
}
```

Only three lines! A win for CSS nerds. Admittedly, I might be cheating since, in the old way, we could also use the inset property and reduce it to three lines, but… let’s ignore that fact for now.
CodePen Embed FallbackWe aren’t limited to just centering elements, since all the other align-self and justify-self positions work just fine. This offers a more idiomatic way to position absolute elements.
Pro tip: If we want to leave a space between the absolutely-positioned element and its containing block, we could either add a margin to the element or set the element’s inset to the desired spacing (which shrinks its IMCB).
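For example, a sketch of the inset approach:

```css
/* The element now centers within a containing block
   inset by 1rem on every side */
.element {
  position: absolute;
  place-self: center;
  inset: 1rem;
}
```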
What’s best, I checked Caniuse, and while initially Safari didn’t seem to support it, upon testing, it seems to work on all browsers!
Yet Another Way to Center an (Absolute) Element originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
How Not to Take 10 Years to Design a Typeface
Read the book, Typographic Firsts
I have often heard type designers talk about the many years they spend developing a typeface. I would listen with awe and think, “That must have been a real challenge. It must be exquisitely crafted and probably a little bit groundbreaking too.” So it feels slightly absurd to admit that […]
The post How Not to Take 10 Years to Design a Typeface appeared first on I Love Typography Ltd.
An Exploit … in CSS?!
Ok, take a deep breath.
We’ll have some fun understanding this vulnerability once you make sure your browser isn’t affected, using the table below.
| Chromium-based browser | Am I safe? |
| --- | --- |
| Google Chrome | Ensure you’re running version 145.0.7632.75 or later. Go to Settings > About Chrome and check for updates. |
| Microsoft Edge | Ensure you’re running version 145.0.3800.58 or later. Click on the three dots (…) on the very right-hand side of the window, then click on Help and Feedback > About Microsoft Edge. |
| Vivaldi | Ensure you’re running version 7.8 or later. Click the V icon (menu) in the top-left corner, select Help > About. |
| Brave | Ensure you’re running version 1.87.188 or later. Click the hamburger menu on the top right, select Help > About Brave. |

So, you updated your browser and said a prayer. When you’re able to string whole sentences together again, your first question is: Has CSS really had the dubious honor of being the cause of the first zero-day exploit in Chromium-based browsers for 2026?
I mean, the Chrome update channel says they fixed a high-severity vulnerability described as “[u]se after free in CSS” … on Friday the 13th, no less! If you can’t trust a release with a description and date like that, what can you trust? Google credits security researcher Shaheen Fazim with reporting the exploit. The dude’s LinkedIn says he’s a professional bug hunter, and I’d say he deserves the highest possible bug bounty for finding something that a government agency says “in CSS in Google Chrome before 145.0.7632.75 allowed a remote attacker to execute arbitrary code inside a sandbox via a crafted HTML page.”
Is this really a CSS exploit?

Something doesn’t add up. Even this security researcher swears by using CSS instead of JavaScript, so her security-minded readers don’t need to enable JavaScript when they read her blog. She trusts the security of CSS, even though she understands it enough to create a pure CSS x86 emulator (sidenote: woah). So far, most of us have taken for granted that the possible security issues in CSS are relatively tame. Surely we don’t suddenly live in a world where CSS can hijack someone’s OS, right?
Well, in my opinion, the headlines describing the bug as a CSS exploit in Chrome are a bit clickbait-y, because they make it sound like a pure CSS exploit, as though malicious CSS and HTML would be enough to perform it. If I’m being honest, when I first skimmed those articles in the morning before rushing out to catch the train to work, the way the articles were worded made me imagine malicious CSS like:
```css
.malicious-class {
  vulnerable-property: 'rm -rf *';
}
```

In the fictional, nightmare version of the bug that my misinformed imagination had conjured, some such CSS could be “crafted” to inject that shell command somewhere it would run on the victim’s machine. Even re-reading the reports more carefully, they feel intentionally misleading, and it wasn’t just me. My security-minded friend’s first question to me was, “But… isn’t CSS, like, super validatable?” And then I dug deeper and found out the CSS in the proof of concept for the exploit isn’t the malicious bit, which is why CSS validation wouldn’t have helped!
It doesn’t help the misunderstanding when the SitePoint article about CVE-2026-2441 bizarrely lies to its readers about what this exploit is, instead describing a different medium-severity bug that allows sending the rendered value of an input field to a malicious server by loading images in CSS. That is not what this vulnerability is.
It’s not really a CSS exploit in the sense that JavaScript is the part that exploits the bug. I’ll concede that the line of code that creates the condition necessary for a malicious script to perform this attack was in Google Chrome’s Blink CSS engine component, but the CSS involved isn’t the malicious part.
So, how did the exploit work?

The CSS involvement in the exploit lies in the way Chrome’s rendering engine turns certain CSS into a CSS object model. Consider the CSS below:

```css
@font-feature-values VulnTestFont {
  @styleset {
    entry_a: 1;
    entry_b: 2;
    entry_c: 3;
    entry_d: 4;
    entry_e: 5;
    entry_f: 6;
    entry_g: 7;
    entry_h: 8;
  }
}
```

When this CSS is parsed, a CSSFontFeaturesValueMap is added to the collection of CSSRule objects in document.styleSheets[0].cssRules. There was a bug in the way Chrome managed the memory for the HashMap data structure underlying the JavaScript representation of the CSSFontFeaturesValueMap, which inadvertently allowed a malicious script to access memory it shouldn’t be able to. This by itself isn’t sufficient to cause harm other than crashing the browser, but it can form the basis for a Use After Free (UAF) exploit.
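For illustration only, this sketch shows how little a script needs to do to reach the object in question (wrapped in a function here for clarity; rule.styleset is how the CSSOM exposes the parsed entries):

```javascript
// Harmless on its own: grab the parsed @font-feature-values rule
// from the CSSOM and return its styleset map
function getStyleset(doc) {
  const rule = doc.styleSheets[0].cssRules[0]
  return rule.styleset
}
```

A malicious page would call getStyleset(document) and then abuse the mismanaged memory behind the returned map.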
Chrome’s description of the patch mentions that “Google is aware that an exploit for CVE-2026-2441 exists in the wild,” although for obvious reasons, they are coy about the details for a full end-to-end exploit. Worryingly, @font-feature-values isn’t new — it’s been available since early 2023 — but the discovery of an end-to-end UAF exploit may be recent. It would make sense if the code that created the possibility of this exploit is old, but someone only pulled off a working exploit recently. If you look at this detailed explanation of a 2020 Use After Free vulnerability in Chrome within the WebAudio API, you get the sense that accessing freed memory is only one piece of the puzzle to get a UAF exploit working. Modern operating systems create hoops that attackers have to go through, which can make this kind of attack quite hard.
Real-world examples of this kind of vulnerability get complex, especially in a Chrome vulnerability where you can only trigger low-level statements indirectly. But if you know C and want to understand the basic principles with a simplified example, you can try this coding challenge. Another way to grasp the ideas is this Medium post about the recent Chrome CSSFontFeaturesValueMap exploit, which includes a cute analogy: the pointer to the object is like a leash you are still holding even after you freed your dog — but an attacker hooks the leash to a cat instead (known as type confusion), so when you command your “dog” to bark, the cat has been trained to do something malicious instead.
The world is safe again, but for how long?

The one-line fix I mentioned Chrome made was to change the Blink code to work with a deep copy of the HashMap that underlies the CSSFontFeaturesValueMap rather than a pointer to it, so there is no possibility of referencing freed memory. By contrast, Firefox rewrote its CSS renderer in Rust, whose compiler enforces memory safety. Chromium has supported the use of Rust since 2023. One of the motivations mentioned was “safer (less complex C++ overall, no memory safety bugs in a sandbox either)” and to “improve the security (increasing the number of lines of code without memory safety bugs, decreasing the bug density of code) of Chrome.” Since the UAF class of exploit has recurred in Chromium over the years, and these vulnerabilities tend to be high-severity when discovered, a more holistic approach to defending against such vulnerabilities might be needed, so I don’t have to freak you out with another article like this.
An Exploit … in CSS?! originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
A Complete Guide to Bookmarklets
You’re surely no stranger to bookmarks. The ability to favorite, save, or “bookmark” web pages has been a staple browser feature for decades. Browsers don’t just let you bookmark web pages, though. You can also bookmark JavaScript, allowing you to do so much more than merely save pages.
A JavaScript script saved as a bookmark is called a “bookmarklet,” although some people also use the term “favelet” or “favlet.” Bookmarklets have been around since the late 90s. The site that coined them, bookmarklets.com, even remains around today. They’re simple and versatile, a fact evidenced by how most of the bookmarklets listed on the aforementioned site still work today despite being untouched for over two decades.
While bookmarklets have fallen by the wayside in recent years as browsers have grown more capable and dev tools have matured, they’re still a valuable tool in any web developer’s arsenal. They’re simple but capable, and no additional software is needed to create or use them. If you watch any good machinist or engineer at work, they’re constantly building tools and utilities, even one-off contraptions, to address problems or come to a more graceful solution as they work. As developers, we should endeavor to do the same, and bookmarklets are a perfect way to facilitate such a thing.
Making a Bookmarklet

Bookmarklets are extremely easy to make. You write a script in exactly the same manner you would if writing it for the browser console. You then save it as a bookmark, prefixing it with javascript:, which designates it for use in the browser URL bar.
Let’s work through making a super basic bookmarklet, one that sends a simple alert. We’ll take the below code, which triggers a message using the alert() method, and bookmarklet-ify it.
```js
alert("Hello, World!");
```

Next, we will turn it into an Immediately Invoked Function Expression (IIFE), which has a few benefits. Firstly, it creates a new scope to avoid polluting the global namespace and prevents our bookmarklet from interfering with JavaScript already on the page, or vice versa. Secondly, it will cause the bookmarklet to trigger upon click.
We’ll achieve this by enclosing it within an anonymous function (lambda) (e.g., (() => {})) and suffixing it with ();, which will execute our function.
```js
(() => {
  alert("Hello, World!");
})();
```

For reliability across browsers, it is to our benefit to URL-encode our bookmarklet to escape special characters. Without doing so, browsers can go awry and misinterpret our code. Even if it isn’t entirely necessary with a simple bookmarklet like this, it can prevent a lot of trouble that may arise with more complexity. You can encode your bookmarklet yourself using JavaScript’s encodeURIComponent() function, or you can use one of a number of existing tools. We’ll also reduce it to a single line.
```
(()%3D%3E%7Balert(%22Hello%2C%20World!%22)%3B%7D)()%3B
```

We must prefix javascript: so that our browser knows this is not a standard URL to a webpage but instead a JavaScript bookmarklet.
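If you’d rather not use an online encoder, the same result can be produced with encodeURIComponent() in Node or the browser console:

```javascript
// Encode the one-line IIFE from above for safe use in a URL
const source = '(()=>{alert("Hello, World!");})();'
const encoded = encodeURIComponent(source)

console.log(encoded)
// → (()%3D%3E%7Balert(%22Hello%2C%20World!%22)%3B%7D)()%3B
```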
```
javascript:(()%3D%3E%7Balert(%22Hello%2C%20World!%22)%3B%7D)()%3B
```

Installing a Bookmarklet

Finally, we must add it to our browser as a bookmarklet. As you might expect, this is extremely dependent on the browser you’re using.
In Safari on macOS, the easiest way is to bookmark a webpage and then edit that bookmark into a bookmarklet:
In Firefox on desktop, the easiest way is to secondary click on the bookmark toolbar and then “Add Bookmark…”:
In Chrome on desktop, the easiest way is to secondary click on the bookmark toolbar and then “Add page…”:
Many mobile browsers also allow the creation and usage of bookmarks. This can be especially valuable, as browser dev tools are often unavailable on mobile.
CSS BookmarkletsYou’ve no doubt been looking at the word “JavaScript” above with a look of disdain. This is CSS-Tricks after all. Fear not, because we can make bookmarklets that apply CSS to our page in a plethora of ways.
My personal favorite method from an authoring perspective is to create a <style> element with my chosen content:
```js
javascript: (() => {
  var style = document.createElement("style");
  style.innerHTML = "body{background:#000;color:rebeccapurple}";
  document.head.appendChild(style);
})();
```

The much more graceful approach is to use the CSSStyleSheet interface. This approach allows for incremental updates and lets you directly access the CSS Object Model (CSSOM) to read selectors, modify existing properties, remove or reorder rules, and inspect computed structure. The browser also validates values input this way, which helps prevent you from inputting broken CSS. It is more complex but also gives you greater control.
```js
javascript: (() => {
  const sheet = new CSSStyleSheet();
  document.adoptedStyleSheets = [...document.adoptedStyleSheets, sheet];
  sheet.insertRule("body { border: 5px solid rebeccapurple !important; }", 0);
  sheet.insertRule("img { filter: contrast(10); }", 1);
})();
```

As we’re writing CSS for general usage across whatever page we wish to use our bookmarklet on, it is important to remain aware that we may run into issues with specificity or conflicts with the page’s existing stylesheets. Using !important is usually considered a bad code smell, but in the context of overriding unknown existing styles, it is a reasonable way to address our needs.
Limitations

Unfortunately, there are a few roadblocks that can hinder our usage of bookmarklets. The most pervasive are Content Security Policies (CSP). A CSP is a security feature that attempts to prevent malicious actions, such as cross-site scripting attacks, by allowing websites to regulate what can be loaded. You wouldn’t want to allow arbitrary scripts to run on your bank’s website, for instance. A bookmarklet that relies on cross-origin requests (requests from outside the current website) is very frequently blocked. For this reason, a bookmarklet should ideally be self-contained rather than reliant on anything external. If you suspect a bookmarklet is being blocked by a website’s security policies, you can check the console in your browser’s developer tools for an error.
As bookmarklets are just URLs, there isn’t a strict length limit in the specification. In practice, browsers do impose limits, though they’re higher than you’ll encounter in most cases. In my own testing (which may vary by version and platform), here are the upper limits I found: the largest bookmarklet I could create in both Firefox and Safari was 65,536 bytes. Firefox wouldn’t let me create a bookmarklet of any greater length, and Safari would let me create one, but it would do nothing when triggered. The largest bookmarklet I could create in Chrome was 9,999,999 characters long, and I started having issues interacting with the textbox after that point. If you need something longer, you might consider loading a script from an external location, keeping in mind the aforementioned CSP limitations:
```js
javascript:(() => {
  var script = document.createElement('script');
  script.src = 'https://example.com/bookmarklet-script.js';
  document.body.appendChild(script);
})();
```

Otherwise, you might consider a userscript tool like Tampermonkey, or, for something more advanced, creating your own browser extension. Another option is creating a snippet in your browser developer tools. Bookmarklets are best for small snippets.
Cool Bookmarklets

Now that you’ve got a gauge on what bookmarklets are and, to an extent, what they’re capable of, we can take a look at some useful ones. However, before we do, I wish to stress that you should be careful running bookmarklets you find online — they’re code written by someone else. As always, you should be wary, cautious, and discerning. People can and have written malicious bookmarklets that steal account credentials or worse.
For this reason, if you paste code starting with javascript: into the address bar, browsers automatically strip the javascript: prefix to prevent people from unwittingly triggering bookmarklets. You’ll need to reintroduce the prefix. To get around the javascript: stripping, bookmarklets are often distributed as links on a page, which you’re expected to drag and drop into your bookmarks.
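For instance, the “Hello, World!” bookmarklet from earlier could be offered as a draggable link like this:

```html
<!-- Drag me to your bookmarks bar to install -->
<a href="javascript:(()%3D%3E%7Balert(%22Hello%2C%20World!%22)%3B%7D)()%3B">
  Hello, World! bookmarklet
</a>
```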
Specific bookmarklets have been discussed on CSS-Tricks before. Given the evolution of browsers and the web platform, much of it is now obsolete, but some more contemporary articles include:
- 6 Useful Bookmarklets to Boost Web Development by Daniel Schwarz.
- Using the CSS Me Not Bookmarklet to See (and Disable) CSS Files by Chris Coyier.
Be sure to check out the comments of those posts, for they’re packed with countless great bookmarklets from the community. Speaking of bookmarklets from the community:
- Adrian Roselli has a fantastic collection of “CSS Bookmarklets for Testing and Fixing.”
- Stuart Robson put together “A Few Useful Web Development Bookmarklets.”
- Ian Lloyd has a selection of bookmarklets for performing accessibility audits.
If you’ve got any golden bookmarklets that you find valuable, be sure to share them in the comments.
A Complete Guide to Bookmarklets originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Event Feature: Type Drives Commerce
Read the book, Typographic Firsts
In global branding and design, typography drives consumer perception and commercial success. This idea sits at the heart of the Type Directors Club (TDC) Type Drives Commerce conference on March 13, at Fordham University, Lincoln Center in New York City. As part of the world’s leading typography organization, the conference is a curated exploration […]
The post Event Feature: Type Drives Commerce appeared first on I Love Typography Ltd.
Distinguishing “Components” and “Utilities” in Tailwind
Here’s a really quick tip. You can think of Tailwind utilities as components — because you can literally make a card “component” out of Tailwind utilities.
```css
@utility card {
  border: 1px solid black;
  padding: 1rlh;
}
```

```html
<div class="card"> ... </div>
```

This blurs the line between “Components” and “Utilities,” so we need to better define those terms.
The Great Divide — and The Great Unification

CSS developers often define Components and Utilities like this:
- Component = A group of styles
- Utility = A single rule
This collective thinking emerged from terminology we’ve accumulated over many years. Unfortunately, it’s not really the right terminology.
So, let’s take a step back and consider the actual meaning behind these words.
Component means: A thing that’s a part of a larger whole.
Utility means: It’s useful.
So…
- Utilities are Components because they’re still part of a larger whole.
- Components are Utilities because they’re useful.
The division between Components and Utilities is really more of a marketing effort designed to sell those utility frameworks — nothing more than that.
It. Really. Doesn’t. Matter.
The meaningful divide?

Perhaps the only meaningful divide between Components and Utilities (in the way they’re commonly defined so far) is that we often want to overwrite component styles.
It kinda maps this way:
- Components: Groups of styles
- Utilities: Styles used to overwrite component styles.
Personally, I think that’s a very narrow way to define something that actually means “useful.”
Just overwrite the dang style

Tailwind provides us with an incredible feature that allows us to overwrite component styles. To use this feature, you would have to:
- Write your component styles in a components layer.
- Overwrite the styles via a Tailwind utility.
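Spelled out, those two steps look something like this (the card styles and the blue border utility are purely illustrative):

```css
/* Step 1: component styles go in Tailwind's components layer,
   which sits below the utilities layer in the cascade */
@layer components {
  .card {
    border: 1px solid black;
    padding: 1rlh;
  }
}
```

```html
<!-- Step 2: a utility class in the markup now wins without !important -->
<div class="card border-blue-500"> ... </div>
```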
But this is a tedious way of doing things. Imagine writing @layer components in all of your component files. There are two problems with that:
- You lose the ability to use Tailwind utilities as components
- You gotta litter your files with many @layer components declarations — which adds an extra level of indentation and makes the whole CSS a little more difficult to read.
There’s a better way of doing this — we can switch up the way we use CSS layers by writing utilities as components.
```css
@utility card {
  padding: 1rlh;
  border: 1px solid black;
}
```

Then, we can overwrite styles with another utility using Tailwind’s !important modifier directly in the HTML:
```html
<div class="card !border-blue-500"> ... </div>
```

I put together an example over at the Tailwind Playground.
Unorthodox Tailwind

This article comes straight from my course, Unorthodox Tailwind, where you’ll learn to use CSS and Tailwind in a synergistic way. If you liked this, there’s a lot more inside: practical ways to think about and use Tailwind + CSS that you won’t find in tutorials or docs.
Check it out

Distinguishing “Components” and “Utilities” in Tailwind originally published on CSS-Tricks, which is part of the DigitalOcean family.