I drive a Toyota that is nearly old enough to run for US Senator. Every control in the car is visible, clearly labeled, and distinct to the touch - at all times. Their operation isn't impeded by routine activity or maintenance (e.g. a battery change).
Because it can be trivially duplicated, this is minimally capable engineering. Yet automakers everywhere lack even this level of competence. By reasonable measure, they are poor at their job.
YouTuber/engineer William Osman had a great rant some time back when he bought a new microwave and it came with a ton of buttons, his argument being that a microwave only really needs one (and ideally it's a dial instead of a button).
My previous one lasted more than 20 years, from when my parents bought it for me when I went to study until some time in my 40s. It was still functional, but its dial had become loose and it didn't look that great anymore.
The one I bought after that follows the new pattern: it has buttons up the wazoo, and who even knows what they do? To be honest, I just need one power setting with a timer and maybe a defrost option.
I'm sympathetic, but think it's a disservice to the designers to present it like that:
> Every control in the car is visible
No. And that would be horrible.
Every control _critically needed while driving_ is visible and accessible. Controls that matter less can be smaller and more convoluted, or straight hidden.
The levers to adjust seat height and position are hidden while still accessible. The latch to open the car hood can (should?) be less accessible and can be harder to find.
There are a myriad of subtle and opinionated choices to make the interface efficient. There's nothing trivial or really "simple" about that design process, and IMHO brushing over that is part of what leads us to the current situation where car makers just ignore these considerations.
The older designs weren't perfect, but they generally respected that you might need to adjust something without thinking too hard or taking your eyes off the road.
I disagree. I only want minimalist functionality, and therefore it's reasonable to have ALL controls always present and physical.
Someone needs to have the courage to say no to features that will get people killed.
A simple gun doesn't jam in the heat of battle.
My 1989 Toyota Corolla has manual windows and that is great.
IMHO we'd need to ban anything fancier than a bare bone golf cart if we're following the principles you're describing. Not that I'd disagree with that either, I genuinely think it would have a positive impact on cities, and even most rural towns; especially as the population is growing older in so many places.
It allows UI designers to add nearly endless settings and controls where they were before limited by dash space. It's similar to how everything having flash for firmware allows shipping buggy products that they justify because they can always fix it with a firmware update.
The real cost saving is in the touch panel being a single component. It eliminates the need to optimize the UI in physical space, and decouples UI design and testing from the rest of the car design and manufacturing process. As a bonus, both hardware and software for the panel can then be outsourced to the lowest bidder or bought as a bottom-of-the-barrel COTS package.
Is this true given all the chips modern cars have, all the programming that must be done, and all the complex testing and QA required for the multitude of extra functions?
I would gladly gladly keep my AC, heat, hazards, blinkers, wipers, maybe a few other buttons and that's it. I don't need back cameras, lane assist, etc.
I find it hard to believe it's cheaper to have all the cameras, chips, and other digital affordances rather than a small number of analog buttons and functions.
Both lane assist and backup cameras are mandatory safety systems for new cars in the EU. Same goes for things like tired driver detection and other stuff that was considered opulent luxury ten years ago.
With the land tanks we call SUVs today, I can imagine it wasn't hard for politicians to decide that mirrors are no longer enough to navigate a car backwards.
Still, you don't need touch screens. Lane assist can be a little indicator on a dashboard with a toggle somewhere if you want to turn it off, it doesn't need a menu. A backup camera can be a screen tucked away in the dash that's off unless you've put your car in reverse. We may need processing to happen somewhere, but it doesn't need to happen in a media console with a touch screen.
You can actually put a backup camera in the rearview mirror. Back before rollover protection cars had quite amazing visibility. Best vehicle visibility I've had in the past 5 years was actually a 1997 F-150. You'd think it's a big truck, but you could more or less see all around you, and it didn't have that crazy high front hood either.
> I would gladly gladly keep my AC, heat, hazards, blinkers, wipers, maybe a few other buttons and that's it. I don't need back cameras, lane assist, etc.
I would pay more for decent physical switches and knobs, but I would give up AC before the backup camera. Getting this was life changing. I also wish all cars had some kind of blind spot monitoring.
I have always thought they should put the display for the backup camera behind the driver and facing the front of the car, so that it would be easily visible to a driver looking out the rear and rear-side windows while backing up.
You're not thinking about the manufacturing part. Buttons and knobs have to get assembled and physically put into every car. Software just needs to be written once.
> I find it hard to believe it's cheaper to have all the cameras, chips, and other digital affordances rather than a small number of analog buttons and functions.
You should check how SW and HW are tested in the car.
A typical requirement: the SW must drive a motor if the voltage reaches 5 V. A typical SW test: increase the voltage to 5 V, see that the motor moves.
Now what happens at 20 V is left as an exercise for the user.
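To make that concrete, here's a minimal sketch (plain Python, invented function name) of the difference between testing only what the requirement states and testing the boundaries around it:

```python
# Hypothetical controller logic: the requirement says "drive the motor once the voltage reaches 5 V".
def motor_should_run(voltage_v: float) -> bool:
    return voltage_v >= 5.0

def test_requirement_literally():
    # The "typical SW test": check exactly what the requirement says, nothing more.
    assert motor_should_run(5.0)

def test_what_the_requirement_never_mentions():
    assert not motor_should_run(4.9)   # just below the threshold
    # 20 V? -12 V? The requirement says nothing, so there's nothing to assert against -
    # which is exactly how "what happens at 20 V" ends up as an exercise for the user.
```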
One of the reasons I purchased a (newer but used Mazda) was because it still has buttons and knobs right next to the driver's right hand in the center console. I can operate parts of the car without even having to look.
(another reason was because it still has a geared transmission instead of a CVT, but that's a separate discussion)
It's a "look ma, I can change the air conditioning controls without looking" moment.
A friend got a Tesla on lease and it was quite cheap, 250/month. I've been driven in that car a few times and was able to study the driver using the controls, and it's hideously badly designed: the driver has to take their eyes off the road and deep-dive into menus. Plus that slapped-on tablet in the middle is busy to look at, tiring and distracting. The 3D view of other cars/pedestrians is a gimmick, or at least it looks like one to me. Does anyone actually like that? Perhaps I'm outdated or something, but I wouldn't consider such a bad UX in a car.
The 3D view is a marketing gimmick and maybe something to show off to your passengers. You've got a massive screen, so you can't just leave it empty, or the owners would realize it's a gimmick.
In practice many drivers seem to be dealing fine with the touch screen because they've stopped paying attention to the road, trusting their car to keep distance and pay attention for them. Plus, most of the touch screen controls aren't strictly necessary while driving, they mostly control luxury features that you could set up after pulling over.
This implies the cost is consequential - that building with tactile controls would raise the (already considerable) purchase price enough to impact sales.
If tactile controls were a meaningful cost difference, then budget cars with tactile controls shouldn't be common - in any market.
Are controls uniquely important, though? There are hundreds of things in a car that could be made better (more durable, longer lasting, better looking) for just $10 to $100 extra a piece. But it adds up.
It's not just cost, though. The reality is that consumers like the futuristic look, in theory (i.e., at the time of the purchase). Knobs look dated. It's the same reason why ridiculously glossy laptop screens were commonplace. They weren't cheaper to make, they just looked cool.
Thank you; this ridiculous non-argument also pollutes discussion on GUI/UX. "Skeuomorphism looks outdated"--no, skeuomorphism that looks like old UIs looks dated, by definition, but that does not mean it is the only way to design tactile UIs.
It is the job (and in my opinion, an exciting challenge) for the UI designers to come up with a modern looking tactile design based on the principles of skeuomorphism, possibly amalgamated with the results of newer HCI research.
This is often repeated but I don't believe it for a second. I have a 90s vehicle which is based on 60s/70s technology. A replacement switch for a fog light is like £10 on eBay, and I know I am not paying anywhere near cost, i.e. I am being ripped off.
You think you’re being ripped off for a £10 fog light switch on a ~30 year old car?
That sounds like an incredible bargain to me.
Why do you think you should pay near cost? What’s the incentive for all the people who had to make, test, box, pack, move, finance, unpack, inventory, pick, box, label, and send it to you? I can’t imagine the price between £10 and free that you’d think wasn’t a rip-off for a part that probably sells well under a 100 units per year worldwide.
I'm pretty sure that simple switch is something directly in the circuit for the fog light, and there is a dedicated wire between the fog light, the switch, and the fuse box. And if its an old Jag, those wires flake out and have to be redone at great expense.
Compare this to the databus that is used in today's cars, it really isn't even a fair comparison on cost (you don't have to have 100 wires running through different places in your car, just one bus to 100 things and signal is separated from power).
> I'm pretty sure that simple switch is something directly in the circuit for the fog light, and there is a dedicated wire between the fog light, the switch, and the fuse box. And if its an old Jag, those wires flake out and have to be redone at great expense.
I don't really want to get into a big debate about this as I haven't worked on Jags, but I don't believe that replacing parts of the loom would be that expensive. Remaking an entire loom, I will admit, would be expensive, as that would be a custom job with a lot of labour.
> Compare this to the databus that is used in today's cars, it really isn't even a fair comparison on cost (you don't have to have 100 wires running through different places in your car, just one bus to 100 things and signal is separated from power).
Ok fine. But the discussion was button vs touch screens and there is nothing preventing buttons being used with the newer databus design. I am pretty sure older BMWs, Mercs etc worked this way.
They can be used, they just need more complexity than a simple switch that completes a circuit: they now have tiny CPUs so they can signal the bus correctly. The switch must broadcast "turn thing on" when it is set to on, and "turn thing off" when it is set to off, all with whatever serial protocol is being used (including back-off and retry, etc.). So your input devices need to be little computers so that you can use one bus for everything - now you can see where one touch screen begins to save money.
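A toy sketch of that idea (plain Python, made-up frame format and IDs, no real CAN/LIN stack): the "switch" doesn't complete a circuit, it broadcasts messages and backs off and retries when the bus is busy.

```python
import random, time

class Bus:
    """Stand-in for a shared serial bus (CAN/LIN-like). Randomly 'busy' to force retries."""
    def send(self, frame: bytes) -> bool:
        if random.random() < 0.2:          # pretend another node won arbitration this time
            return False
        print("on bus:", frame.hex())
        return True

class FogLightSwitch:
    """The 'tiny computer' behind a dash switch: turns a physical toggle into bus messages."""
    FRAME_ID = 0x21                        # made-up identifier for the fog-light function

    def __init__(self, bus: Bus):
        self.bus = bus

    def set(self, on: bool, retries: int = 5) -> None:
        frame = bytes([self.FRAME_ID, 0x01 if on else 0x00])
        for attempt in range(retries):
            if self.bus.send(frame):
                return
            time.sleep(0.001 * (2 ** attempt))   # back off and retry
        raise RuntimeError("bus never accepted the frame")

switch = FogLightSwitch(Bus())
switch.set(True)    # driver flips the switch on
switch.set(False)   # and off again
```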
I don't believe what you are describing is necessary. I am pretty sure you could have a module where the switches are wired normally into something and that communicates with the main bus. I am pretty sure this is how a lot of cars already work from watching people work on more modern vehicles.
In any event, I've never heard a good explanation of why I need all of this to turn the lights on or off in a car, when much simpler systems worked perfectly fine.
Many of the low-speed switches are connected to a single controller that then interfaces over LIN or CAN to the car.
Reducing the copper content of cars and reducing the size of the wiring bundles that have to pass through grommets to doors, in body channels, etc. was the main driver. Offering greater interconnectedness and (eventually) reliability was a nice side effect.
It used to be a pain in the ass to get the parking lights to flash some kind of feedback for remote locking, remote start, etc. Now, it’s two signals on the CAN bus.
> Offering greater interconnectedness and (eventually) reliability was a nice side effect.
I am not sure about that. You still suffer from electronic problems due to corrosion around the plugs, duff sockets and dodgy earths as the vehicle ages.
Depending on age, it's more likely that the physical switch drives an electric relay and the relay switches the actual fog lamp current, which could be 3-5 amps per lamp, letting the manufacturer use a small-gauge trigger wire to run to/from the dash and thicker wire only for the shorter high-current path.
Not just that, wiring it into the single control bus is easier, otherwise you are stuck doing an analog-to-digital conversion anyway. Even in new cars that have separate controls, these are mostly capacitive buttons or dials that simply send a fixed signal on the bus (so your dial will go all the way around, because it isn't actually the single volume control on the radio, but just a "turn the volume up or down" control).
Most of the cost savings is in having a single bus to wire up through the car, then everything needs a little computer in it to send on that bus...so a screen wins out.
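The endless dial is the same story in miniature - a sketch (invented message names) of a knob that only ever reports relative ticks, because the real volume value lives in the head unit:

```python
class VolumeDial:
    """A modern 'endless' dial: no absolute position, it only emits +1/-1 ticks onto the bus."""
    def __init__(self, send):
        self.send = send          # callback standing in for 'put a frame on the bus'

    def turned(self, clicks: int) -> None:
        step = b"VOL+" if clicks > 0 else b"VOL-"
        for _ in range(abs(clicks)):
            self.send(step)       # the head unit, not the knob, owns the actual volume value

dial = VolumeDial(send=lambda msg: print(msg.decode()))
dial.turned(+3)   # three clicks clockwise -> three "volume up" messages
dial.turned(-1)
```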
Most of the seeming analog controls on cars switched to digital in the 1990s. The digital control bus saved several hundred dollars per car. It still looked analog until around 2010 when touch screen started taking over.
I'm not sure if this is actually true for the volumes produced by the big carmakers. You'd very quickly get to volumes where the material cost is the largest component.
The good news over here is that the European NCAP is now mandating they put a bunch of those physical controls back if they want a 5-star safety rating. Would not be sorry to say good bye to the awful touchscreen UI in my car...
Don't forget the headlight regulations desperately need an update. RAC survey said 89% think some are too bright, 30% think *most* are too bright. Insane.
I had similar discussions with my father who started his career in the 80s as an engineer, and has been a CEO for the last ~15 years. The discussion was a bit broader, about engineering and quality/usability in everything.
His perspective was that companies were "run" by engineers first, then a few decades later by managers, and then by marketing.
Who knows what's next, maybe nothing (as in all decisions are accidentally made by AI because everyone at all levels just asks AI). Could be better than our current marketing-driven universe.
I commented on here about the surge in US car mfg recruiters contacting me about working on their new car systems. The HN opinion seemed to be that they are complete disasters and to stay away if I value my sanity.
While I agree with your sentiment, designing and manufacturing custom molds for each knob and function (including premium versions) instead of just slapping a screen on the dash does have a cost.
Because most companies are ruthless penny-pinchers and over-optimizers. They're willing to burn dollars to save pennies. The reason is that they're trading things they can measure for things they can't.
Basically, if you remove the knobs you can save, say, 10 dollars on every vehicle. In return, you have made your car less attractive and will lose a small number of sales. You will never, ever be able to quantify that loss in sales. So, on paper, you've saved money for "free".
Typically, opportunity cost is impossible or close to impossible to measure. What these companies think they are doing is minimizing cost. Often, they are just maximizing opportunity cost of various decisions. Everyone is trying to subtly cut quality over time.
Going from A quality to B quality is pretty safe, it's likely close to zero consumers will notice. But then you say "well we went from A to B and nobody noticed, so nobody will notice B to C!". So you do it again. Then over and over. And, eventually, you go from a brand known for quality to cheap bargain-bin garbage. And it happened so slowly that leadership is left scratching their heads. Sometimes the company then implodes spontaneously, other times it slowly rots and loses to competitors. It's so common it feels almost inevitable.
Really, most companies don't have to do much to stay successful. For a lot of markets, they just have to keep doing what they're doing. Ah, but the taste of cost-cutting is much too seductive. They do not understand what they are risking.
> Basically, if you remove the knobs you can save, say, 10 dollars on every vehicle. In return, you have made your car less attractive and will lose a small number of sales.
Is there evidence that fancy-looking screens don't show better in the showroom than legacy-looking knobs and buttons? While in use the knobs may be better, I am not sure they sell better.
No, there isn't. Like I said, the opportunity cost is invisible and impossible to measure.
All I know is personal anecdotes from people I talk to. I know a couple people who have a Mercedes EQS - they've all said the same thing: the big screen is cool for a little bit, then it's just annoying.
I think it will take a generation or two of cars before some consumers start holding back on purchases because of this. For now, they don't know better. But I'm sure after owning a car and being pissed off at it, they'll think a little bit harder on their next purchase. I think consumers are highly impacted by these types of things - small cuts that aren't bad, per se, but are annoying. Consumers are emotional, they hold grudges, they get pissed off.
I sort of feel the same way about fix-a-flat kits. Once people actually have the experience of trying to use a fix-a-flat kit, they'll start asking car salesmen if the car comes with a spare...
The problem isn't just that. These screens are actual safety hazards. Whatever you display in a showroom doesn't justify this: https://grumpy.website/1665
It was always expensive. Car makers need their cars to last (the used market is important, since few can afford a new car they scrap in 3 years), so they are not buying the cheap switches. A Cherry MX will run near a dollar each in quantity. Then you put the cap on it plus wires, and it adds up fast per switch. A touch screen is $75 in quantity and replaces many switches.
Because cars have long design times, and a big touchscreen has generally been seen as more premium than a bunch of push buttons and dials. I think the tide has turned somewhat, but it's going to take some time.
> designing and manufacturing custom molds for each knob and function ... dash does have a cost.
Manufacturing car components already involves designing and custom molds, does it not? Compared to the final purchase price, the cost of adding knobs to that stack seems inconsequential.
Yes, but the touch screen is one large mold, while the buttons need a custom mold for each one. The touch screen also has large flat areas, which reduces cost since it avoids the extra expense of rounded shapes.
Power abhors a vacuum. Choosing to not change is viewed as failure to innovate, even if the design suffers. Planned obsolescence is as old as the concept of yearly production models themselves, and likely older, going back to replacement parts manufacturing and standardized production overtaking piecework.
It’s a race to the bottom to be the least enshittified versus your market competitors. Usability takes a backseat to porcine beauty productization.
I think an indicator that something is going wrong in UI design is what I'd call the "the food is in the fridge" anti pattern that seems to pop up lately.
Essentially it's UI text in random places telling you what steps you should take to activate some other feature, instead of - you know - just providing a button to activate that feature.
A variant of this is buttons or menu items that don't do anything else than move focus onto another button, or open a menu in a different location, so you can then click on that one.
Increasingly seeing this in Microsoft products, especially in VS Code.
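A contrived sketch of the difference, with invented names - the point is just who ends up doing the navigation:

```python
# The "food is in the fridge" anti-pattern: text that describes where the feature lives.
def render_hint():
    return {"text": "To enable autosave, go to Settings > Files > Autosave and switch it on."}

# The alternative: expose the control right here and let it do the work.
def render_button(settings: dict):
    return {
        "label": "Enable autosave",
        "on_click": lambda: settings.update(autosave=True),  # the button performs the action itself
    }
```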
I get why you would hide interface elements to use the screen real estate for something else.
I have no idea why some interfaces hide elements and leave the space they'd taken up unused.
IntelliJ does this, for example, with the icons above the project tree. There is this little target disc that moves the selection in the project tree to the file currently open in the active editor tab. You have to know the secret spot on the screen where it is hidden and if you move your mouse pointer to the void there, it magically appears.
Why? What is the rationale behind going out of your way to implement something like this?
Some people complain about "visual clutter". Too many stimuli in the field of view assault their attention, and ruin their concentration. Such people want everything that's not in the focus of attention be gone, or at least be inconspicuous.
Some people are like airliner pilots. They enjoy every indicator to be readily visible, and every control to be easily within reach. They can effortlessly switch their focus.
Of course, there is a full range between these extremes.
The default IDE configuration has to do a balancing act, trying to appeal to very different tastes. It's inevitably a compromise.
Some tools have explicit switches: "no distractions mode", "expert mode", etc, which offer pre-configured levels of detail.
This is why we used to have customizable toolbars, and relevant actions still accessible via context menu and/or main menu, where the respective keyboard shortcuts were also listed. No need to compromise. Just make it customizable using a consistent framework.
Intellij on Windows also buries the top menus into a hamburger icon and leaves the entire area they occupied empty! Thankfully there is an option to reverse it deep in the settings, but having it be the default is absolutely baffling.
Microsoft pulls the same BS. Look at Edge. Absolute mess. No menu. No title bar. What application am I even using?
This stupidity seems to have spread across Windows. No title bars or menus... now you can't tell what application a Window belongs to.
And you can't even bring all of an application's windows to the foreground... Microsoft makes you hover of it in the task bar and choose between indiscernible thumbnails, one at a time. WTF? If you have two Explorer windows open to copy stuff, then switch to other apps to work during the copy... you can't give focus back to Explorer and see the two windows again. You have to hover, click on a thumbnail. Now go back and hover, and click on a thumbnail... hopefully not the same one, because of course you can't tell WTF the difference between two lists of files is in a thumbnail.
And Word... the Word UI is now a clinic on abject usability failure. They have a menu bar... except WAIT! Microsoft and some users claim that those are TABS... except that it's just a row of words, looking exactly like a menu.
So now there's NO menu and no actual tabs... just a row of words. And if you go under the File "menu" (yes, File), there are a bunch of VIEW settings. And in there you can add and remove these so-called "tabs," and when you do remove one, the functionality disappears from the entire application. You're not just customizing the toolbar; you're actually disabling entire swaths of features from the application.
It's an absolute shitshow of grotesque incompetence, in a once-great product. No amount of derision for this steaming pile is too much.
> No title bars or menus... now you can't tell what application a Window belongs to.
I hate when applications stuff other controls (like browser tabs) into the title bar --- leaving you with no place to grab and move the window.
The irony is that we had title bars when monitors were only 640x480, yet now that they have multiplied many times in resolution, and become much bigger, UIs are somehow using the excuse of "saving space" to remove title bars and introducing even more useless whitespace.
We don't do desktop computing like we did then. Most of what was separate applications then are now done in-browser: it's like running a virtual machine inside your OS.
I don't need to know that what I'm using is Edge/Chrome/Firefox any more than I need to know that what I'm using is Windows/etc.
Amen. And then there's the idiotic peek-a-boo UI that hides controls until you accidentally roll over them with the cursor... not saving any space at all.
This isn't just a Windows thing. Look at Gnome for another example. macOS of late also likes to take over the title bar for random reasons, although there at least the menu bar is still present regardless.
I've always considered the Mac's shared menu bar a GUI 1.0 mistake that should have been fixed in the transition to OS X. Forcing all applications to share a single menu that's glued to the top of the screen, and doesn't switch back to the previous application when you minimize the one you're working with, is dumb.
Windows and Unix GUIs had it right: Put an application's menu where it belongs, on the application's main frame.
But now on Windows... NO menu? Oh wait, no... partial menus buried under hamburger buttons in arbitrary locations, and then others buried under other buttons.
...The Mac menu bar is what it is for a very good reason. Being at the top of the screen makes it an infinitely-tall target.
All you have to do to get to it is move your mouse up until you can't move it up any more.
This remains a very valuable aspect to it no matter what changes in the vogue of UIs have come and gone since.
The fact that you think that you've "minimized the application" when you minimized a window just shows that you are operating on a different (not better, not worse, just different) philosophy of how applications work than the macOS designers are.
This argument never made much sense to me, although I do subscribe to Fitts' Law. With desktop monitor sizes since 20+ years ago, the distance you have to travel, together with the visual disconnect between application and the menu bar, negates the easier targetability. And with smaller screen sizes, you would generally maximize the application window anyway, resulting in the same targetability.
The actual historical rationale for the top menu bar was different, as explained by Bill Atkinson in this video: https://news.ycombinator.com/item?id=44338182. The problem was that due to the small screen size, non-maximized windows often weren't wide enough to show all menus, and there often wasn't enough space vertically below the window's menu bar to show all menu items. That's why they moved the menus to the top of the screen, so that there always was enough space, and despite the drawback, as Atkinson notes, of having to move the mouse all the way to the top. This drawback was significant enough that it made them implement mouse pointer acceleration to compensate.
So targetability wasn't the motivation at all, that is a retconned explanation. And the actual motivation doesn't apply anymore on today's large and high-resolution screens.
> With desktop monitor sizes since 20+ years ago, the distance you have to travel, together with the visual disconnect between application and the menu bar, negates the easier targetability.
Try it on a Mac; the way its mouse acceleration works makes it really, really easy to just flick either a mouse or a finger on a trackpad and get all the way across the screen.
I'm not saying it's necessarily harder to reach a menu bar at the top of the screen, given suitable mouse acceleration. But you also have to move the mouse pointer back to whatever you are doing in the application window, and moving to the top menu bar is not that much (if at all) easier to really justify the cognitive and visual separation. If that were the case, then as many application controls as possible should be moved to the border of the screen.
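For reference, the quantity being argued about - a sketch using the Shannon form of Fitts' law, with placeholder constants rather than measured ones:

```python
import math

def movement_time(distance_px: float, target_size_px: float,
                  a: float = 0.1, b: float = 0.15) -> float:
    """Shannon formulation of Fitts' law: MT = a + b * log2(D/W + 1).
    a and b depend on the device and the user; the values here are placeholders."""
    return a + b * math.log2(distance_px / target_size_px + 1)

# A 20 px menu item floating mid-screen vs. the same item pinned to the screen edge,
# where overshoot is impossible and the effective target size along the motion axis balloons.
print(movement_time(600, 20))     # floating target
print(movement_time(600, 2000))   # edge-pinned target ("infinitely tall" in practice)
# Whether that saving outweighs the trip back to the work area is the debate above.
```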
For your complaints about the taskbar: yes, I too find it incredibly annoying that they compress all the application windows into tiny thumbnails, but there is a setting to expand thumbnails to include titles and separate them if there are multiple windows, which is what I use. I don't currently have access to my Windows machine or I'd help you out with the exact setting, but it's there somewhere in the "taskbar settings".
> I get why you would hide interface elements to use the screen real estate for something else.
Except that screens on phones, tablets, laptops and desktops are larger than ever. Consider the original Macintosh from 1984 – large, visible controls took up a significant portion of its 9" display (smaller than a 10" iPad, monochrome, and low resolution.) Arguably this was partially due to users being unfamiliar with graphical interfaces, but Apple still chose to sacrifice precious and very limited resources (screen real estate, compute, memory, etc.) on a tiny, drastically underpowered (by modern standards) system in the 1980s for interface clarity, visibility, and discoverability. And once displays got larger the real estate costs became negligible.
An IDE, and the browser example given below, are tools I'll spend thousands of hours using in my life. The discoverability is only important for a small percentage of that, while viewing the content is important for all of it.
This is exactly when I will have the 'knowledge in the head'.
I agree, I know those buttons are there and how to activate them, but I still occasionally stare blankly at the screen wondering where the buttons are before remembering I need to hover them
I think the article overlooks that it is not really an accident that apps and operating systems are hiding all their user interface affordances. It's an antipattern to create lock in, and it tends to occur once a piece of software has reached what they consider saturation point in terms of growth where keeping existing users in is more important than attracting new ones. It so turns out that the vast majority of software we use is created by companies in exactly that position - Google, Apple, Microsoft, Meta etc.
It might seem counter intuitive that hiding your interface stops your users leaving. But it does it because it changes your basis of assumptions about what a device is and your relationship with it. It's not something you "use", but something you "know". They want you to feel inherently linked to it at an intuitive level such that leaving their ecosystem is like losing a part of yourself. Once you've been through the experience of discovering "wow, you have to swipe up from a corner in a totally unpredictable way to do an essential task on a phone", and you build into your world of assumptions that this is how phones are, the thought of moving to a new type of phone and learning all that again is terrifying. It's no surprise at all that all the major software vendors are doing this.
I think you picked a hypothesis and assumed it was true and ran with it.
Consider that all the following are true (despite their contradictions):
- "Bloated busy interface" is a common complaint of some of Google, Apple, Microsoft, and Meta. people here share a blank vscode canvas and complain about how busy the interface is compared to their 0-interface vim setup.
- flat design and minimalism are/were in fashion (have been for few years now).
- /r/unixporn and most linux people online who "rice" their linux distros do so by hiding all controls from apps because minimalism is in fashion
- Have you tried GNOME recently?
A minimal interface where most controls are hidden is a certain look that some people prefer. Plenty of people prefer to "hide the noise", and if they need something, they are perfectly capable of looking it up. It's not like digging in manuals is the only option.
If I had to pin most of this on anything I’d pick two:
- Dribbble-driven development, where the goal is to make apps look good in screenshots with little bearing to their practical usability
- The massive influx of designers from other disciplines (print, etc) into UI design, who are great at making things look nice but don’t carry many of the skills necessary to design effective UIs
Being a good UI designer is seeking out existing usability research, conducting new research to fill in the gaps, and understanding the limits of the target platform on top of having a good footing in the fundamentals. The role is part artist, part scientist, and part engineer. It’s knowing when to put ego aside and admit that the beautiful design you just came with isn’t usable enough to ship. It’s not just a sense for aesthetics and the ability to wield Photoshop or Figma or whatever well.
This is not what hiring selects for, though, and that’s reflected in the precipitous fall in quality of software design in the past ~15 years.
I agree with you it's very fashion driven and hence you see it in all kinds of places outside the core drivers of it. But my argument is, those fashions themselves are driven by the major players deciding to do this for less than honorable reasons.
I do think it's likely more passive than active. People at Google aren't deviously plotting to hide buttons from the user. But what is happening is that when these designs get reviewed, nobody is pushing back - when someone says "but how will the user know to do that?", it doesn't get listened to. Instead the people responsible are signing off on it saying, "it's OK, they will just learn that; once they get to know it, then it will be OK". It's all passive, but it's based on an implicit assumption that users are staying around, and optimising for the ones that do, making it harder for the ones that want to come and go or stop in temporarily.
Once three or four big companies start doing it, everybody else cargo cults it and before you know it, it looks like fashion and GNOME is doing it too.
Somehow in your theory you omit the fact that people can learn how to use a new interface? It’s not like you’re entitled to a UI that never adds functionality anymore, ever. Sure, vendors ought to provide onboarding tutorials and documentation and such, but using that material is on the user.
UIs tend to have a universality with how people structure their environments. Minimalism is super hot outside of software design too. Millennial Gray is a cliche for a reason. Frutiger Aero wasn't just limited to technology. JLo's debut single is a pretty good example of this aesthetic: https://www.youtube.com/watch?v=lYfkl-HXfuU
I think you picked a hypothesis and assumed it was true and ran with it.
The tone of your post and especially this phrase is inappropriate imo. The GP's comment is plausible. You're welcome to make a counter-argument, but you seem to be claiming without evidence that there was no thinking behind their post.
God, no. I switched to xfce when GNOME decided that they needed to compete with Unity by copying whatever it did, no matter how loudly their entire user base complained.
I see nonprofit OSS projects doing it too, and wonder if they're just trendchasing without thinking. Firefox's aggravating redesigns fall under this category, as does Gnome and the like.
It's a double edged sword though in that it can discourage users from trying their interface.
Apple's interface shits me because it's all from that one button, and I can never remember how to get to settings because I use that interface so infrequently, so Android feels more natural. I.e. Android has done its lock-in job, but Apple has done itself a disservice.
(Not entirely fair, I also dislike Apple for all the other same old argument reasons).
The other day I was locked out of my car - the key fob button wouldn't work. Why didn't I just use my key to get in? First, you need to know there is a hidden key inside the fob. Second, because there doesn't appear to be a keyhole on the car door, you also have to know that you need to disassemble a portion of the car door handle to expose the keyhole.
Hiding critical car controls is hostile engineering. In this, it doesn't stand out much in the modern car experience.
While this makes several cars a terrible choice for rentals, I do wish car owners would take maybe half an hour of their day after spending a couple thousand to read through the manual that came with their car. The manual doesn't just tell you how to change the radio station, it also contains a lot of safety information and instructions for how to act when something goes wrong.
How can I trust a driver to take things like safe maximum load into account when they don't even know they can open their car if their battery ever goes flat?
This also happened to me in a rental. We drove it off the lot to our hotel a half-hour away before we discovered the remote was busted, with all of our possessions locked inside.
I did know that there must be a physical key (unless Tesla?), and the only way I found the keyhole was because a previous renter had scratched the doorknob to shit trying to access the very same keyhole.
All of which you should know, and can be easily found with a quick google. The moment we got a car with no physical key my first question was “what’s the backup option and how does it work”.
Basic knowledge about the things you own isn’t hard. My god there is a lot of old man shakes fist at cloud in here.
This is such an Apple user take.
"Yes you can do that, but you're not supposed to so it's hidden behind so many menus that you can't find it except by accident and since I use it, I say sowwy to my phone every night before I go to sleep to make sure Apple doesn't get maddy mad at me"
The opposite take would be that there’s no need to shove something in the users face that they need less than once per year, but offer a more elaborate way to get there just in case.
How is a clear "key inside" label on the fob "shoving something in the user's face"? How is a visible keyhole, or at least one not buried behind a snap-off cover, "shoving something in the user's face"?
This is what happens when "designers" who are nothing more than artists take control of UI decisions. They want things to look "clean" at the expense of discoverability and forget that affordances make people learn.
Contrast this with something like an airplane cockpit, which while full of controls and assuming expert knowledge, still has them all labeled.
I still don't understand why desktop OSes now have mobile style taskbar icons that are twice as large as they need to be, grouped together so you need to hover to see which instance of what is what, and then click again to switch to the one you actually want if you can even figure out what it even is with just a thumbnail without any labels. All terminal windows look the fucking same!
Win NT-Vista style, aka the way web browsers show tabs with an icon + label is peak desktop UX for context switching and nobody can convince me otherwise. GNOME can't even render taskbars that way.
Most people coming into the workforce today have grown up on iOS and Android. To them, the phone is the default, the computer used to be what grownups use to do work. Watching them start using computers is very similar to those videos from the 80s and 90s of office workers using a computer for the first time.
The appification of UI is a necessary evil if you want people in their mid twenties or lower to use your OS. The world is moving to mobile-first, and UI is following suit, even in places it doesn't make sense.
Give a kid a UI from the 90s, styled after industrial control panels, and they'll be as confused as you are with touch screen designs. Back in the day, stereos used to provide radio buttons and sliders for tuning, but those devices aren't used anymore. I don't remember the last device I've used that had a physical toggle button, for instance.
UI is moving away from replicating the stereos from the 80s to replicating the electronics young people are actually using. That includes adding mobile paradigms in places that don't necessarily make sense, just like weird stereo controls were all over computers for no good reason.
If you prefer the traditional UX, you can set things up the way you want. Classic Shell will get you your NT-Vista task bar. Gnome Shell has a whole bunch of task bar options. The old approach may no longer be the default one, but it's still an option for those that want it.
Maybe you're right, but I mean I'm in my late twenties and I grew up on Win 95 and XP mainly, smartphones only started to become a thing in early high school. You'd probably have to look under like 16 to really find those who haven't ever seen an interface designed for the mouse.
> Classic Shell, Gnome Shell task bar options
Yeah mods, hacks, and extensions don't really count for either. The more time passes the more this nonsense becomes mandatory. Luckily KDE still exists for now and has it all native.
Next you’ll be complaining that the taps in your house don’t have a label telling you that they need to be twisted and in what direction.
Phones aren’t 747’s, and guess what every normal person that goes into an airplane cockpit who isn’t a pilot is so overwhelmed by all the controls they wouldn’t know what anything did.
Interface designers know what they’re doing. They know what’s intuitive and what isn’t, and they’ve refined down to an art how to contain a complicated feature set in a relatively simple form factor.
The irony of people here with no design training claiming that they could do a better job than any "so called designer" shows incredible levels of egotism and disrespect to a mature field of study.
Also demonstrably, people use their phones really quite well with very little training, that’s a modern miracle.
... and then they ignore it?
It triggers me when someone calls hidden swipe gestures intuitive. It's the opposite of affordance, which these designers should be familiar with if they are worth their salaries.
Very slightly unrelated, but this trend is one of the reasons I went Android after the iPhone removed the home button. I think it became meaningfully harder to explain interactions to older users in my family and just when they got the hang of "force touch" it also went away.
First thing I do on new Pixel phones is enable 3 button navigation, but lately that's also falling out of favor in UI terms, with apps assuming bottom navigation bar and not accounting for the larger spacing of 3 button nav and putting content or text behind it.
Similarly the disappearing menu items in common software.
Take a simple example: Open a read-only file in MS Word. There is no option to save? Where's it gone? Why can I edit but not save the file?
A much better user experience would be to enable and not hide the Save option. When the user tries to save, tell them "I cannot save this file because of blah" and then tell them what they can do to fix it.
I half agree. The save option should be disabled, since there is something very frustrating about enabling a control that cannot be used. However, there could be a label (or a warning button that displays such a label) explaining why the option is disabled.
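Something like this, sketched with invented names rather than any particular toolkit:

```python
def save_control_state(doc) -> dict:
    """Never hide Save; disable it and say why, so the user learns the rule
    instead of wondering where the option went."""
    if getattr(doc, "read_only", False):
        return {
            "enabled": False,
            "reason": "This file is read-only. Use Save As, or change its permissions, to keep your edits.",
        }
    return {"enabled": True, "reason": None}
```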
The Mac HIG specifies exactly this: don’t hide temporarily unavailable options, disable them. Disabling communicates to the user the relationships between data, state, etc and adds discoverability.
This has been the norm on every desktop. But lately I don't think app designers know what "HIG" even is. Everything is web (or tries real hard to look like it even when it's native apps...), which is to say, everything is broken.
I had the same story, which is why the last phone I got for my grandma was an iPhone SE (which still has the home button). This way, no matter where she ends up, there's this large and obvious thing that she can press to return back to the familiarity of the home screen.
I am firmly in the “key UI elements should be visible” camp. I also agree that Apple violates that rule occasionally.
However, I think they do a decent job at resisting it in general, and specifically I disagree that removing the home button constitutes hiding an UI element. I see it as a change in interaction, after which the gesture is no longer “press” but “swipe” and the UI element is not a button but edge of the screen itself. It is debatable whether it is intuitive or better in general, but I personally think it is rather similar to double-clicking an icon to launch an app, or right-clicking to invoke a context menu: neither have any visual cues, both are used all the time for some pretty key functions, but as soon as it becomes an intuition it does not add friction.
You may say Apple is way too liberal in forcing new intuitions like that, and I would agree in some cases (like address bar drag on Safari!), but would disagree in case of the home button (they went with it and they firmly stuck with it, and they kept around a model with the button for a few more years until 2025).
Regarding explaining the lack of home button: on iOS, there is an accessibility feature that puts on your screen a small draggable circle, which when pressed displays a configurable selection of shortcuts—with text labels—including the home button and a bunch of other pretty useful switches. Believe it or not, I know people who kept this circle around specifically when hardware home button was a thing, because they did not want to wear out the only thing they saw as a moving part!
>the gesture is no longer “press” but “swipe” and the UI element is not a button but edge of the screen itself.
Right, but while it's obvious to everyone that a button is a control, it's not obvious that an edge is a control. On top of that, swiping up from the bottom edge triggers two completely different actions depending on exactly when/where you lift your finger off the screen.
Why not move the physical home button to the back of the phone?
I am the same, long time Android user and when I borrow my wife's iPhone it is an exercise in frustration. Interactions are hidden, not intuitive, or just plain missing.
Now that Pixel cameras outclass iPhone cameras, and even Samsung is on par, there is really no reason to ever switch to the Apple ecosystem anymore IMO.
That’s thanks to third party devs, not Apple. If you look primarily at proper native UIKit/SwiftUI apps, there’s a lot more consistency, but there’s a lot of cross platform lowest common denominator garbage out there that pays zero mind to platform conventions.
You see this under macOS, too. A lot of Electron apps for instance replace the window manager’s standard titlebar with some custom thing that doesn’t implement chunks of the standard titlebar’s functionality. It’s frustrating.
Not really. In Android there will be a back button, on iPhone you're supposed to know to swipe in some direction. On Android there will be a button to show running apps, on iPhone you will need to swipe correctly from somewhere. When 3d touch existed I think there were like 11 different ways of pressing the home button depending on context.
Android by default is also swipe swipe swipe. You need to tweak the settings to get the older and saner 3-button setup back.
As far as the Back button, on iOS the norm is for it to be present somewhere in the UI of the app in any context where there's a "back" to go to. For cross-app switching, there's an OS-supplied Back button in the status bar on top, again, showing only when it's relevant (admittedly it's very tiny and easy to miss). Having two might sound complicated but tbh I rather prefer it that way because in Android it can sometimes be confusing as to what the single global Back button will do in any given case (i.e. whether it'll navigate within the current app, or switch you back to the previous app).
Like everything, this goes in cycles. When the iPhone launched, its UI was touted as revolutionary: simple, discoverable, not the convoluted mess that a typical Windows experience was. "lol, you have to click Start to power off your computer" and the like. You had the physical home button, or the three buttons on Android. They were discoverable; you handed an old phone to your grandma and she could just try things and figure it out.
Nowadays everything has to be clean and minimalist. No scrollbar, no buttons, just gestures. Hand a modern smartphone to someone who never used one in their life and see how they struggle to ever leave the first app they open. What are the odds they discover one of the gestures?
We have a user interface design rule that keyboard shortcuts and context menus must only be "shortcuts" for commands that are discoverable via clear buttons or menus. That probably makes our apps old-fashioned.
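In practice that rule can be enforced structurally - a sketch (invented names) where a command cannot even be registered without a menu location, and a shortcut is only ever an extra route to it:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Command:
    cmd_id: str
    title: str
    menu_path: Tuple[str, ...]          # mandatory: every command lives in a visible menu
    shortcut: Optional[str] = None      # optional: only ever an additional way in

class CommandRegistry:
    def __init__(self):
        self.commands = {}

    def register(self, cmd: Command) -> None:
        if not cmd.menu_path:
            raise ValueError(f"{cmd.cmd_id}: every command must be discoverable via a menu")
        self.commands[cmd.cmd_id] = cmd

registry = CommandRegistry()
registry.register(Command("export", "Export...", ("File", "Export..."), shortcut="Ctrl+E"))
# registry.register(Command("secret", "Secret", ()))   # would raise: shortcut-only commands not allowed
```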
I recall learning that the four corners of the screen are the most valuable screen real estate, because it's easy to move the mouse to those locations quickly without fine control. So it's user-hostile that for Windows 11 Microsoft moved the default "Start" menu location to the center. And I don't think they can ascribe it to being mobile-first. Maybe it's "touch-first", where mouse motion doesn't apply.
I think it's user-hostile that 'maximise' is next to 'close'. After moving my mouse so far, I need to start using fine control if I want to maximise it. I want more of the program and, if I fail, I get none of it - destructively!
I think the centered icons on W11 were done for one reason and one reason only: ripping off MacOS (probably because it's what the design team uses themselves and it felt familiar to them). There is no sensible UX reason to do it, and even in MacOS it's a detriment to its interface.
I don't think it's a macOS ripoff, they would've also ripped off more of the dock if that was the goal. For instance, you would've been able to do things like "pin the task bar to the side".
I think they wanted the start menu to be front and center. And honestly, that just sounds like a good idea, because it is where you go to do stuff that's not on your desktop already. But clicking a button in the bottom left and having the menu open in the middle would look weird, so centering the icons would make sense.
I think there are better ways to do it and I'm sure they've been tried, but they would probably confuse existing Windows users even more.
Corners and edges are rarely used that way. They should be. See "Fitts Law".[1]
My metaverse client normally presents a clean 3D view of the world. If you bring the cursor to the top or bottom of the screen, the menu bar and controls appear. They stay visible as long as the cursor is over some control, then, after a few seconds, they disappear.
This seems to be natural to users. I deliberately don't explain it, but everybody finds the controls, because they'll move the mouse and hit an edge.
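The logic behind it is small - roughly this, sketched without any particular GUI toolkit:

```python
import time

class EdgeRevealedControls:
    """Controls appear when the cursor reaches the top or bottom edge, stay while hovered,
    and fade again after a grace period."""
    EDGE_PX = 4
    GRACE_S = 3.0

    def __init__(self, screen_height: int):
        self.screen_height = screen_height
        self.visible = False
        self._last_relevant = 0.0

    def on_mouse_move(self, y: int, over_a_control: bool) -> None:
        at_edge = y <= self.EDGE_PX or y >= self.screen_height - self.EDGE_PX
        if at_edge or over_a_control:
            self.visible = True
            self._last_relevant = time.monotonic()

    def tick(self) -> None:
        # called every frame: hide the bar once the cursor has been elsewhere long enough
        if self.visible and time.monotonic() - self._last_relevant > self.GRACE_S:
            self.visible = False
```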
Not sure I agree with all of the OP's opinions. I prefer a clean, calm, uncluttered user interface over a noisy, busy, cluttered one. In the OP's example with maps, I'd rather see a full-screen map, instead of a map that is always partially covered by a bunch of big buttons, obfuscating my view. Please let me see the map. Yes, fill the entire screen with it.
Gradually, over decades, society has evolved a "shared language of touch-screen actions" for controlling touch-screen devices. Many actions are familiar to everyone here: tap to hide/show controls, press and hold to bring contextual menus, pinch with two fingers to zoom out, etc.
It's OK for UI designers to assume familiarity with this common language to keep UIs clean, calm, and uncluttered. I like it.
Your "shared language of touch screen interactions" is and will forever be unrealized as endless 'innovations' 'novelties' of creative developers and companies remain unfettered by any requirements for compliance voluntart or otherwise to UI 'standards'. Software developers are focused on myriad depths and constraints of toolkits and frameworks languages and libraries to get network, cloud, and actual functionality right, and immersed in those worlds, burden users with their own vast congitive prejudice that their application is the only one in the world, figuring 'users' have unlimited time to decipher undocumented UIs effectively gamified and unique across hundreds of spplications by not only gestures but by required and precise cadences to correctly effect those gestures, cadences which are overloaded and confounded by network and device delays and zero haptic or audio or visual feedback on what may have been 'commanded' or what is yet to be acomplished and displayed onscreen.
I might be tired, and this isn’t meant as anything other than constructive criticism, but good grief I think you need to use full stops a little more. I had to re-read that 3-4 times to make out what you meant.
Only tangentially related, and a seemingly lost old-man battle: stop hiding my scrollbar.
Interesting article. There were some points I didn't entirely agree with. There's a cost and a practical limitation to some things (like a physical knob in a car for zooming in and out on a map - although that was probably just an example of intuitive use).
I just recently switched a toggle on a newly installed app that did the opposite of what it was labelled - I thought the label represented the current state, but it represented the state it would switch to if toggled. It became obvious once changed, but that seems the least helpful execution.
I hate toggle switches IRL too. They are just as ambiguous there. Checkboxes and pushed-in buttons are far clearer, but have unfortunately been sacrificed at the altar of "modernity".
More seriously, my understanding is that the octopus retina does not have color receptors, just aggregate light, I.e. brightness.
But the octopus practically has a sub-brain behind each respective eye, and the eye brains can extract color from the slight lensing differences across frequencies.
They are amazing magical creatures.
Take that approach, add some sort of ocular lathe, and we can fix this.
There was such a confusing toggle at the ticket machines for the train here in Austria many years ago. It was for immediately validating your ticket, which is a potentially costly mistake.
About the scroll bars: Also stop making them so thin that I have to have FPS skills to hit them! Looking at you, Firefox! (And possibly what standard CSS allows?) Yeah, I can scroll, but horizontally the scrollbar would be more convenient than pressing shift with my other hand.
Firefox nonobviousity:
- Type about:config in your address bar.
- Search for widget.non-native-theme.scrollbar.size.override and edit it to whatever number you want.
- You can also edit widget.non-native-theme.scrollbar.style to change the shape of it; set it to 4 for a nice chonk rectangle.
- Finally, turn on "Always show scrollbars" in the normal settings window (about:settings) if you want them always on.
I’ve never known until this moment that shift makes you scroll horizontally, because I’ve always either used a mouse with horizontal scrolling built into the scroll wheel, or a touchpad.
It's been a standard Windows feature for quite some time! I don't think people need to scroll horizontally as much now that most screens are widescreen, but this feature goes back to the dialup era and very few people seem to know about it.
> I thought the label represented the current state, but it represented the state it would switch to if toggled. It became obvious once changed, but that seems the least helpful execution.
Such ambiguous switches are often associated with "opt out" misfeatures.
Right! If you want it to denote an action, you need to include the verb: "TURN ON" would be entirely clear. It's even clear if you sometimes DO want to show state and not a button: "IS ON" is also perfectly clear. There are only a few that might be confused when the verb is shown, like "INCREASE", although I would have to work a little to imagine the UI where it's not clear whether the button is showing the verb or the noun.
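A tiny sketch of the rule: put the verb on buttons and the state on indicators, and the ambiguity disappears.

```python
def toggle_text(is_on: bool, is_a_button: bool) -> str:
    # A bare "ON" can be read as either a state or a command; a verb or "IS" cannot.
    if is_a_button:
        return "TURN OFF" if is_on else "TURN ON"
    return "IS ON" if is_on else "IS OFF"

print(toggle_text(is_on=False, is_a_button=True))   # "TURN ON" - clearly an action
print(toggle_text(is_on=True,  is_a_button=False))  # "IS ON"   - clearly a status
```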
you can get the same issue with icons too. The one that gives me anxiety is the microphone with a line through on a button. I _am_ muted or I should click to _mute_. If my kids are arguing in the background and it's an important call it can feel like a high stakes thing to get wrong and often times it only becomes clear what state I'm in by toggling a few times. Does the icon change to a mic without a line when I click or does the previously shown mic with a line now get coloured in, what does _that_ mean?
And even WORSE are the services that use different variations on those depending on the platform you're using! Yes, I am looking directly at you, Amazon Chime.
One of my big beefs with modern UI is two-state controls where it's impossible to determine what the current state actually is. Like a button that says "Music Off" where it's unclear if that means the music is CURRENTLY off, or if clicking the button turns it off.
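One way to keep this unambiguous (just a sketch, not any particular toolkit's API) is to keep a status label and an action label as two separate pieces of text, so neither has to do double duty:

```typescript
// Illustrative only: render a two-state control so state and action can't be confused.
type Power = "on" | "off";

function renderMusicControl(state: Power): { status: string; action: string } {
  return {
    // A non-interactive label that always describes the CURRENT state...
    status: state === "on" ? "Music is ON" : "Music is OFF",
    // ...and a button label that always names the ACTION it will perform.
    action: state === "on" ? "Turn music off" : "Turn music on",
  };
}

// When music is playing, the UI shows "Music is ON" beside a button
// labeled "Turn music off" - no guessing which is which.
console.log(renderMusicControl("on"));
```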
I recently used the washroom at a Starbucks. The one where you have to enter a code to get in. Once I was inside, there were no knobs or any mechanical way to lock the door - just one circular button with a lock icon on it. I pressed it, and the button lit up green. Pressed it again, and it lit up red. No indication of which light colour meant what. Does red mean it's unlocked? Or does it mean it is locked, since red usually indicates no entry?
Reading through the responses to your comment, I came to the conclusion that the topic is on point. There are many complaints about missing things (please add ...), and people responding with a solution because it's already there - just hidden.
I can't recall the app, but it had a similar toggle with a label; when you flipped the toggle, the label lit up green, indicating it was turned on. But the default state was off, and how would you know?
The green/red is at least a half-decent indicator (questionable for colour-blind folks though), but the current trend of very slightly different shades of grey is the pinnacle of utterly fucking stupid design; perfect for a non-interactive set piece in a gallery, just dumb for use by human beings.
In the 90's I had this vision that the menu and the scrollbar should be physically separated from the screen.
If you have (next to your monitor, on the left side) a narrow physical display with menu entries in it, you get four things for "free": the user will expect there to be menu entries, the developer will understand the expectation to have menu entries, there is limited room to go nuts with the layout or shape of the menu, and, last but most funny, you won't feel that part of the screen has been taken away from you.
The physical scrollbar should be a transparent tube with a ball (or ideally a bubble) floating in it.
Usage could be moving the pointer off the screen. The scrollbar LED goes on and you can hold the button to move the page. When using the menu, the pointer [also] vanishes and the menu entry at that height is highlighted (much better usability). Moving the mouse up or down highlights the entry above or below; if there are a lot of entries it may also scroll. It may be a touch screen, but the most usable would be a vertical row of 5 extra-wide (3 fingers) keyboard buttons on the left, with the top 4 corresponding to the 1st, 2nd, 3rd, 4th menu entry and the 5th one for page down (scrolling down 4 entries). Ideally these get some kind of texturing so that one can feel which button one is touching.
This way knowledge in the world can smoothly migrate to knowledge in the head until eventually you can smash out combinations of M keys in fractions of a second without looking at the screen or the keyboard. The menu displayed is always in focus; you don't have to examine the viewport to use it. Having a row of horizontal F keys is a design fiasco. Instinctively bashing the full row of those might come naturally after learning to type, then learning to type numbers, then symbols, and only if you frequently use applications that have useful F-key functionality. I only really know F5 and F11, but I can't smash them blindly as I pretty much never use them. I just tried F1 in Firefox and no help documentation showed up... I think that was what it was supposed to do? Not even sure anymore.
Having the menu bar (File, Edit, etc.) at the top of the viewport is also ugly. For example, smashing the second then the top M key could easily become second nature. CTRL+Z is fine of course but it ain't knowledge in the world. Does anyone actually use ALT+E+U for undo? Try it on the CTRL+F input area. It's just funny. Type something in the address bar then compare ALT+E+U with using the Edit menu.
A separate display would take many of these "design" privileges away from the clowns.
(Note: I think it is ALT+E+U, as the Dutch layout is forced on me by Windows. Edit is called Bewerken and the shortcut is ALT+W!?! ALT+E does nothing.)
It was just a vision from long ago. But okay, for sake of argument. It doesn't need to be ultra hd in a billion colors, it can go on the bezel and be screen height so that you don't have to aim to hit it. No need for it to glow intensely, perhaps not at all, perhaps simple single color LCD would do the trick.
I don't agree that scrollbars work fine. They used to work fine; now they are too tiny to click on.
There also was/is the issue where the viewport width changes when the page grows beyond the screen height: the scrollbar appears and word wrap makes the content shift down. Is the solution a scrollbar so tiny it is hard to use, or should one always be displayed? The one outside the screen is always there :)
I like things that do only one thing, do it well and in a simple way.
You could also go the other direction and put everything on the screen. Huawei just made a horrifying laptop where the keyboard is also a screen.
> In the 90's I had this vision that the menu and the scrollbar should be physically separated from the screen.
Buttons alongside, above, or below screens appear now and then. Some early terminals had them. Now that seems to be confined to aircraft cockpits and gasoline dispensers.
Some ATMs have unmarked physical buttons next to the screen, and the text displayed on the screen next to those buttons defines what each key does.
TV remotes have A/B/C/D (red/blue/green/yellow) physical buttons whose function is dynamically defined by your context or which setting / function / menu you are currently inside.
I guess this goes back to video game controllers that have A/B/X/Y buttons that can have different functions in different contexts.
Touch screens can't match physical buttons. This one is extra funny: it takes keys away and gives you unknown things in return. Finally one can once again look down at the keys wondering which is which, after moving your hands away.
If I was on the design team they would have fired me for screaming at everyone. Screaming is good UI tho.
> If I was on the design team they would have fired me for screaming at everyone.
Oh man. I really do start screaming sometimes.
At user interfaces, too often. At unbelievably bad product choices of all kinds.
The simpler & dumber the issue the louder I get.
Someone creates a quality flat tine garden rake with about 40 metal tines, and charges accordingly. The person who manages stickers, because everything needs stickers, creates huge stickers they glue across all the tines. You try to peel it off and now you have over two dozen tines with long streaks of shredded paper glued hard to them.
Screaming is an appropriate place to put the high spin WTF-a-tons that might otherwise feed the universe’s dark energy.
And that, dear reader, is my theory of dark energy.
This is easily one of the most frustrating parts of the user experience on Discord. So many buttons are hidden until you mouse over them, which absolutely drives me UP A WALL. I really hope this trend discontinues.
Agree utterly. It's a real shame, and severely affects accessibility for disabled and elderly people. Not only UI discoverability but also the types of swiping or holding movements required on mobile devices. The initial mobile interfaces felt way more accessible, so I don't think it's an inherent consequence of limited screen real estate. This has been a trend-driven flattening of UI, with aesthetics over functionality. The Palm and Compaq handhelds felt sublime to use, and the iPod and early mp3 players were fine, as was the originally charming iPhone skeuomorphic iconography. It's all been downhill since then.
I don’t know that I agree. Take reading HN comments on my phone. There’s dozens of UI controls that are hidden behind a few buttons at the top or bottom of the screen. Getting that stuff out of the way makes the page itself take up almost all of my phone screen - and that makes the webpage much more beautiful and enjoyable. My phone screen is only so large. The palm pilot era equivalent browser would fill half the screen with buttons and controls and scroll bars, leaving much less room for the website content.
In my opinion, hidden controls aren’t bad per se. But they are something you have to learn to use. That makes them generally worse for beginners and (hopefully) better for experts. It’s a trade off and sometimes getting users to learn your UI is the right decision. I’m glad my code editor puts so much power at my fingertips. I’m glad git is so powerful. I don’t want a simplified version of git if it means giving up some of its power.
That said, I think we have gone way too far toward custom per-app controls. If you’re going to force users to learn your UI conventions, those learnings should apply to other applications on the same platform. Old platforms like the palm were amazing for this - custom controls were incredibly rare. When you learned to use a palm pilot, you could use all the apps on it.
The interaction that messes me up all the time is the side button and payment related stuff
One press turns on/off the display
Two taps enables Apple Pay
Quite often my timing is not perfect or one press isn’t hard enough so I shut off the display
Then, paying with Apple Pay is a double press, but paying for transit is no press. But often I’m absent-minded, and as I’m walking through the transit gate my brain thinks “must pay”, “pay = double press”, so I subconsciously double-press and the gate screams, since it’s not in transit mode now, it’s in Apple Pay mode.
There is a sweet spot in between those two extremes if you stop trying to build a compromise of a UI for both, touch screens and desktops. (sadly we still do not have the reality demoed by apple and google 10 years ago, where touch screens have hover detection, maybe XR gaze detection will bring this)
You can pack extreme amounts of features without cluttering the interface or sacrificing discoverability by only showing most features on hovering certain areas. This is risk- and effort-free from the user's perspective, as users are much more explorative when no clicking is involved and it is clear an interaction will not trigger features by accident in the discovery process.
I'm especially passionate about this because having ADHD makes one sensitive to irrelevant stimuli in the periphery, but as a power user of most software, the dumbification of software happening since mobile apps drives me insane. I want software where a feature used by the top 5 to 10% of power users once a month is not ripped out, if that once-a-month use provides high value for that group.
At the risk of sounding like a grandpa, there used to be a pretty effective "division of labor" for this in UIs:
(1) The "fast" path: Provide toolbars, keyboard shortcuts and context menus for quick access to the most important features. This path is for users who already have the "knowledge in the head" and just want to get there quickly, so speed takes priority over discoverability.
(2) The "main" path: Provide an exhaustive list of all features in the "title bar"/"top of the screen" menus and the settings dialogues. This path is mainly for users who don't have the "knowledge in the head" and need a consistent, predictable way to discover the application's features. But it's also a general-purpose way to provide "knowledge in the world" for anyone who needs it, which may also include power users.
Therefore, for this path, discoverability and consistency is more important than speed.
Crucially, the "main" features are a superset of the "quick" features. This means, every "quick-access" feature actually has at least two different ways to activate it, either through 1 or through 2.
This sounds redundant, but makes perfect sense if it allows people to first use the feature through 2 and then later switch to 1 when they are more confident.
My impression is that increasingly, UIs drop 2 and only provide 1, changing the "fast" into the "main" path. Then suddenly "discoverability" becomes a factor of its own that needs to be implemented separately for each feature - and in the eyes of designers seems to become an unliked todo-list bullet point like "accessibility".
Usually then, it's implemented as an afterthought: either through random one-time "new feature" popups (if one popped up at an inappropriate time and you just closed it to continue with what you wanted to do, or if you want to reopen it later - well, sucks to be you), or through "everything" menus that just contain an unordered dump of all features, but are themselves hidden behind some obscure shortcut or invisible button.
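For what it's worth, a minimal sketch of how that superset structure can be kept honest in code: a single command registry that the menus (path 2) are generated from in full, and the shortcuts/toolbars (path 1) are derived from as a subset. All names here are illustrative:

```typescript
// Sketch only, not a real framework API: one registry, two access paths.
interface Command {
  id: string;
  menuPath: string[];   // path (2): every command is reachable via the menus
  shortcut?: string;    // path (1): optional fast access for experts
  run: () => void;
}

const commands: Command[] = [
  { id: "save",  menuPath: ["File", "Save"], shortcut: "Ctrl+S", run: () => { /* ... */ } },
  { id: "find",  menuPath: ["Edit", "Find"], shortcut: "Ctrl+F", run: () => { /* ... */ } },
  { id: "about", menuPath: ["Help", "About"],                    run: () => { /* ... */ } },
];

// The menu tree is built from ALL commands, so nothing is shortcut-only.
const menuEntries = commands.map(c => c.menuPath.join(" > "));

// The fast path is derived from the same list, so it can never diverge from the menus.
const shortcuts = new Map<string, () => void>();
for (const c of commands) {
  if (c.shortcut) shortcuts.set(c.shortcut, c.run);
}

console.log(menuEntries, [...shortcuts.keys()]);
```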
> if you stop trying to build a compromise of a UI for both, touch screens and desktops
Agree, many of the problems have to do with this, yet it's barely mentioned by armchair designers. Temporarily hidden and narrow scrollbars? Makes perfect sense for scrolling on a touch screen (since you don't touch them directly), but very annoying on desktop.
Back in the pre-touch days we’d have a lot of hover menus. But with a phone today? Nobody likes the hamburger/three dots, but there isn’t a better alternative without losing context. And nobody uses hover anymore for functional purposes.
But, I also don’t think building entirely separate apps and especially web sites for different form factors is desirable. We probably should be better at responsive design, and develop better tooling and guidelines.
My car’s audio system seems to go out of its way to bury sound settings (bass, treble, balance, etc.) in as many nested menus as possible. And when you do finally find the settings, they are greyed out. I had to actually watch a youtube video to figure out that they are configured at the individual source level. Super confusing and unintuitive, and especially egregious considering that this is in a vehicle you are DRIVING - confusion, distraction, and frustration are the last things you want drivers to experience.
Respectfully disagree. My point is that it should be easy and intuitive to do things like this while driving, just like anything else such as adjusting HVAC controls, operating turn signals, shifting gears, etc. Most major controls and operations should be tactile and easily understandable even if you have never driven that particular car before. I believe that drivers feel more distracted by modern vehicles’ UI/UX than ever before, and I rented a BMW last year that perfectly exemplifies this. It was a nightmare of unintuitive screens and menus just to do basic things - actively driving or not. It really turned me off to BMWs.
I used to drive a Camry where on the factory radio, bass and treble had individual knobs and you could adjust them without taking your eyes off the road. Oh, those were the days.
I fully agree with you on this. If the car is moving you shouldn't really need anything more than previous/next/volume, and those should be on the steering wheel.
You want to mess with your equalizer, do it when stopped. IDGAF if it's dozens of physical buttons and knobs and sliders or hidden in menus; you're supposed to be driving not mastering an audio file.
As a user, you have no way to see if a photo has been "scanned" with smart features and what it has detected (e.g. found person X, found dog, blue sky, beach, etc).
Trips features, has this algorithm finished scanning your library? You have no idea, it's just hidden.
Face detection: has it completely scanned your library? You don't know. Photos that don't seem to have faces detected: were they scanned and it failed, or did it not scan them yet?
The list is nearly endless - but in line with the rest of the direction of MacOS, getting worse.
I think the new Apple design tries to do this too much and it will cause some issues. They're trying to make many things modal, split and merge on scroll, show and hide contextually. The intentions might be good, an intelligent interface that adapts sounds good in theory, but who knows really what the users want to do?
I remember Nokia E-series phones with QWERTY keyboards had a little torch printed on the tiny spacebar. Everything else now feels unintuitive compared to that.
Just a minor quibble. Terminal-based UIs weren't completely memorized. Many of us had a reference card taped to the wall, or a list of commonly used commands. It was an acceptable way to extend the limited information density of the 80x25 text display, and a really good manual was as discoverable as a GUI.
Not too convenient to carry along with a pocket computer, though.
Something which drives me mad is how modern operating systems (both desktop and mobile) keep hiding file system paths. There used to be a setting on OSX which let you show the address bar in Finder (though it wasn't default) but nowadays it seems to be impossible (unless you get some third-party extension) and I have to resort to using the terminal. It's bonkers.
It makes it impossible to locate files later when I need to move or transfer them.
I have this issue when links are shared directly to a file on SharePoint.
It's often more useful to share the directory it's in rather than the file itself. MS Office does have a way to get that information, but you have to look for it.
The term “popover” has been gaining popularity in the last decade or so, as a superset of dropdowns. HTML adopting the term a couple of years ago has helped with this.
The only thing that seems wrong about it to me is that it's above the point where the user clicked rather than underneath; and that's only because that point is near the bottom of the screen.
The article suggests a “simple, well-labeled rotary control ... would accomplish the same function” as a power button and “prevent the user from accidentally activating the control in a way that is no longer hidden”. But a rotary control itself has a serious problem, in that it can mislead the user as to the state, on or off. If the power has failed and the machine does not restart when it comes back, the rotary control will remain in the ON state when the machine is off. From memory, Donald Norman called this kind of thing “false affordance” and gave the example of a door that needed to be pulled having a push-plate on it.
So my iMac, among many other devices like the light I wear on my head camping, has a button which you long-press to turn on. It is a very common pattern which most people will have come across, and it’s reasonable to expect people to learn it. The buttons are even labelled with an ISO standard symbol which you are expected to know.
> If the power has failed and the machine does not restart when it comes back, the rotary control will remain in the ON state when the machine is off.
A better example may be a solenoid button, used on industrial machinery which should remain off after a power failure, which stays held in when pushed, but pops out when the power is cut. They are not common outside of such machinery, because they're extremely expensive. In the first half of the 20th century, they also saw some use in elevators: https://news.ycombinator.com/item?id=37385826
I have never looked at a fan that isn't running and been confused by the switch being set to “on”. The affordance is that it immediately tells me that the switch is on, so the problem is somewhere else. Compared to the typical phone's “hold for 3 seconds to turn on, hold for 10 seconds to enter some debug mode”, this is a breath of fresh air when anything unusual is going on with the device.
I live in a country where the socket on the wall the fan is plugged into also has a switch, which could be on or off. So to make the fan go around, both switches must be on; the user needs to know about and have a mental model of serial circuits.
If it’s just a button, the user just has to know two things: turn the switch on at the wall socket when plugging in, which becomes habit since childhood; and press and hold the button on the fan to make it go, which I suspect most children in 2025 can manage. These two things don’t interact and can be known and learned separately.
As you said, the knob’s position tells you about the switch. But it’s the fan the user is interested in, not the switch.
(BTW, if the fan has a motion sensor you can’t tell it’s off by the fact the blades aren’t turning. There’s probably a telltale LED.)
Notion is horrendous for this. Hiding every control behind an invisible hover target. No, I don't want my company documentation to have a minimalist aesthetic. I just want to use it.
The article mentions the late Mark Weiser's work on Ubicomp at Xerox PARC. Before he went to run PARC, we worked together at the University of Maryland, where he supported and collaborated with my work on pie menus.
Mark Weiser, Ben Shneiderman, Jack Callahan, and I published a paper at ACM CHI'88 about pie menus, which seamlessly support both relaxed "self revealing" browsing for novices, and accelerated gestural "mouse ahead" for experts: smoothly, seamlessly, and unconsciously training users to advance from novice to expert via "rehearsal".
Pie menus are much better than gesture recognition for several synergistic reasons: Most importantly, they are self revealing. Also, they support visual feedback, browsing, error recovery, and reselect. And all possible gestures have a valid and easily predictable and understandable meaning, while most gestures are syntax errors.
Plus the distance can also be used as an additional parameter, like a "pull out" font:direction / size:distance selection pie menu, with live interactive feedback both in the menu center and in the text document itself, which is great during "mouse ahead" before the menu has even been shown.
The gesture that novices learn to perform by being prompted by the pop-up pie is the exact action experts use more quickly to "mouse ahead" through even nested menus without looking at the screen or needing to pop up the pie menu. (By the principle of "Lead, follow, or get out of the way!")
Linear menus with keyboard accelerators do not have this "rehearsal" property, because pressing multiple keys down at once is a totally different (and more difficult to remember and perform) action than pointing and clicking at tiny little menu labels on the screen, each one further from the cursor and more difficult to hit than the next.
Our controlled experiment compared pie menus to linear menus, and proved that pie menus were 15% faster, and had a significantly lower error rate.
Fitts' Law unsurprisingly predicted that result: it essentially says the bigger a target is and the closer it is to the cursor, the faster and more reliably you can hit it. Pie menus optimize both the distance (all items directly adjacent, in different directions) and the area (all items are huge wedge-shaped target areas that get wider as you move away from the center, so you get more precise "leverage" as you move more, trading off distance for angular precision).
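For reference, one common (Shannon) formulation of Fitts' law, with MT the movement time, D the distance to the target, W the target's width along the axis of motion, and a, b empirically fitted constants:

```latex
MT = a + b \log_2\left(1 + \frac{D}{W}\right)
```

Pie menus shrink D (every item starts adjacent to the cursor) and grow W (the wedges widen with distance), so a speed and error-rate advantage is exactly what the model predicts.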
The Design and Implementation of Pie Menus: They’re Fast, Easy, and Self-Revealing.
Originally published in Dr. Dobb’s Journal, Dec. 1991, cover story, user interface issue:
>Pie menus are faster and more reliable than linear menus, because pointing at a slice requires very little cursor motion, and the large area and wedge shape make them easy targets.
>For the novice, pie menus are easy because they are a self-revealing gestural interface: They show what you can do and direct you how to do it. By clicking and popping up a pie menu, looking at the labels, moving the cursor in the desired direction, then clicking to make a selection, you learn the menu and practice the gesture to “mark ahead” (“mouse ahead” in the case of a mouse, “wave ahead” in the case of a dataglove). With a little practice, it becomes quite easy to mark ahead even through nested pie menus.
>For the expert, they’re efficient because — without even looking — you can move in any direction, and mark ahead so fast that the menu doesn’t even pop up. Only when used more slowly like a traditional menu, does a pie menu pop up on the screen, to reveal the available selections.
>Most importantly, novices soon become experts, because every time you select from a pie menu, you practice the motion to mark ahead, so you naturally learn to do it by feel! As Jaron Lanier of VPL Research has remarked, “The mind may forget, but the body remembers.” Pie menus take advantage of the body’s ability to remember muscle motion and direction, even when the mind has forgotten the corresponding symbolic labels.
>By moving further from the pie menu center, a more accurate selection is assured. This feature facilitates mark ahead. Our experience has been that the expert pie menu user can easily mark ahead on an eight-item menu. Linear menus don’t have this property, so it is difficult to mark ahead more than two items.
>This property is especially important in mobile computing applications and other situations where the input data stream is noisy because of factors such as hand jitter, pen skipping, mouse slipping, or vehicular motion (not to mention tectonic activity).
>There are particular applications, such as entering compass directions, time, angular degrees, and spatially related commands, which work particularly well with pie menus. However, as we’ll see further on, pies win over linear menus even for ordinary tasks.
>I think it’s important to trigger pie menus on a mouse click (and control them by the instantaneous direction between clicks, but NOT the path taken, in order to allow re-selection and browsing), and to center them on the exact position of the mouse click. The user should have a crisp consistent mental model of how pie menus work (which is NOT the case for gesture recognition). Pie menus should completely cover all possible “gesture space” with well defined behavior (by basing the selection on the angle between clicks, and not the path taken). In contrast, gesture recognition does NOT cover all gesture space (because most gestures are syntax errors, and gestures should be far apart and distinct in gesture space to prevent errors), and they do not allow in-flight re-selection, and they are not “self revealing” like pie menus.
>Pie menus are more predictable, reliable, forgiving, simpler and easier to learn than gesture recognition, because it’s impossible to make a syntax error, always possible to recover from a mistaken direction before releasing the button, they “self reveal” their directions by popping up a window with labels, and they “train” you to mouse ahead by “rehearsal”.
>[...] Swiping gestures are essentially like invisible pie menus, but actual pie menus have the advantage of being “Self Revealing” [5] because they have a way to prompt and show you what the possible gestures are, and give you feedback as you make the selection.
>They also provide the ability of “Reselection” [6], which means that as you’re making a gesture, you can change it in-flight, and browse around to any of the items, in case you need to correct a mistake or change your mind, or just want to preview the effect or see the description of each item as you browse around the menu.
>Compared to typical gesture recognition systems, like Palm’s Graffiti for example, you can think of the gesture space of all possible gestures between touching the screen, moving around through any possible path, then releasing: most gestures are invalid syntax errors, and it only recognizes well-formed gestures.
>There is no way to correct or abort a gesture once you start making it (other than scribbling, but that might be recognized as another undesired gesture!). Ideally each gesture should be as far away as possible from all other gestures in gesture space, to minimize the possibility of errors, but in practice they tend to be clumped (so “2” and “Z” are easily confused, while many other possible gestures are unused and wasted).
>But with pie menus, only the direction between the touch and the release matter, not the path. All gestures are valid and distinct: there are no possible syntax errors, so none of gesture space is wasted. There’s a simple intuitive mapping of direction to selection that the user can understand (unlike the mysterious fuzzy black box of a handwriting recognizer), that gives you the ability to refine your selection by moving out further (to get more leverage), return to the center to cancel, move around to correct and change the selection.
>Pie menus also support “Rehearsal” [7] — the way a novice uses them is actually practice for the way an expert uses them, so they have a smooth learning curve. Contrast this with keyboard accelerators for linear menus: you pull down a linear menu with the mouse to learn the keyboard accelerators, but using the keyboard accelerators is a totally different action, so it’s not rehearsal.
>Pie menu users tend to learn them in three stages: 1) novice pops up an unfamiliar menu, looks at all the items, moves in the direction of the desired item, and selects it. 2) intermediate remembers the direction of the item they want, pop up the menu and moves in that direction without hesitating (mousing ahead but not selecting), looks at the screen to make sure the desired item is selected, then clicks to select the item. 3) expert knows which direction the item they want is, and has confidence that they can reliably select it, so they just flick in the appropriate direction without even looking at the screen.
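To make the "direction, not path" idea concrete, here is a tiny sketch of my own (not code from the paper or the Dr. Dobb's article) of how a release point maps to a slice; the dead-zone radius and the "slice 0 at 12 o'clock" convention are arbitrary choices for the example:

```typescript
// Only the angle between press and release matters, never the path taken,
// so every gesture maps to exactly one slice (or to "cancel" near the center).
function pieSlice(
  center: { x: number; y: number },
  release: { x: number; y: number },
  itemCount: number
): number | null {
  const dx = release.x - center.x;
  const dy = release.y - center.y;
  if (Math.hypot(dx, dy) < 8) return null;     // dead zone: cancel / keep browsing
  // Angle measured clockwise from "up" (screen coordinates), so slice 0 is at 12 o'clock.
  const angle = Math.atan2(dx, -dy);           // range -PI..PI
  const normalized = (angle + 2 * Math.PI) % (2 * Math.PI);
  const sliceWidth = (2 * Math.PI) / itemCount;
  // Offset by half a slice so each item is centered on its direction.
  return Math.floor(((normalized + sliceWidth / 2) % (2 * Math.PI)) / sliceWidth);
}

// Releasing due east of the press point on an 8-item menu selects slice 2,
// regardless of the path the pointer took in between.
console.log(pieSlice({ x: 0, y: 0 }, { x: 100, y: 0 }, 8)); // -> 2
```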
A lot of the things being pointed out seem like non-issues. It seems to me that this doesn't really explore the fact that knowledge-in-the-head UIs are actually a lot more straightforward and easy to use once you have the knowledge in your head. Most attempts to circumvent that bloat UIs. Also, whatever you give people, if it's a repetitive-use UI they tend to learn it and it turns into knowledge in the head, even if it's a knowledge-in-the-world type of UI; then you change it and people get confused.
The rotational On-Off switch for a computer is cool and provides excellent feedback, but like many stateful electromechanical input elements it has the problem that it might get out of sync with the system it controls. E.g. what if the PC is shut down: it is practically off (you can't do anything useful with it) but technically on (stuck in a weird shutdown state).
I am a fan of the conceptual clarity, but having to wait for my PC to shut down only to have to flip a switch myself is not good UX. The absolute ideal would be the switch mechanically turning to off once the machine is off, and such switches exist, but they are expensive and require extra electronics to drive the electromagnetic part. A really good example of this UX principle is the motor faders in digital audio mixers: you can move them with your hand, but if you change to a different channel layout the mixer can move the faders for you. The downside of those is mainly cost.
The cheap 80/20 solution for the PC is a momentary push-button and a green/red LED to display the current state. A 5-second hold for power-off, because everything else has the danger of accidentally switching off, but this isn't obvious to the uninitiated.
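A rough sketch of that 80/20 behaviour (purely illustrative, not any firmware's actual logic): any press powers the machine on, a short press while on is sleep/wake, and only a deliberate ~5 s hold powers it off, with the LED reflecting the current state:

```typescript
const HOLD_MS = 5000;             // assumed long-press threshold

type PowerState = "on" | "off";
let state: PowerState = "off";
let pressedAt: number | null = null;

function onButtonDown(now: number): void {
  pressedAt = now;
}

function onButtonUp(now: number): void {
  if (pressedAt === null) return;
  const heldFor = now - pressedAt;
  pressedAt = null;
  if (state === "off") {
    state = "on";                 // any press powers on
  } else if (heldFor >= HOLD_MS) {
    state = "off";                // only a deliberate long hold powers off
  } else {
    console.log("short press while on: sleep/wake, not power-off");
  }
  console.log(`LED: ${state === "on" ? "green" : "red"}`);
}

onButtonDown(0);
onButtonUp(100);                  // short press -> powers on, LED green
```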
Mobile is a deliberately second-class platform, in many cases to prevent closing an obtrusive window to serve an advertisement, or to provoke an inadvertent click on an ad. Many ads with malware simply don't present if the platform is not mobile, by design, from the creator.
Steve Jobs was mocking Microsoft for this kind of UI two decades ago when shipping the first iPhone.
None of this is new. But this kind of dysfunctional product is what a dysfunctional organization ships, despite knowledge.
Why? Because leadership wants features. Leadership also wants a clean, marketable product. Leadership also wants both of those done on a dime, quickly and doesn't care about the details. The only way to satisfy all constraints at the same time is to implement features and hide them so they don't clutter the UI.
This is one thing that pisses me off about modern computing. This is shit that we mostly already figured out, but people with no context decided that visual design was the most important part of UI design, with no forethought to usage or discoverability.
The golden age of computing is sadly long, long passed.
> If you want to lock the door, then the hidden control problem becomes evident... to lock the door, I must know that the hidden control to lock is the pound key. To make matters worse, it's not a simple press of the pound key. It's a press of the pound key for a full five seconds in order to activate the lock sequence. The combination of the long temporal window and the hidden control makes locking the door nearly impossible, unless you are well acquainted with the system and its operation.
Isn't that kind of the point? You don't want people accidentally locking the door, but if it's your door, it's easy enough to remember how to do it.
My gosh I was unaware there were so many old men shaking their fists at clouds here. The level of nitpicking here is ridiculous, none of this is hard, no one else seems to have any issues with most of this stuff, it seems to me like people are bored and want to be angry at something.
> no one else seems to have any issues with most of this stuff
In my experience, 9 times out of 10 what this actually means is that they just don't know it's an issue! The type of person who would be confused by, say, the iOS control center, is not necessarily the type of person who would easily identify and raise the issue of it being difficult to do something on their device. They would just be mildly annoyed that they can't figure it out, or that the device "can't do it", and move on to find some other way. You may not realize it if you don't interact with those types of people but they fundamentally do not think like you or I do and what may be an obvious problem-solving process to you (e.g. identify a problem, figure out what tools are at your disposal and whether each could be helpful, check for functionality that could do what you are wanting, ask for help from others if you can't figure it out on your own, etc.) may actually not always be so obvious.
That's why the main way I find out people don't know how to do something is from them seeing me do it with my device and going "what!! I didn't know it could do that!!"
This is the mistake allowing this phenomenon to continue. It is not a "Boomer" or old-person thing. It is a thing for people who enjoy other things in life than electronics. We've already wasted years of our lives learning how to use a bunch of weak features and apps that weren't worth the time. Now those are all gone and we have to learn more? Forget it. Your app is not worth it.
I think there are a couple of conflated aspects here - some of them are fine, and likely a consequence of computing devices being more ingrained in everyday life, and some of them are very hostile, and clearly intended to subvert the interests of the user.
As an example:
I think hiding controls in favor of "knowledge in the head", as the author phrases it, is absolutely fine when the user is presumed to be aware of features, should be able to understand they exist and know how to use them, and can reasonably learn them. Especially fine if those controls aren't used all that often, and are behind a keyboard shortcut or other common and efficient route to reach them.
On the other hand - I think there's also been a drive to visibly reduce how much control and understanding basic users might have about how a machine works. Examples of this are things like
- Hiding the scheme/path in browser url bars
- Hiding the file path in file explorers and other relevant contexts
- Hiding desired options behind hoops (ex - installing windows without signing into an account, or disabling personalized ads in chrome)
Those latter options feel hostile. I need to know the file path to understand where the file is located. I can't simply memorize it - even if I see the same base filename, is it in "c:/users/me/onedrive/[file]" or "c:/users/me/backed_up_spot/[file]"? No way to know without seeing the damn path, and I can have multiple copies floating around. That's intentional (it drives users to Microsoft's paid tooling), and hostile.
Basically - knowledge that can be learned and memorized can benefit from workflows that give you the "blank canvas" that the author seems to hate. Command lines are a VERY powerful tool to use a computer, and the text interface is a big part of that. R is (despite my personal distaste for it as a language) a very powerful tool. Much more powerful and flexible than SPSS.
But there are also places where companies are subverting user goals to drive revenue, and that can rightfully fuck right off.
One of my biggest complaints with modern computing is that "The internet" has placed a lot of software into a gray zone where it's not clear if it's respecting my decisions/needs/wants or the publisher's decisions/needs/wants.
It used to be that the publisher only mattered until the moment of sale. Then it was me and the software vs the world - ride or die. Now far too much software is like Judas: happy to sell me out if there's a little extra silver in it.
This is why I really despise “Material Design” and the whole Google aesthetic.
Look at Google Meet for example. How many times am I trying to remember what the Share Screen icon looks like? Apple generally does this stuff far better: text labels, for example. Also, clicking some “+” icon to reveal more options — how does a “normal” person know what’s buried inside all of those click-to-reveal options?
Diversity in tech has always been a concern — but one concern I have is that diversity has always meant race, gender, or sexual orientation stuff — but a 28 year old Hispanic LGBT person doesn’t react to a UI much differently than a 28 year old Black hetero person. But a 68 year old Hispanic woman with English as a second language absolutely has potentially different UI understandings than an 18 year old white woman from Palo Alto.
Real diversity (especially age and tech experience levels) should be embraced by the tech companies — that would have a strong impact on usability. Computers are everywhere and we shouldn’t be designing UI around “tech people” understanding and instead strive for more universal accessibility — especially for products we expect “everyone” to potentially use. (Some dev ops tool obviously would have more latitude than an email app, but even then, let’s stop assuming users understand your visual language just because you do.)
I want to see more UX designers who are “old” rather than some clever kid who lives on Behance. I also want to see more design that isn’t created by typical higher educated designers who think everyone should understand things they take for granted. The blue collar worker that works construction, the grandmother from Peru, the restaurant cook, or the literature professor — whatever. Usability should be clear and obvious. That’s really hard — but that’s the job.
One of the original genius aspects of iPad is that a toddler can immediately start using it. We need all usability to be in that vein.
> Witness the navigation system in Apple Maps in CarPlay. The system developers obviously wanted to display as much map as possible, as shown in Figure 3 a). This makes sense, but to do that they relied on the use of hidden controls. If I want to enter a destination or zoom in on the map, I have to know to touch the bottom left-hand portion of the map
What? You don't have to touch any specific portion of the map. You tap anywhere and it brings up those controls.
I think this article largely has a point, and most of it seems true, but to me these bits of untruth are unamusing at best.
I sort of disagree with this: once I’ve internalized the gestures, I really appreciate the lack of UI for them. It’s like vim and emacs: the sparse ui creates a steeper learning curve but becomes a feature once you’ve learned the tool
It’s one thing to learn a few gestures that work consistently across the platform. But every app tends to do its own thing, and even if you are a power user of the respective apps and learn their idiosyncrasies, it’s still annoying that they all work in slightly or sometimes drastically different ways, and that they aren’t consistent in terms of discoverability.
My point is that no one is a new user forever and so I think we need to come up with a better solution than UI taking up screen space for things people end up doing via shortcuts. Menus and command palettes are great for this because they are mostly invisible.
The other important thing is learning to fit into the conventions of the platform: for example, Cocoa apps on Mac all inherit a bunch of consistent behaviors.
I started out with gVim with menu and toolbars. I quickly removed toolbars and after a while longer menus, as I didn't need them any more, they had taught me—though I seem to recall temporarily setting guioptions+=m from time to time for a while longer, when I couldn’t remember a thing. I think I had also added some custom menu items.
Being a modal editor probably makes removing all persistent chrome more feasible.
The default should be the cluttered UI for new users, and the customization option should be to make the UI configurable by hiding things you won't ever touch because you use shortcut keys.
The other way around is yeah, hostile. But of course it looks sleek and minimalistic!
On the early iPhones, they had to figure out how to move icons around. Their answer was, hold one of the icons down until they all start wiggling, that means you've entered the "rearrange icons" mode... Geezus christ, how intuitive. Having a button on screen, which when pressed offers a description of the mode you've entered would be user-friendly, but I get the lack of appeal, for me it would feel so clunky and like it's UI design from the 80's.
If you drive a car, you've demonstrated being willing to spend time learning a tool to take advantage of something being more efficient (than walking).
There is a tradeoff between efficiency and learnability, in some cases learning the tool pays off.
Look at the image of 2.0. There is permanent screen space dedicated to:
- Open
- Print
- Save
- Cut
- Copy
- Paste
I'm guessing you know the shortcuts for these. You learned the tool.
But by taking up so much space, these are given the same visual hierarchy as the entirety of the word 'Wikimedia'!
>Configurable options are certainly a good approach for those that know the tool well, but the default state shouldn’t require “learning.”
In practice, IME, this just means there are combinatorially many more configurations of the software, and anything outside the default ends up clashing with the rest of the software and its development.
Simple guns jam all the time bro. Even 100 year old super simple designs jam.
All guns can jam. However a simpler design has less potential to jam.
If that's the case, why not simply delete all controls and shove them into a smartphone app?
Right, because it's fucking ridiculous to expect a driver to fumble through menus while driving.
It's cost, not competence. These days making a touch screen is easier and cheaper than manufacturing and assembling lots of little buttons and knobs.
> It allows UI designers to add nearly endless settings and controls where they were before limited by dash space
Except, they don't do it.
Just like your Windows PC is capable of drawing a raised or sunken 3D button, or a scrollbar, but they don't do it anymore.
I find it hard to believe it's cheaper to have all the cameras, chips, and other digital affordances rather than a small number of analog buttons and functions.
Both lane assist and backup cameras are mandatory safety systems for new cars in the EU. Same goes for things like tired driver detection and other stuff that was considered opulent luxury ten years ago.
With the land tanks we call SUVs today, I can imagine it wasn't hard for politicians to decide that mirrors are no longer enough to navigate a car backwards.
Still, you don't need touch screens. Lane assist can be a little indicator on a dashboard with a toggle somewhere if you want to turn it off, it doesn't need a menu. A backup camera can be a screen tucked away in the dash that's off unless you've put your car in reverse. We may need processing to happen somewhere, but it doesn't need to happen in a media console with a touch screen.
You can actually put a backup camera display in the rearview mirror. Back before rollover protection requirements, cars had quite amazing visibility. The best vehicle visibility I've had in the past 5 years was actually in a 1997 F-150. You'd think it's a big truck, but you could more or less see all around you, and it didn't have that crazy high front hood either.
Yeah my big old truck has basically no blind spot. I'm getting a new work vehicle soon and am going to need to retrain my brain hard.
> I would gladly gladly keep my AC, heat, hazards, blinkers, wipers, maybe a few other buttons and that's it. I don't need back cameras, lane assist, etc.
I would pay more for decent physical switches and knobs, but I would give up AC before the backup camera. Getting this was life changing. I also wish all cars had some kind of blind spot monitoring.
In some countries it's a legal requirement to have a backup camera, which means you need a screen to display it, and hardware to render it.
I have always thought they should put the display for the backup camera behind the driver and facing the front of the car, so that it would be easily visible to a driver looking out the rear and rear-side windows while backing up.
That could be an accessibility issue for people with neck problems. I can see why general legislation put it at the front with everything else.
which countries?
For new cars, US/Canada since 2018, Japan/EU since 2022
https://wikipedia.org/wiki/Backup_camera
> and hardware to render it
Not really, you legally could have a video camera and a CRT as a backup camera. I wouldn't say that anything is rendered in an analog video system.
The last thing I'd want in an accident is a little CRT exploding glass shards next to me.
You're not thinking about the manufacturing part. Buttons and knobs have to get assembled and physically put into every car. Software just needs to be written once.
> I find it hard to believe it's cheaper to have all the cameras, chips, and other digital affordances rather than a small number of analog buttons and functions.
You should check how SW and HW are tested in the car.
A typical requirement is: the SW must drive a motor if the voltage reaches 5 V. A typical SW test is: increase the voltage to 5 V, see that the motor moves.
Now what happens at 20 V is left as an exercise for the user.
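As a caricature of that style of testing (names and numbers are made up, mirroring the example above):

```typescript
// The requirement, as literally stated: the motor runs once the voltage reaches 5 V.
function motorShouldRun(voltage: number): boolean {
  return voltage >= 5;
}

// The "typical" test checks exactly the value named in the requirement...
console.assert(motorShouldRun(5) === true, "motor runs at 5 V");

// ...and nothing else. Behaviour at 20 V (over-voltage), at 4.99 V, or with a
// failed voltage sensor is simply never exercised.
```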
Knobs (plus mechanical circuitry) that can survive 100k miles of use are expensive.
One of the reasons I purchased a (newer but used Mazda) was because it still has buttons and knobs right next to the driver's right hand in the center console. I can operate parts of the car without even having to look.
(another reason was because it still has a geared transmission instead of a CVT, but that's a separate discussion)
It's a "look ma, I can change the air conditioning controls without looking" moment.
A friend got a Tesla on lease and it was quite cheap, 250/month. I've been driven in that car a few times and was able to study the driver using the controls, and it's hideously badly designed: the driver has to take their eyes off the road and deep-dive into menus. Plus that slapped-on tablet in the middle is busy to look at, tiring and distracting. The 3D view of other cars/pedestrians is a gimmick, or at least it looks like one to me. Does anyone actually like that? Perhaps I'm outdated or something, but I wouldn't accept such bad UX in a car.
The 3D view is a marketing gimmick and maybe something to show off to your passengers. You've got a massive screen, so you can't just leave it empty, or the owners would realize it's a gimmick.
In practice many drivers seem to be dealing fine with the touch screen because they've stopped paying attention to the road, trusting their car to keep distance and pay attention for them. Plus, most of the touch screen controls aren't strictly necessary while driving, they mostly control luxury features that you could set up after pulling over.
luxury features like... the windshield wiper
https://www.bbc.com/news/technology-53666222
My newer PHEV saves me a large pile of money every month in gas. Not as much as the payments, but closer than you would think.
At an average 14K miles per year and a guessed 25 mpg, that’s 560 gallons/year. At $4/gallon (guessed and well over the US average), that’s $2240/yr.
If you exclusively charged with completely free electricity and still managed to drive that 14K miles in a year, you’d save $187/mo.
If it moved you from 25mpg to 40mpge, it’d save you a little over $70/mo.
Our two cars are a BEV and a hybrid, so I’m no battery-hater, but neither is cheaper than a reasonable gas-only equivalent would be.
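Spelled out with the same assumed figures (14K miles/year, $4/gallon):

```latex
\frac{14{,}000\ \text{mi/yr}}{25\ \text{mpg}} = 560\ \text{gal/yr}, \qquad 560 \times \$4 = \$2{,}240/\text{yr} \approx \$187/\text{mo}

\frac{14{,}000\ \text{mi/yr}}{40\ \text{mpge}} = 350\ \text{gal/yr}, \qquad (560 - 350) \times \$4 = \$840/\text{yr} \approx \$70/\text{mo}
```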
> It's cost, not competence.
This implies it's a consequential cost. Building with tactile controls would take the (already considerable) purchase price and boost that high enough to impact sales.
If tactile controls were a meaningful cost difference, then budget cars with tactile controls shouldn't be common - in any market.
Are controls uniquely important, though? There are hundreds of things in a car that could be made better (more durable, longer lasting, better looking) for just $10 to $100 extra a piece. But it adds up.
It's not just cost, though. The reality is that consumers like the futuristic look, in theory (i.e., at the time of the purchase). Knobs look dated. It's the same reason why ridiculously glossy laptop screens were commonplace. They weren't cheaper to make, they just looked cool.
> knobs look dated
Not all. Knobs designed with dated designs and/or materials look dated. There's a million ways to make a knob, just use a modern or novel one.
Thank you; this ridiculous non-argument also pollutes discussion on GUI/UX. "Skeuomorphism looks outdated"--no, skeuomorphism that looks like old UIs looks dated, by definition, but that does not mean it is the only way to design tactile UIs.
It is the job (and in my opinion, an exciting challenge) for the UI designers to come up with a modern looking tactile design based on the principles of skeuomorphism, possibly amalgamated with the results of newer HCI research.
Yes, controls are uniquely important.
This is often repeated but I don't believe it for a second. I have a '90s vehicle which is based on 60s/70s technology. A switch for a fog light is like £10 on eBay for a replacement, and I know I am not paying anywhere near cost, i.e. I am being ripped off.
You think you’re being ripped off for a £10 fog light switch on a ~30 year old car?
That sounds like an incredible bargain to me.
Why do you think you should pay near cost? What’s the incentive for all the people who had to make, test, box, pack, move, finance, unpack, inventory, pick, box, label, and send it to you? I can’t imagine the price between £10 and free that you’d think wasn’t a rip-off for a part that probably sells well under a 100 units per year worldwide.
I shouldn't have worded it that way. I wanted to stress that the £10 would have been way more than the price per unit if there was a bulk order.
As for it being a bit of a rip off yes it was a little bit. I found the same part for cheaper literally the next day.
In any event, it isn't the important part of what I was trying to communicate.
I'm pretty sure that simple switch is something directly in the circuit for the fog light, and there is a dedicated wire between the fog light, the switch, and the fuse box. And if its an old Jag, those wires flake out and have to be redone at great expense.
Compare this to the databus that is used in today's cars, it really isn't even a fair comparison on cost (you don't have to have 100 wires running through different places in your car, just one bus to 100 things and signal is separated from power).
> I'm pretty sure that simple switch is something directly in the circuit for the fog light, and there is a dedicated wire between the fog light, the switch, and the fuse box. And if its an old Jag, those wires flake out and have to be redone at great expense.
I don't really want to get into a big debate about this as I haven't worked on Jags, but I don't believe that replacing parts of the loom would be that expensive. Remaking an entire loom, I will admit, would be expensive, as that would be a custom job with a lot of labour.
> Compare this to the databus that is used in today's cars, it really isn't even a fair comparison on cost (you don't have to have 100 wires running through different places in your car, just one bus to 100 things and signal is separated from power).
Ok fine. But the discussion was button vs touch screens and there is nothing preventing buttons being used with the newer databus design. I am pretty sure older BMWs, Mercs etc worked this way.
They can be used, they just need more complexity than a simple switch that completes a circuit: they now need tiny CPUs so they can signal the bus correctly. The switch must broadcast "turn thing on" when it is set to on, and "turn thing off" when it is set to off, all over whatever serial protocol is being used (including back-off and retry, etc.). So your input devices need to be little computers so that you can use one bus for everything; now you can see where one touch screen begins to save money.
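To make that concrete, here is a rough sketch of what such a switch node does. It's written in Python with the python-can library purely for readability (real switch modules run compiled firmware on a tiny microcontroller), and the frame ID and payload layout are invented for the example:

    # Hypothetical fog-light switch node: instead of switching the lamp circuit
    # directly, it broadcasts its state on the CAN bus for a body controller to act on.
    import time
    import can  # python-can

    FOG_LIGHT_FRAME_ID = 0x3A0  # invented arbitration ID, not from any real car

    def read_switch_contact() -> bool:
        """Stub standing in for reading the physical contact (a GPIO pin on real hardware)."""
        return False

    bus = can.interface.Bus(channel="can0", bustype="socketcan")  # assumes a socketcan interface
    last_state = None

    while True:
        state = read_switch_contact()
        if state != last_state:
            # Broadcast the new state; the CAN controller handles arbitration and
            # retransmits automatically if a higher-priority frame wins the bus.
            bus.send(can.Message(arbitration_id=FOG_LIGHT_FRAME_ID,
                                 data=[1 if state else 0],
                                 is_extended_id=False))
            last_state = state
        time.sleep(0.01)  # poll/debounce interval

The point is just that even the dumbest input now carries that kind of logic, which is exactly why consolidating everything onto one touch screen looks cheap on paper.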
Ah, the classic "a keyboard has a CPU for each key" argument
I don't believe what you are describing is necessary. I am pretty sure you could have a module where the switches are wired normally into something and that communicates with the main bus. I am pretty sure this is how a lot of cars already work from watching people work on more modern vehicles.
In any event. I've never heard a good explanation of why I need all of this to turn the lights on or off in a car, when much simpler systems worked perfectly fine.
Many of the low-speed switches are connected to a single controller that then interfaces over LIN or CAN to the car.
Reducing the copper content of cars and reducing the size of the wiring bundles that have to pass through grommets to doors, in body channels, etc. was the main driver. Offering greater interconnectedness and (eventually) reliability was a nice side effect.
It used to be a pain in the ass to get the parking lights to flash some kind of feedback for remote locking, remote start, etc. Now, it’s two signals on the CAN bus.
OK, thanks for the explanation.
> Offering greater interconnectedness and (eventually) reliability was a nice side effect.
I am not sure about that. You still suffer from electronic problems due to corrosion around the plugs, duff sockets and dodgy earths as the vehicle ages.
Depending on age, it’s more likely that the physical switch drives an electric relay and the relay switches the actual fog lamp current, which could be 3-5 amps per lamp, letting the manufacturer use a small-gauge trigger wire to run to/from the dash and thicker wire only for the shorter high-current path.
Not just that: wiring it into the single control bus is easier, since otherwise you are stuck doing an analog-to-digital conversion anyway. Even in new cars that have separate controls, these are mostly capacitive buttons or dials that simply send a fixed signal on the bus (so your dial will spin all the way around, because it isn't actually the volume control on the radio, just a "turn the volume up or down" control).
Most of the cost savings is in having a single bus to wire up through the car, then everything needs a little computer in it to send on that bus...so a screen wins out.
Most of the seemingly analog controls on cars switched to digital in the 1990s. The digital control bus saved several hundred dollars per car. It still looked analog until around 2010, when touch screens started taking over.
Probably around 2010 the price of the touch screen began to outcompete the price of the analog controls on the bus.
I’m not sure if this is actually true for the volumes produced by the big carmakers. You’d very quickly get to volumes that make the largest component the material cost.
The good news over here is that the European NCAP is now mandating they put a bunch of those physical controls back if they want a 5-star safety rating. Would not be sorry to say good bye to the awful touchscreen UI in my car...
Now they just need to fix their testing of pedestrian collisions with SUVs and I can go back to praising EuroNCAP.
Don't forget the headlight regulations desperately need an update. RAC survey said 89% think some are too bright, 30% think *most* are too bright. Insane.
I had similar discussions with my father who started his career in the 80s as an engineer, and has been a CEO for the last ~15 years. The discussion was a bit broader, about engineering and quality/usability in everything.
His perspective was that companies were "run" by engineers first, then a few decades later by managers, and then by marketing.
Who knows what's next, maybe nothing (as in all decisions are accidentally made by AI because everyone at all levels just asks AI). Could be better than our current marketing-driven universe.
The free market does not optimize for quality.
It's wild how we've come full circle. It's baffling how something so simple and effective has been abandoned in favor of glossy screens and guesswork.
I commented on here about the surge in US car mfg recruiters contacting me about working on their new car systems. The HN opinion seemed to be that they are complete disasters and to stay away if I value my sanity.
Has this cost risen?
Why is this so expensive it can't even be put into a premium car today when it used to be ubiquitous in even the cheapest hardware a few decades ago?
Because most companies are ruthless penny-pinchers and over-optimizers. They're willing to burn dollars to save pennies. The reason is that they're trading things they can measure for things they can't.
Basically, if you remove the knobs you can save, say, 10 dollars on every vehicle. In return, you have made your car less attractive and will lose a small number of sales. You will never, ever be able to quantify that loss in sales. So, on paper, you've saved money for "free".
Typically, opportunity cost is impossible or close to impossible to measure. What these companies think they are doing is minimizing cost. Often, they are just maximizing opportunity cost of various decisions. Everyone is trying to subtly cut quality over time.
Going from A quality to B quality is pretty safe, it's likely close to zero consumers will notice. But then you say "well we went from A to B and nobody noticed, so nobody will notice B to C!". So you do it again. Then over and over. And, eventually, you go from a brand known for quality to cheap bargain-bin garbage. And it happened so slowly that leadership is left scratching their heads. Sometimes the company then implodes spontaneously, other times it slowly rots and loses to competitors. It's so common it feels almost inevitable.
Really, most companies don't have to do much to stay successful. For a lot of markets, they just have to keep doing what they're doing. Ah, but the taste of cost-cutting is much too seductive. They do not understand what they are risking.
> Basically, if you remove the knobs you can save, say, 10 dollars on every vehicle. In return, you have made your car less attractive and will lose a small number of sales.
Is there evidence that fancy-looking screens don't show better in the showroom than legacy-looking knobs and buttons? In actual use knobs may be better, but I am not sure they sell better.
No, there isn't. Like I said, the opportunity cost is invisible and impossible to measure.
All I know is personal anecdotes from people I talk to. I know a couple people who have a Mercedes EQS - they've all said the same thing: the big screen is cool for a little bit, then it's just annoying.
I think it will take a generation or two of cars before some consumers start holding back on purchases because of this. For now, they don't know better. But I'm sure after owning a car and being pissed off at it, they'll think a little bit harder on their next purchase. I think consumers are highly impacted by these types of things - small cuts that aren't bad, per se, but are annoying. Consumers are emotional, they hold grudges, they get pissed off.
I sort of feel the same way about fix-a-flat kits. Once people actually have the experience of trying to use a fix-a-flat kit, they'll start asking car salesmen if the car comes with a spare...
And not every consumer has to feel the pain to know. Many, like myself, have seen others suffer and have made up their minds not to buy such a car.
The problem isn't just that. These screens are actual safety hazards. Whatever you display in a showroom doesn't justify this: https://grumpy.website/1665
It was always expensive. Car makers need their cars to last (the used market is important, since few can afford a new car to scrap in 3 years), so they are not buying the cheap switches. A Cherry MX switch will run near a dollar each in quantity. Then you put the cap on it, plus wires, and it adds up fast per switch. A touch screen is $75 in quantity and replaces many switches.
Because cars have long design times and a big touchscreen has generally been seen as more premium than a bunch of push buttons and dials. I think the tide has turned somewhat, but it’s going to take some time.
Because being more expensive than a competitor for something most consumers don’t care about is a hit to sales.
No, but every cost cut is additional profit
> designing and manufacturing custom molds for each knob and function ... dash does have a cost.
Manufacturing car components already involves designing and custom molds, does it not? Compared to the final purchase price, the cost of adding knobs to that stack seems inconsequential.
Yes, but the touch screen is one large mold, while the buttons need a custom mold for each button. The touch screen also has large flat areas, which reduces cost since it avoids more expensive rounded shapes.
Yeah, seems like a really weird cope to defend the automakers.
Your average transmission will have an order of magnitude more parts that also needed to be designed and produced with much higher precision.
The interior knob controls are just a rounding error in the cost structure.
Power abhors a vacuum. Choosing to not change is viewed as failure to innovate, even if the design suffers. Planned obsolescence is as old as the concept of yearly production models themselves, and likely older, going back to replacement parts manufacturing and standardized production overtaking piecework.
It’s a race to the bottom to be the least enshittified versus your market competitors. Usability takes a backseat to porcine beauty productization.
I think an indicator that something is going wrong in UI design is what I'd call the "the food is in the fridge" anti pattern that seems to pop up lately.
Essentially it's UI text in random places telling you what steps you should take to activate some other feature, instead of - you know - just providing a button to activate that feature.
A variant of this is buttons or menu items that don't do anything else than move focus onto another button, or open a menu in a different location, so you can then click on that one.
Increasingly seeing this in Microsoft products, especially in VS Code.
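A contrived illustration of that second variant, as a toy tkinter sketch (the widget names are made up):

    import tkinter as tk

    root = tk.Tk()

    # The control that actually does the thing.
    open_settings = tk.Button(root, text="Open settings...",
                              command=lambda: print("settings opened"))

    # The anti-pattern: an item that doesn't do the thing, it merely points you
    # at the other control and leaves the last click to the user.
    settings_pointer = tk.Button(root, text="Settings",
                                 command=lambda: open_settings.focus_set())

    settings_pointer.pack(padx=10, pady=5)
    open_settings.pack(padx=10, pady=5)
    root.mainloop()

The honest fix is obvious: have the first item invoke the action itself instead of narrating where it lives.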
I guarantee that you will LOVE the user interface design and menu system in World Quester 2:
Game Helpin' Squad: World Quester 2
https://www.youtube.com/watch?v=0Gy9hJauXns
Every time I'm using Cursor and select "Cursor => Settings => Cursor Settings" I giggle and think of World Quester 2.
I love World Quester 2 so much, I implemented its most innovative feature, the "Space Inventory", in the WASM version of Micropolis (SimCity):
https://micropolisweb.com/
WARNING: DO NOT PRESS THE SPACE BAR!!!! (And if you accidentally do, then definitely DO NOT PRESS IT AGAIN!!!! Or AGAIN!!! Or AGAIN!!!)
SimCity Micropolis Tile Sets Space Inventory Cellular Automata To Jerry Martin's Chill Resolve:
https://www.youtube.com/watch?v=319i7slXcbI
This is amazing! :D Thanks a lot!
I get why you would hide interface elements to use the screen real estate for something else.
I have no idea why some interfaces hide elements and leave the space they'd taken up unused.
IntelliJ does this, for example, with the icons above the project tree. There is this little target disc that moves the selection in the project tree to the file currently open in the active editor tab. You have to know the secret spot on the screen where it is hidden and if you move your mouse pointer to the void there, it magically appears.
Why? What is the rationale behind going out of your way to implement something like this?
Some people complain about "visual clutter". Too many stimuli in the field of view assault their attention, and ruin their concentration. Such people want everything that's not in the focus of attention be gone, or at least be inconspicuous.
Some people are like airliner pilots. They enjoy every indicator to be readily visible, and every control to be easily within reach. They can effortlessly switch their focus.
Of course, there is a full range between these extremes.
The default IDE configuration has to do a balancing act, trying to appeal to very different tastes. It's inevitably a compromise.
Some tools have explicit switches: "no distractions mode", "expert mode", etc, which offer pre-configured levels of detail.
This is why we used to have customizable toolbars, and relevant actions still accessible via context menu and/or main menu, where the respective keyboard shortcuts were also listed. No need to compromise. Just make it customizable using a consistent framework.
This is a good idea. In basic/beginner mode, every control should be readily visible and discoverable.
In practice, "beginner mode" just makes inaccessible all controls deemed by the designer to be outside the realm of basic use cases.
Intellij on Windows also buries the top menus into a hamburger icon and leaves the entire area they occupied empty! Thankfully there is an option to reverse it deep in the settings, but having it be the default is absolutely baffling.
Microsoft pulls the same BS. Look at Edge. Absolute mess. No menu. No title bar. What application am I even using?
This stupidity seems to have spread across Windows. No title bars or menus... now you can't tell what application a Window belongs to.
And you can't even bring all of an application's windows to the foreground... Microsoft makes you hover over it in the task bar and choose between indiscernible thumbnails, one at a time. WTF? If you have two Explorer windows open to copy stuff, then switch to other apps to work during the copy... you can't give focus back to Explorer and see the two windows again. You have to hover, click on a thumbnail. Now go back and hover, and click on a thumbnail... hopefully not the same one, because of course you can't tell WTF the difference between two lists of files is in a thumbnail.
And Word... the Word UI is now a clinic on abject usability failure. They have a menu bar... except WAIT! Microsoft and some users claim that those are TABS... except that it's just a row of words, looking exactly like a menu.
So now there's NO menu and no actual tabs... just a row of words. And if you go under the File "menu" (yes, File), there are a bunch of VIEW settings. And in there you can add and remove these so-called "tabs," and when you do remove one, the functionality disappears from the entire application. You're not just customizing the toolbar; you're actually disabling entire swaths of features from the application.
It's an absolute shitshow of grotesque incompetence, in a once-great product. No amount of derision for this steaming pile is too much.
> No title bars or menus... now you can't tell what application a Window belongs to.
I hate when applications stuff other controls (like browser tabs) into the title bar --- leaving you with no place to grab and move the window.
The irony is that we had title bars when monitors were only 640x480, yet now that they have multiplied many times in resolution, and become much bigger, UIs are somehow using the excuse of "saving space" to remove title bars and introducing even more useless whitespace.
We don't do desktop computing like we did then. Most of what was separate applications then are now done in-browser: it's like running a virtual machine inside your OS.
I don't need to know that what I'm using is Edge/Chrome/Firefox any more than I need to know that what I'm using is Windows/etc.
This argument would make more sense if it wasn't in a thread talking about all the other apps besides the browser that do this.
My point is that there rarely are other 'apps' in use.
Amen. And then there's the idiotic peek-a-boo UI that hides controls until you accidentally roll over them with the cursor... not saving any space at all.
This isn't just a Windows thing. Look at Gnome for another example. macOS of late also likes to take over the title bar for random reasons, although there at least the menu bar is still present regardless.
I've always considered the Mac's shared menu bar a GUI 1.0 mistake that should have been fixed in the transition to OS X. Forcing all applications to share a single menu that's glued to the top of the screen, and doesn't switch back to the previous application when you minimize the one you're working with, is dumb.
Windows and Unix GUIs had it right: Put an application's menu where it belongs, on the application's main frame.
But now on Windows... NO menu? Oh wait, no... partial menus buried under hamburger buttons in arbitrary locations, and then others buried under other buttons.
...The Mac menu bar is what it is for a very good reason. Being at the top of the screen makes it an infinitely-tall target.
All you have to do to get to it is move your mouse up until you can't move it up any more.
This remains a very valuable aspect to it no matter what changes in the vogue of UIs have come and gone since.
The fact that you think that you've "minimized the application" when you minimized a window just shows that you are operating on a different (not better, not worse, just different) philosophy of how applications work than the macOS designers are.
This argument never made much sense to me, although I do subscribe to Fitts' Law. With desktop monitor sizes since 20+ years ago, the distance you have to travel, together with the visual disconnect between application and the menu bar, negates the easier targetability. And with smaller screen sizes, you would generally maximize the application window anyway, resulting in the same targetability.
The actual historical rationale for the top menu bar was different, as explained by Bill Atkinson in this video: https://news.ycombinator.com/item?id=44338182. The problem was that due to the small screen size, non-maximized windows often weren't wide enough to show all menus, and there often wasn't enough space vertically below the window's menu bar to show all menu items. That's why they moved the menus to the top of the screen, so that there always was enough space, and despite the drawback, as Atkinson notes, of having to move the mouse all the way to the top. This drawback was significant enough that it made them implement mouse pointer acceleration to compensate.
So targetability wasn't the motivation at all, that is a retconned explanation. And the actual motivation doesn't apply anymore on today's large and high-resolution screens.
> With desktop monitor sizes since 20+ years ago, the distance you have to travel, together with the visual disconnect between application and the menu bar, negates the easier targetability.
Try it on a Mac; the way its mouse acceleration works makes it really, really easy to just flick either a mouse or a finger on a trackpad and get all the way across the screen.
I’m not saying it’s necessarily harder to reach a menu bar at the top of the screen, given suitable mouse acceleration. But you also have to move the mouse pointer back to whatever you are doing in the application window, and moving to the top menu bar is not that much (if at all) easier, not enough to really justify the cognitive and visual separation. If that were the case, then as many application controls as possible should be moved to the border of the screen.
At least on Linux you have 100 choices of window manager (and 100 themes of KDE). 101 if you roll up your sleeves and roll your own.
Turn on "never combine taskbar labels" in the taskbar settings.
How does turning that off help? Does it let you bring an entire application to the foreground (all of its windows) at once?
For your complaints about the taskbar: yes, I too find it incredibly annoying that they compress all the application windows into tiny thumbnails, but there is a setting to expand thumbnails to include titles and separate them if there are multiple windows, which is what I use. I don't currently have access to my Windows machine or I'd help you out with the exact setting, but it's there somewhere in the taskbar settings.
Thanks very much. But it doesn't sound like that would help. First, I doubt a giant network path would fit in the title of a thumbnail.
Second, I want to give focus to the entire application at once. ALL of its windows need to be brought to the foreground at once.
> I get why you would hide interface elements to use the screen real estate for something else.
Except that screens on phones, tablets, laptops and desktops are larger than ever. Consider the original Macintosh from 1984 – large, visible controls took up a significant portion of its 9" display (smaller than a 10" iPad, monochrome, and low resolution.) Arguably this was partially due to users being unfamiliar with graphical interfaces, but Apple still chose to sacrifice precious and very limited resources (screen real estate, compute, memory, etc.) on a tiny, drastically underpowered (by modern standards) system in the 1980s for interface clarity, visibility, and discoverability. And once displays got larger the real estate costs became negligible.
I really disagree.
An IDE, and the browser example given below, are tools I'll spend thousands of hours using in my life. The discoverability is only important for a small percentage of that, while viewing the content is important for all of it.
This is exactly when I will have the 'knowledge in the head'.
I agree, I know those buttons are there and how to activate them, but I still occasionally stare blankly at the screen wondering where the buttons are before remembering I need to hover them
> There is this little target disc that moves the selection in the project tree to the file currently open in the active editor tab.
Don’t quote me on this, but I vaguely remember there being an option to toggle hiding it, if not in the settings it is in a context menu on the panel.
That thing is a massive time saver, and I agree—keeping it hidden means most people never learn it exists.
In some apps I don’t know why more controls are not hidden, or at least why there's no option to hide them. Looking at you, Google Maps.
I think the article overlooks that it is not really an accident that apps and operating systems are hiding all their user interface affordances. It's an antipattern to create lock in, and it tends to occur once a piece of software has reached what they consider saturation point in terms of growth where keeping existing users in is more important than attracting new ones. It so turns out that the vast majority of software we use is created by companies in exactly that position - Google, Apple, Microsoft, Meta etc.
It might seem counterintuitive that hiding your interface stops your users leaving. But it does, because it changes your basis of assumptions about what a device is and your relationship with it. It's not something you "use", but something you "know". They want you to feel inherently linked to it at an intuitive level, such that leaving their ecosystem is like losing a part of yourself. Once you've been through the experience of discovering "wow, you have to swipe up from a corner in a totally unpredictable way to do an essential task on a phone", and you build into your world of assumptions that this is how phones are, the thought of moving to a new type of phone and learning all that again is terrifying. It's no surprise at all that all the major software vendors are doing this.
I think you picked a hypothesis and assumed it was true and ran with it.
Consider that all the following are true (despite their contradictions):
- "Bloated busy interface" is a common complaint of some of Google, Apple, Microsoft, and Meta. people here share a blank vscode canvas and complain about how busy the interface is compared to their 0-interface vim setup.
- flat design and minimalism are/were in fashion (have been for few years now).
- /r/unixporn and most linux people online who "rice" their linux distros do so by hiding all controls from apps because minimalism is in fashion
- Have you tried GNOME recently?
A minimal interface where most controls are hidden is a certain look that some people prefer. Plenty of people prefer to "hide the noise", and if they need something, they are perfectly capable of looking it up. It's not like digging in manuals is the only option.
If I had to pin most of this on anything I’d pick two:
- Dribbble-driven development, where the goal is to make apps look good in screenshots with little bearing to their practical usability
- The massive influx of designers from other disciplines (print, etc) into UI design, who are great at making things look nice but don’t carry many of the skills necessary to design effective UIs
Being a good UI designer is seeking out existing usability research, conducting new research to fill in the gaps, and understanding the limits of the target platform, on top of having a good footing in the fundamentals. The role is part artist, part scientist, and part engineer. It’s knowing when to put ego aside and admit that the beautiful design you just came up with isn’t usable enough to ship. It’s not just a sense for aesthetics and the ability to wield Photoshop or Figma or whatever well.
This is not what hiring selects for, though, and that’s reflected in the precipitous fall in quality of software design in the past ~15 years.
> Dribbble-driven development,
I've been calling modern designers "dribbble-raised" for a while now precisely for these reasons. Glad to see I'm not the only one.
I agree with you it's very fashion driven and hence you see it in all kinds of places outside the core drivers of it. But my argument is, those fashions themselves are driven by the major players deciding to do this for less than honorable reasons.
I do think it's likely more passive than active. People at Google aren't deviously plotting to hide buttons from the user. But what is happening is that when these designs get reviewed, nobody is pushing back - when someone says "but how will the user know to do that?", it doesn't get listened to. Instead the people responsible are signing off on it saying, "it's OK, they will just learn that, once they get to know it, then it will be OK". It's all passive, but it's based on an implicit assumption that users are staying around, optimising for the ones that do, and making it harder for the ones that want to come and go or stop in temporarily.
Once three or four big companies start doing it, everybody else cargo cults it and before you know it, it looks like fashion and GNOME is doing it too.
Is this the same GNOME? https://wiki.gnome.org/Design(2f)Studies.html
Somehow in your theory you omit the fact that people can learn how to use a new interface? It’s not like you’re entitled to a UI that never adds functionality anymore, ever. Sure, vendors ought to provide onboarding tutorials and documentation and such, but using that material is on the user.
UIs tend to have a universality with how people structure their environments. Minimalism is super hot outside of software design too. Millennial Gray is a cliche for a reason. Frutiger Aero wasn't just limited to technology. JLo's debut single is pretty cool about this aesthetic https://www.youtube.com/watch?v=lYfkl-HXfuU
> I think you picked a hypothesis and assumed it was true and ran with it.
The tone of your post, and especially this phrase, is inappropriate imo. The GP's comment is plausible. You're welcome to make a counter-argument, but you seem to be claiming without evidence that there was no thinking behind their post.
> Have you tried GNOME recently?
God, no. I switched to xfce when GNOME decided that they needed to compete with Unity by copying whatever it did, no matter how loudly their entire user base complained.
Why would I try GNOME again?
> Why would I try GNOME again?
It is widely used, the default DE in many installs, and it can be handy to be familiar with, for starters.
Turning learned friction into a form of psychological lock-in is a dark UX pattern if there ever was one
I see nonprofit OSS projects doing it too, and wonder if they're just trendchasing without thinking. Firefox's aggravating redesigns fall under this category, as does Gnome and the like.
It's a double edged sword though in that it can discourage users from trying their interface.
Apple's interface shits me because it's all from that one button, and I can never remember how to get to settings because I use that interface so infrequently, so Android feels more natural. I.e. Android has done its lock-in job, but Apple has done itself a disservice.
(Not entirely fair, I also dislike Apple for all the other same old argument reasons).
Which button do you mean?
Yeah, that's how old my Apple knowledge is.
Another comment elsewhere on this page informed me that the universal button no longer exists.
While this makes several cars a terrible choice for rentals, I do wish car owners would take maybe half an hour of their day after spending a couple thousand to read through the manual that came with their car. The manual doesn't just tell you how to change the radio station, it also contains a lot of safety information and instructions for how to act when something goes wrong.
How can I trust a driver to take things like safe maximum load into account when they don't even know they can open their car if their battery ever goes flat?
This also happened to me in a rental. We drove it off the lot to our hotel a half-hour away before we discovered the remote was busted, with all of our possessions locked inside.
I did know that there must be a physical key (unless Tesla?), and the only way I found the keyhole was because a previous renter had scratched the doorknob to shit trying to access the very same keyhole.
I'm yet to drive a car with a doorknob but it sounds awesome.
All of which you should know, and can be easily found with a quick google. The moment we got a car with no physical key my first question was “what’s the backup option and how does it work”.
Basic knowledge about the things you own isn’t hard. My god there is a lot of old man shakes fist at cloud in here.
This is such an Apple user take. "Yes you can do that, but you're not supposed to so it's hidden behind so many menus that you can't find it except by accident and since I use it, I say sowwy to my phone every night before I go to sleep to make sure Apple doesn't get maddy mad at me"
Knowing how to get around a stupid design doesn't make it any less stupid.
The opposite take would be that there’s no need to shove something in the users face that they need less than once per year, but offer a more elaborate way to get there just in case.
How is a clear "key inside" label on the fob "shoving something in the user's face"? How is a visible keyhole, or at least one not buried behind a snap-off cover, "shoving something in the user's face"?
It’s right in front of it? There’s a reason we hide unused sockets behind lids.
This is what happens when "designers" who are nothing more than artists take control of UI decisions. They want things to look "clean" at the expense of discoverability and forget that affordances make people learn.
Contrast this with something like an airplane cockpit, which while full of controls and assuming expert knowledge, still has them all labeled.
The "clean aesthetic at all costs" mindset has definitely gone too far
I still don't understand why desktop OSes now have mobile style taskbar icons that are twice as large as they need to be, grouped together so you need to hover to see which instance of what is what, and then click again to switch to the one you actually want if you can even figure out what it even is with just a thumbnail without any labels. All terminal windows look the fucking same!
Win NT-Vista style, aka the way web browsers show tabs with an icon + label is peak desktop UX for context switching and nobody can convince me otherwise. GNOME can't even render taskbars that way.
Most people coming into the workforce today have grown up on iOS and Android. To them, the phone is the default, the computer used to be what grownups use to do work. Watching them start using computers is very similar to those videos from the 80s and 90s of office workers using a computer for the first time.
The appification of UI is a necessary evil if you want people in their mid twenties or lower to use your OS. The world is moving to mobile-first, and UI is following suit, even in places it doesn't make sense.
Give a kid a UI from the 90s, styled after industrial control panels, and they'll be as confused as you are with touch screen designs. Back in the day, stereos used to provide radio buttons and sliders for tuning, but those devices aren't used anymore. I don't remember the last device I've used that had a physical toggle button, for instance.
UI is moving away from replicating the stereos from the 80s to replicating the electronics young people are actually using. That includes adding mobile paradigms in places that don't necessarily make sense, just like weird stereo controls were all over computers for no good reason.
If you prefer the traditional UX, you can set things up the way you want. Classic Shell will get you your NT-Vista task bar. Gnome Shell has a whole bunch of task bar options. The old approach may no longer be the default one, but it's still an option for those that want it.
Maybe you're right, but I mean I'm in my late twenties and I grew up on Win 95 and XP mainly, smartphones only started to become a thing in early high school. You'd probably have to look under like 16 to really find those who haven't ever seen an interface designed for the mouse.
> Classic Shell, Gnome Shell task bar options
Yeah mods, hacks, and extensions don't really count for either. The more time passes the more this nonsense becomes mandatory. Luckily KDE still exists for now and has it all native.
Next you’ll be complaining that the taps in your house don’t have a label telling you that they need to be twisted and in what direction.
Phones aren’t 747’s, and guess what every normal person that goes into an airplane cockpit who isn’t a pilot is so overwhelmed by all the controls they wouldn’t know what anything did.
Interface designers know what they’re doing. They know what’s intuitive and what isn’t, and they’ve refined down to an art how to contain a complicated feature set in a relatively simple form factor.
The irony of people here with no design training that they could do a better job than any “so called designer” shows incredible levels of egotism and disrespect to a mature field of study.
Also demonstrably, people use their phones really quite well with very little training, that’s a modern miracle.
Stop shaking your fist at a cloud.
> Interface designers know what they’re doing. They know what’s intuitive and what isn’t
No they don't. The article refutes your points entirely, as does everyone else here who has been confounded by puzzling interfaces.
"They know what’s intuitive and what isn’t"
... and then they ignore it? It triggers me when someone calls hidden swipe gestures intuitive. It's the opposite of affordance, which these designers should be familiar with if they are worth their salaries.
I don't think I can do better, I just feel betrayed.
Very slightly unrelated, but this trend is one of the reasons I went Android after the iPhone removed the home button. I think it became meaningfully harder to explain interactions to older users in my family and just when they got the hang of "force touch" it also went away.
First thing I do on new Pixel phones is enable 3 button navigation, but lately that's also falling out of favor in UI terms, with apps assuming bottom navigation bar and not accounting for the larger spacing of 3 button nav and putting content or text behind it.
Similarly the disappearing menu items in common software.
Take a simple example: Open a read-only file in MS Word. There is no option to save? Where's it gone? Why can I edit but not save the file?
A much better user experience would be to enable and not hide the Save option. When the user tries to save, tell them "I cannot save this file because of blah" and then tell them what they can do to fix it.
I half agree. The save option should be disabled, since there is something very frustrating about enabling a control that cannot be used. However, there could be a label (or a warning button that displays such a label) explaining why the option is disabled.
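Roughly something like this (a minimal tkinter sketch, assuming the read-only state is already known elsewhere):

    import tkinter as tk

    root = tk.Tk()
    file_is_read_only = True  # assumption: determined by whatever opened the file

    save_btn = tk.Button(root, text="Save")
    reason = tk.Label(root, fg="gray")

    if file_is_read_only:
        save_btn.config(state=tk.DISABLED)  # still visible, clearly unavailable
        reason.config(text="Can't save: the file is read-only. Use 'Save As' instead.")
    else:
        save_btn.config(state=tk.NORMAL)

    save_btn.pack(padx=10, pady=(10, 0))
    reason.pack(padx=10, pady=(0, 10))
    root.mainloop()

The control stays discoverable, the state is communicated, and nobody has to hunt for a Save option that has silently vanished.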
The Mac HIG specifies exactly this: don’t hide temporarily unavailable options, disable them. Disabling communicates to the user the relationships between data, state, etc and adds discoverability.
This has been the norm on every desktop. But lately I don't think app designers know what "HIG" even is. Everything is web (or tries real hard to look like it even when it's native apps...), which is to say, everything is broken.
I had the same story, which is why the last phone I got for my grandma was an iPhone SE (which still has the home button). This way, no matter where she ends up, there's this large and obvious thing that she can press to return back to the familiarity of the home screen.
I am firmly in the “key UI elements should be visible” camp. I also agree that Apple violates that rule occasionally.
However, I think they do a decent job at resisting it in general, and specifically I disagree that removing the home button constitutes hiding an UI element. I see it as a change in interaction, after which the gesture is no longer “press” but “swipe” and the UI element is not a button but edge of the screen itself. It is debatable whether it is intuitive or better in general, but I personally think it is rather similar to double-clicking an icon to launch an app, or right-clicking to invoke a context menu: neither have any visual cues, both are used all the time for some pretty key functions, but as soon as it becomes an intuition it does not add friction.
You may say Apple is way too liberal in forcing new intuitions like that, and I would agree in some cases (like address bar drag on Safari!), but would disagree in case of the home button (they went with it and they firmly stuck with it, and they kept around a model with the button for a few more years until 2025).
Regarding explaining the lack of home button: on iOS, there is an accessibility feature that puts on your screen a small draggable circle, which when pressed displays a configurable selection of shortcuts—with text labels—including the home button and a bunch of other pretty useful switches. Believe it or not, I know people who kept this circle around specifically when hardware home button was a thing, because they did not want to wear out the only thing they saw as a moving part!
>the gesture is no longer “press” but “swipe” and the UI element is not a button but edge of the screen itself.
Right, but while it's obvious to everyone that a button is a control, it's not obvious that an edge is a control. On top of that, swiping up from the bottom edge triggers two completely different actions depending on exactly when/where you lift your finger off the screen.
Why not move the physical home button to the back of the phone?
I think a button that is located behind the screen fits the definition of “hidden interface control” more so than a swipeable screen edge.
For what it’s worth, back tap is a feature of iOS to which you can assign an action, though it only triggers on double or triple tap.
I still have my iPhone with home button. That’s also a solution ;-)
I am the same, long time Android user and when I borrow my wife's iPhone it is an exercise in frustration. Interactions are hidden, not intuitive, or just plain missing.
Now that Pixel cameras outclass iPhone cameras, and even Samsung is on par, there is really no reason to ever switch to the Apple ecosystem anymore IMO.
> there is really no reason to ever switch to the Apple ecosystem anymore IMO
Not having anything to do with Google is a pretty good reason I think.
The best one, unfortunately it's a terrible user experience for a high cost.
> [iPhone] Interactions are hidden, not intuitive, or just plain missing.
And they aren't even consistent from app to app. That's perhaps the most frustrating thing.
That’s thanks to third party devs, not Apple. If you look primarily at proper native UIKit/SwiftUI apps, there’s a lot more consistency, but there’s a lot of cross platform lowest common denominator garbage out there that pays zero mind to platform conventions.
You see this under macOS, too. A lot of Electron apps for instance replace the window manager’s standard titlebar with some custom thing that doesn’t implement chunks of the standard titlebar’s functionality. It’s frustrating.
If you were a long time iphone user you’d say the same thing about android. It’s just about what you’re used to dude.
Not really. In Android there will be a back button, on iPhone you're supposed to know to swipe in some direction. On Android there will be a button to show running apps, on iPhone you will need to swipe correctly from somewhere. When 3d touch existed I think there were like 11 different ways of pressing the home button depending on context.
Android by default is also swipe swipe swipe. You need to tweak the settings to get the older and saner 3-button setup back.
As far as the Back button, on iOS the norm is for it to be present somewhere in the UI of the app in any context where there's a "back" to go to. For cross-app switching, there's an OS-supplied Back button in the status bar on top, again, showing only when it's relevant (admittedly it's very tiny and easy to miss). Having two might sound complicated but tbh I rather prefer it that way because in Android it can sometimes be confusing as to what the single global Back button will do in any given case (i.e. whether it'll navigate within the current app, or switch you back to the previous app).
Modern Android defaults to the same random swipe experience as iOS. But you can go back to the much more usable three-button setup.
Like everything, this goes in cycles. When the iPhone launched, its UI was touted as revolutionary; simple, discoverable, not the convoluted mess that a typical Windows experience was. "lol, you have to click Start to power off your computer" and the like. You had the physical home button, or the three buttons on Android. They were discoverable; you handed an old phone to your grandma and she could just try things and figure it out.
Nowadays everything has to be clean and minimalist. No scrollbar, no buttons, just gestures. Hand a modern smartphone to someone who never used one in their life and see how they struggle to ever leave the first app they open. What are the odds they discover one of the gestures?
We have a user interface design rule that keyboard shortcuts and context menus must only be "shortcuts" for commands that are discoverable via clear buttons or menus. That probably makes our apps old-fashioned.
I recall learning that the four corners of the screen are the most valuable screen real estate, because it's easy to move the mouse to those locations quickly without fine control. So it's user-hostile that for Windows 11 Microsoft moved the default "Start" menu location to the center. And I don't think they can ascribe it to being mobile-first. Maybe it's "touch-first", where mouse motion doesn't apply.
I think it's user-hostile that 'maximise' is next to 'close'. After moving my mouse so far, I need to start using fine control if I want to maximise it. I want more of the program and, if I fail, I get none of it - destructively!
Can't you double click the titlebar or use Super-Up ?
I think the centered icons on W11 were done for one reason and one reason only: ripping off MacOS (probably because it's what the design team uses themselves and it felt familiar to them). There is no sensible UX reason to do it, and even in MacOS it's a detriment to its interface.
I don't think it's a macOS ripoff, they would've also ripped off more of the dock if that was the goal. For instance, you would've been able to do things like "pin the task bar to the side".
I think they wanted the start menu to be front and center. And honestly, that just sounds like a good idea, because it is where you go to do stuff that's not on your desktop already. But clicking a button in the bottom left and having the menu open in the middle would look weird, so centering the icons would make sense.
I think there are better ways to do it and I'm sure they've been tried, but they would probably confuse existing Windows users even more.
Corners and edges are rarely used that way. They should be. See "Fitts Law".[1]
My metaverse client normally presents a clean 3D view of the world. If you bring the cursor to the top or bottom of the screen, the menu bar and controls appear. They stay visible as long as the cursor is over some control, then, after a few seconds, they disappear.
This seems to be natural to users. I deliberately don't explain it, but everybody finds the controls, because they'll move the mouse and hit an edge.
[1] https://en.wikipedia.org/wiki/Fitts%27s_law
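For what it's worth, the behaviour described above is simple to express; a rough sketch in tkinter (with the window edge standing in for the screen edge, and timings picked arbitrarily):

    import tkinter as tk

    root = tk.Tk()
    root.geometry("800x600")

    toolbar = tk.Frame(root, bg="lightgray", height=40)
    tk.Button(toolbar, text="Menu").pack(side=tk.LEFT, padx=5, pady=5)

    hide_job = None

    def show_toolbar(_event=None):
        global hide_job
        if hide_job is not None:
            root.after_cancel(hide_job)
            hide_job = None
        toolbar.place(x=0, y=0, relwidth=1)

    def schedule_hide(_event=None):
        global hide_job
        hide_job = root.after(3000, toolbar.place_forget)  # hide a few seconds after leaving

    def on_motion(event):
        if event.y <= 2:  # pointer hit the top edge
            show_toolbar()

    root.bind("<Motion>", on_motion)
    toolbar.bind("<Enter>", show_toolbar)   # stays visible while hovering the controls
    toolbar.bind("<Leave>", schedule_hide)

    root.mainloop()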
Not sure I agree with all of the OP's opinions. I prefer a clean, calm, uncluttered user interface over a noisy, busy, cluttered one. In the OP's example with maps, I'd rather see a full-screen map, instead of a map that is always partially covered by a bunch of big buttons, obfuscating my view. Please let me see the map. Yes, fill the entire screen with it.
Gradually, over decades, society has evolved a "shared language of touch-screen actions" for controlling touch-screen devices. Many actions are familiar to everyone here: tap to hide/show controls, press and hold to bring contextual menus, pinch with two fingers to zoom out, etc.
It's OK for UI designers to assume familiarity with this common language to keep UIs clean, calm, and uncluttered. I like it.
Your "shared language of touch screen interactions" is and will forever be unrealized as endless 'innovations' 'novelties' of creative developers and companies remain unfettered by any requirements for compliance voluntart or otherwise to UI 'standards'. Software developers are focused on myriad depths and constraints of toolkits and frameworks languages and libraries to get network, cloud, and actual functionality right, and immersed in those worlds, burden users with their own vast congitive prejudice that their application is the only one in the world, figuring 'users' have unlimited time to decipher undocumented UIs effectively gamified and unique across hundreds of spplications by not only gestures but by required and precise cadences to correctly effect those gestures, cadences which are overloaded and confounded by network and device delays and zero haptic or audio or visual feedback on what may have been 'commanded' or what is yet to be acomplished and displayed onscreen.
I might be tired, and this isn’t meant as anything other than constructive criticism, but good grief I think you need to use full stops a little more. I had to re-read that 3-4 times to make out what you meant.
to those trying to draw analogies to driving cars -
Go rent a hundred of them. Next try and drive them in 30 different countries.
Only tangentially related, and a seemingly lost old-man battle: stop hiding my scrollbar.
Interesting article. Some points I didn't quite agree with entirely. There's a cost and a practical limitation to some things (like a physical knob in a car for zooming in and out on a map - although that was probably just an example of intuitive use).
I just recently switched a toggle on a newly installed app that did the opposite of what it was labelled - I thought the label represented the current state, but it represented the state it would switch to if toggled. It became obvious once changed, but that seems the least helpful execution.
I hate toggle switches IRL too. They are just as ambiguous there. Checkboxes and pushed-in buttons are far clearer, but have unfortunately been sacrificed at the altar of "modernity".
Toggle switches in real life can and often are labeled. A toggle pointing to OFF means it’s off. Moving it to on turns it on.
And a toggle colored/glowing red & green, for off & on, is clear.
Boggles my mind how bad many interfaces manage to be.
Unless you're red/green colour-blind, of course ;)
Perhaps chartreuse and teal?
More seriously, my understanding is that the octopus retina does not have color receptors, just aggregate light, i.e. brightness.
But the octopus practically has a sub-brain behind each respective eye, and the eye brains can extract color from the slight lensing differences across frequencies.
They are amazing magical creatures.
Taking that approach, and some sort of ocular lathe, and we can fix this.
> Taking that approach, and some sort of ocular lathe, and we can fix this.
Well, you can also just give people different coloured lenses for their two eyes. Eg one that filters out red and one that filters out green.
We can always use more chartreuse.
The name sounds nice, but it's just a muddy green.
There was such a confusing toggle at the ticket machines for the train here in Austria many years ago. It was for immediately validating your ticket, which is a potentially costly mistake.
About the scroll bars: also stop making them so thin that I have to have FPS skills to hit them! Looking at you, Firefox! (And possibly whatever standard CSS allows?) Yeah, I can scroll, but for horizontal scrolling the scrollbar would be more convenient than pressing shift with my other hand.
Firefox nonobviousity: type about:config in your address bar, then search for widget.non-native-theme.scrollbar.size.override and edit it to whatever number you want. You can also edit widget.non-native-theme.scrollbar.style to change the shape of it; set it to 4 for a nice chonky rectangle. Finally, turn on "Always show scrollbars" in the normal settings window (about:settings) if you want them always on.
>"Always show scrollbars” This is missing from my version (140.0.2)
New location is: about:config
layout.testing.overlay-scrollbars.always-visible
I’ve never known until this moment that shift makes you scroll horizontally, because I’ve always either used a mouse with horizontal scrolling built into the scroll wheel, or a touchpad.
And Ctrl is used for the third dimension.
It's been a standard Windows feature for quite some time! I don't think people need to scroll horizontally as much now that most screens are widescreen, but this feature goes back to the dialup era and very few people seem to know about it.
> I thought the label represented the current state, but it represented the state it would switch to if toggled. It became obvious once changed, but that seems the least helpful execution.
Such ambiguous switches are often associated with "opt out" misfeatures.
Right! If you want it to denote an action, you need to include the verb: "TURN ON" would be entirely clear. It's even clear if you sometimes DO want to show state and not a button: "IS ON" is also perfectly clear. There are only a few that might be confused when the verb is shown, like "INCREASE", although I would have to work a little to imagine the UI where it's not clear whether the button is showing the verb or the noun.
You can get the same issue with icons too. The one that gives me anxiety is the microphone with a line through it on a button. I _am_ muted, or I should click to _mute_? If my kids are arguing in the background and it's an important call, it can feel like a high-stakes thing to get wrong, and often it only becomes clear what state I'm in by toggling a few times. Does the icon change to a mic without a line when I click, or does the previously shown mic with a line now get coloured in, and what does _that_ mean?
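One way out is to make the label carry both the current state and the action, instead of relying on the icon alone. A tiny tkinter sketch of the idea (the wording and colours are just an example):

    import tkinter as tk

    root = tk.Tk()
    muted = False

    def render():
        # The label states what is true now AND what a click will do.
        btn.config(text="Muted - click to unmute" if muted else "Live - click to mute",
                   bg="tomato" if muted else "pale green")

    def toggle():
        global muted
        muted = not muted
        render()

    btn = tk.Button(root, command=toggle)
    render()
    btn.pack(padx=20, pady=20)
    root.mainloop()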
And even WORSE are the services that use different variations on those depending on the platform you're using! Yes, I am looking directly at you, Amazon Chime.
And thankfully you won’t be looking at it much longer
One of my big beefs with modern UI is two-state controls where it's impossible to determine what the current state actually is. Like a button that says "Music Off" where it's unclear if that means the music is CURRENTLY off, or if clicking the button turns it off.
Yep the best example. Especially if the result is not immediately obvious. Am I commanding "system on" or are you telling me "system on"
Similarly, I can’t tell which state the control is in until I touch it.
> stop hiding my scrollbar
https://superuser.com/a/1720363
Use Firefox?
Stop making my scrollbar so impossible thin, Firefox!
Gnome enjoys the impossibly thin scrollbars too even when you do manage to find them.
GTK+ used to be a decent widget toolkit.
https://askubuntu.com/questions/1407179/widening-firefox-scr...
You might be able to set a different scrollbar style in about:config.
In macOS you can have the scroll bar always on, globally (using System Settings) or per-app (using Terminal command)
But can I make them wider? I don't have the precision to hit something that narrow.
(Most of the time I use the scroll gesture on the trackpad to get round this)
They’re still too thin, and they look awful to boot.
And please support PgUp and PgDn while you're at it.
Yes, and please retain the behavior of clicking in the non-thumb areas to page up and down (with a modifier key to jump instead of paging).
on MacOS you can use Fn+Up and Fn+Down
I recently used the washroom at a Starbucks. The one where you have to enter a code to get in. Once I was inside, there were no knobs or any mechanical way to lock the door - just one circular button with a lock icon on it. I pressed it, and the button lit up as green. Pressed it again, it lit up as red. No indication on what light colour meant what. Does red mean it's unlocked? Or does it mean it is locked, since red usually indicates no entry.
It made for the quickest pee break ever.
Reading through the responses to your comment, I came to the conclusion that the topic is on point. There are many complaints about people missing things (please add ...), and people responding with a solution because it's already there - just hidden.
Hidden options tend to degrade. Defaults are important.
I can't recall the app, but it was a similar toggle with a label: when you flipped the toggle, the label lit up green, indicating it was turned on. But the default state was off, so how would you know?
The green / red is at least a half-decent indicator (questionable for the colour-blind folks though), but the current trend of very slightly different shades of grey is the pinnacle of utterly fucking stupid design; perfect for a non-interactive set piece in a gallery, just dumb for use by human beings.
I have to believe in the case of cookie banners it is intentional.
Yeah, a hidden scroll bar makes a UI unusable if you prefer a touch screen, as I do.
I absolutely despise switches. I'm also constantly asking myself if the label represents the current state or the state it would switch to.
And then the second factor: has this change been applied immediately or do I need to scroll around to find a SAVE CHANGES button?
In the 90's I had this vision that the menu and the scrollbar should be physically separated from the screen.
If you have (next to your monitor, on the left side) a narrow physical display with menu entries in it, you get 4 things for "free": the user will expect there to be menu entries, the developer will understand the expectation to have menu entries, there is limited room to go nuts with the layout or shape of the menu, and, last but most funny, you won't feel that part of the screen has been taken away from you.
The physical scrollbar should be a transparent tube with a ball (or ideally a bubble) floating in it.
Usage could be moving the pointer out of the screen. The scrollbar led goes on and you can hold the button to move the page. When using the menu the pointer [also] vanishes and the menu entry at that height is highlighted. (much better usability) Moving the mouse up or down highlights the above or below entries, if there are a lot of entries it may also scroll. It may be a touch screen but the most usuable would be a vertical row of 5 extra wide (3 fingers) keyboard buttons on the left with the top 4 corresponding to the 1st, 2nd, 3rd, 4th menu entry and the 5th one for page down. (scrolling down 4 entries) Ideally these get some kind of texturing so that one can feel which button one is touching.
This way knowledge in the world can smoothly migrate to knowledge in the head until eventually you can smash out combinations of M keys in fractions of a second without looking at the screen or the keyboard. The menu displayed is always in focus, you don't have to examine the view port to use it. Having a row of horizontal F keys is a design fiasco. Instinctively bashing the full row of those might come natural after learning to type, then learning to type numbers, then symbols and only if you frequently use applications that have useful F key functionality. I only really know F5 and F11 but I cant smash them blindly as I pretty much never use them. I just tried F1 in firefox and no help documentation showed up... I think that was what it was suppose to do? Not even sure anymore.
Having the menu bar (File, Edit, etc.) at the top of the viewport is also ugly. For example, smashing the second then the top M key could easily become second nature. CTRL+Z is fine of course, but it ain't knowledge in the world. Does anyone actually use ALT+E+U for undo? Try it on the CTRL+F input area. It's just funny. Type something in the address bar, then compare ALT+E+U with using the Edit menu.
A separate display would take many of these "design" privileges away from the clowns.
(note: I think it is ALT+E+U, as the Dutch layout is forced on me by Windows. Edit is called Bewerken and the shortcut is ALT+W!?! ALT+E does nothing.)
No one wants or needs an entire separate device just to handle scrollbars that work absolutely fine currently…
Speak for yourself. I'd love this as a monitor attachment!
It was just a vision from long ago. But okay, for sake of argument. It doesn't need to be ultra hd in a billion colors, it can go on the bezel and be screen height so that you don't have to aim to hit it. No need for it to glow intensely, perhaps not at all, perhaps simple single color LCD would do the trick.
I don't agree that scrollbars work fine; they used to work fine, now they are too tiny to click on.
There also was/is the issue where the viewport width changes when page content grows beyond the screen height (a scrollbar appears), and then word wrap makes the content shift down. Is the solution a scrollbar so tiny it is hard to use, or should one always display a scrollbar? The one outside the screen is always there :)
I like things that do only one thing, do it well and in a simple way.
You could also go the other direction and put everything on the screen. Huawei just made a horrifying laptop where the keyboard is also a screen.
> In the 90's I had this vision that the menu and the scrollbar should be physically separated from the screen.
Buttons alongside, above, or below screens appear now and then. Some early terminals had them. Now that seems to be confined to aircraft cockpits and gasoline dispensers.
Other applications ...
Some ATMs have unmarked physical buttons next to the screen, and the text displayed on the screen next to those buttons defines what each key does.
TV remotes have A/B/C/D (red/blue/green/yellow) physical buttons whose function is dynamically defined by your context or which setting / function / menu you are currently inside.
I guess this goes back to video game controllers that have A/B X/Y buttons that can have different functions in different contexts.
BTW. The technical term for such a button is a "soft key", vs a "hard key" that has a single function.
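For the curious, here is a minimal sketch of that soft-key pattern in TypeScript; the types, screens and labels are invented for illustration, not taken from any real ATM or remote-control firmware:

    // A "soft key": a fixed physical button whose label and action are
    // supplied by whatever screen is currently active.
    type SoftKeyBinding = { label: string; action: () => void };

    interface Screen {
      title: string;
      // One binding per physical key position (e.g. the keys beside an ATM screen).
      softKeys: (SoftKeyBinding | null)[];
    }

    class SoftKeyPanel {
      private current: Screen | null = null;

      show(screen: Screen): void {
        this.current = screen;
        // On-screen labels are redrawn next to the unchanging physical keys.
        screen.softKeys.forEach((b, i) =>
          console.log(`key ${i + 1}: ${b ? b.label : "(unused)"}`)
        );
      }

      press(keyIndex: number): void {
        const binding = this.current?.softKeys[keyIndex];
        if (binding) binding.action(); // same key, different meaning per context
      }
    }

    // Usage: physical key 0 means "Withdraw" on this screen, something else on another.
    const panel = new SoftKeyPanel();
    panel.show({
      title: "Main menu",
      softKeys: [
        { label: "Withdraw", action: () => console.log("withdraw flow") },
        { label: "Balance", action: () => console.log("balance flow") },
        null,
        { label: "Exit", action: () => console.log("bye") },
      ],
    });
    panel.press(0);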
> The physical scrollbar should be a transparent tube with a ball (or ideally a bubble) floating in it.
Oh, god, the Touch Bar was already a frustrating enough piece of UI, don't give Apple more ideas.
I would have been fine with the Touch Bar, if they hadn't sacrificed the function/escape keys to put it there.
It enabled a neat set of affordances, but not worth losing core functionality over.
Amen. Good riddance to Jony Ive and his embarrassing emoji bar.
Touch screens can't match physical buttons. This one is extra funny by taking keys away and giving you unknown things in return. Finally one can once again look down at the keys wondering which is which, after moving your hands away.
If I was on the design team they would have fired me for screaming at everyone. Screaming is good UI tho.
> If I was on the design team they would have fired me for screaming at everyone.
Oh man. I really do start screaming sometimes.
At user interfaces, too often. At unbelievably bad product choices of all kinds.
The simpler & dumber the issue the louder I get.
Someone creates a quality flat tine garden rake with about 40 metal tines, and charges accordingly. The person who manages stickers, because everything needs stickers, creates huge stickers they glue across all the tines. You try to peel it off and now you have over two dozen tines with long streaks of shredded paper glued hard to them.
Screaming is an appropriate place to put the high spin WTF-a-tons that might otherwise feed the universe’s dark energy.
And that, dear reader, is my theory of dark energy.
...And how should this work for the case where you have more than one window on the screen?
This is easily one of the most frustrating parts of the user experience on Discord. So many buttons are hidden until you mouse over them, which absolutely drives me UP A WALL. I really hope this trend discontinues.
Agree utterly. It's a real shame, and severely affects accessibility for disabled and elderly people. Not only UI discoverability but also the types of swiping or holding movements required on mobile devices. The initial mobile interfaces felt way more accessible, so I don't think it's an inherent consequence of limited screen real estate. This has been a trend-driven flattening of UI, with aesthetics over functionality. The Palm and Compaq PDAs felt sublime to use, and the iPod and early mp3 players were fine, as was the originally charming iPhone skeuomorphic iconography. It's all been downhill since then.
I don’t know that I agree. Take reading HN comments on my phone. There’s dozens of UI controls that are hidden behind a few buttons at the top or bottom of the screen. Getting that stuff out of the way makes the page itself take up almost all of my phone screen - and that makes the webpage much more beautiful and enjoyable. My phone screen is only so large. The palm pilot era equivalent browser would fill half the screen with buttons and controls and scroll bars, leaving much less room for the website content.
In my opinion, hidden controls aren’t bad per se. But they are something you have to learn to use. That makes them generally worse for beginners and (hopefully) better for experts. It’s a trade off and sometimes getting users to learn your UI is the right decision. I’m glad my code editor puts so much power at my fingertips. I’m glad git is so powerful. I don’t want a simplified version of git if it means giving up some of its power.
That said, I think we have gone way too far toward custom per-app controls. If you’re going to force users to learn your UI conventions, those learnings should apply to other applications on the same platform. Old platforms like the palm were amazing for this - custom controls were incredibly rare. When you learned to use a palm pilot, you could use all the apps on it.
The interaction that messes me up all the time is the side button and payment related stuff
One press turns the display on/off; two taps enable Apple Pay.
Quite often my timing is not perfect or one press isn’t hard enough so I shut off the display
Then, paying with Apple Pay is a double press, but paying for transit is no press. But often I'm absent-minded, and as I'm walking through the transit gate my brain thinks "must pay", "pay = double press", so I subconsciously double press and the gate screams, since it's not in transit mode now, it's in Apple Pay mode.
There is a sweet spot in between those two extremes if you stop trying to build one compromise UI for both touch screens and desktops. (Sadly we still do not have the reality demoed by Apple and Google 10 years ago, where touch screens have hover detection; maybe XR gaze detection will bring this.) You can pack extreme amounts of features without cluttering the interface or sacrificing discoverability by only showing most features on hovering over certain areas. This is risk- and effort-free from the user's perspective, as users are much more explorative when no clicking is involved and it is clear an interaction will not trigger features by accident during the discovery process.
I'm especially passionate about this because having ADHD makes one sensitive to irrelevant stimuli in the periphery, but being a power user of most software, the dumbification of software happening since mobile apps drives me insane. I want software where a feature used by the top 5 to 10% of power users once a month is not ripped out if that once-a-month use provides high value for that group.
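As a rough illustration of that hover-reveal idea (browser DOM; the element ids are made up), something like this keeps the controls out of sight until the pointer is over the relevant area, while still requiring an explicit click before anything actually happens:

    // Reveal a toolbar only while the pointer hovers its region.
    // Hovering only *shows* the controls; nothing runs until a click,
    // so exploring the UI is risk-free.
    function attachHoverReveal(regionId: string, toolbarId: string): void {
      const region = document.getElementById(regionId);
      const toolbar = document.getElementById(toolbarId);
      if (!region || !toolbar) return;

      toolbar.style.visibility = "hidden";            // hidden by default
      region.addEventListener("pointerenter", () => {
        toolbar.style.visibility = "visible";         // discoverable on hover
      });
      region.addEventListener("pointerleave", () => {
        toolbar.style.visibility = "hidden";
      });
      // Actions still require an explicit click on a button inside the toolbar.
    }

    attachHoverReveal("editor-block", "editor-toolbar");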
On the risk of sounding like a grandpa, but there used to be a pretty effective "division of labor" for this in UIs:
(1) The "fast" path: Provide toolbars, keyboard shortcuts and context menus for quick access to the most important features. This path is for users who already have the "knowledge in the head" and just want to get there quickly, so speed takes priority over discoverability.
(2) The "main" path: Provide an exhaustive list of all features in the "title bar"/"top of the screen" menus and the settings dialogues. This path is mainly for users who don't have the "knowledge in the head" and need a consistent, predictable way to discover the application's features. But it's also a general-purpose way to provide "knowledge in the world" for anyone who needs it, which may also include power users. Therefore, for this path, discoverability and consistency is more important than speed.
Crucially, the "main" features are a superset of the "quick" features. This means, every "quick-access" feature actually has at least two different ways to activate it, either through 1 or through 2.
This sounds redundant, but makes perfect sense if it allows people to first use the feature through 2 and then later switch to 1 when they are more confident.
My impression is that increasingly, UIs drop 2 and only provide 1, changing the "fast" into the "main" path. Then suddenly "discoverability" becomes a factor of its own that needs to be implemented separately for each feature - and in the eyes of designers seems to become an unliked todo-list bullet point like "accessibility".
Usually then, it's implemented as an afterthought: either through random one-time "new feature" popups (if it popped up at an inappropriate time and you just closed it to continue with what you wanted to do, or if you want to reopen it later - well, sucks to be you) - or through "everything" menus that just contain a dump of all features in an unordered list, but are themselves hidden behind some obscure shortcut or invisible button.
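A minimal sketch of that "fast path is a subset of the main path" idea, with invented names: every command is registered once and always appears in the menus, and only some additionally get a shortcut or toolbar slot:

    interface Command {
      id: string;
      menuPath: string[];        // always present, e.g. ["Edit", "Undo"]
      run: () => void;
      shortcut?: string;         // optional fast path
      toolbar?: boolean;         // optional fast path
    }

    class CommandRegistry {
      private commands = new Map<string, Command>();

      register(cmd: Command): void {
        this.commands.set(cmd.id, cmd);
      }

      // The menu lists *every* command, so the fast-path set is a strict subset.
      menuEntries(): string[] {
        return [...this.commands.values()].map(
          c => c.menuPath.join(" > ") + (c.shortcut ? `\t${c.shortcut}` : "")
        );
      }

      toolbarEntries(): Command[] {
        return [...this.commands.values()].filter(c => c.toolbar);
      }

      runShortcut(keys: string): void {
        for (const c of this.commands.values()) {
          if (c.shortcut === keys) return c.run();
        }
      }
    }

    const registry = new CommandRegistry();
    registry.register({ id: "undo", menuPath: ["Edit", "Undo"], shortcut: "Ctrl+Z",
                        toolbar: true, run: () => console.log("undo") });
    registry.register({ id: "export", menuPath: ["File", "Export…"],
                        run: () => console.log("export") }); // menu-only feature
    console.log(registry.menuEntries()); // both commands are discoverable here
    registry.runShortcut("Ctrl+Z");      // fast path for the one you have learned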
> if you stop trying to build a compromise of a UI for both, touch screens and desktops
Agree, many of the problems have to do with this, yet it's barely mentioned by armchair designers. Temporarily hidden and narrow scrollbars? Makes perfect sense for scrolling on a touch screen (since you don't touch them directly), but very annoying on desktop.
Back in the pre-touch days we’d have a lot of hover menus. But with a phone today? Nobody likes the hamburger/three dots, but there isn’t a better alternative without losing context. And nobody uses hover anymore for functional purposes.
But, I also don’t think building entirely separate apps and especially web sites for different form factors is desirable. We probably should be better at responsive design, and develop better tooling and guidelines.
My car’s audio system seems to go out of its way to bury sound settings (bass, treble, balance, etc.) in as many nested menus as possible. And when you do finally find the settings, they are greyed out. I had to actually watch a youtube video to figure out that they are configured at the individual source level. Super confusing and unintuitive, and especially egregious considering that this is in a vehicle you are DRIVING - confusion, distraction, and frustration are the last things you want drivers to experience.
I would argue though you shouldn’t be messing with treble and bass settings while you are driving.
Respectfully disagree. My point is that it should be easy and intuitive to do things like this while driving, just like anything else such as adjusting HVAC controls, operating turn signals, shifting gears, etc. Most major controls and operations should be tactile and easily understandable even if you have never driven that particular car before. I believe that drivers feel more distracted by modern vehicles’ UI/UX than ever before, and I rented a BMW last year that perfectly exemplifies this. It was a nightmare of unintuitive screens and menus just to do basic things - actively driving or not. It really turned me off to BMWs.
I imagine that makes having the settings be specific to each source even worse. How else are you going to adjust them for navigation instructions?
My car has something like that, but thankfully I have only needed to adjust volume, which can be done from the steering wheel…
but I'm not driving - my wife is. Thus I should be able to mess with those settings
I used to drive a Camry where on the factory radio, bass and treble had individual knobs and you could adjust them without taking your eyes off the road. Oh, those were the days.
Some had a whole equalizer: https://hackaday.com/wp-content/uploads/2021/06/btcaraudio-v...
I fully agree with you on this. If the car is moving you shouldn't really do anything more than previous/next/volume. And of those they should be on the steering wheel.
You want to mess with your equalizer, do it when stopped. IDGAF if it's dozens of physical buttons and knobs and sliders or hidden in menus; you're supposed to be driving not mastering an audio file.
Apple Photos has fallen into this trap as well.
As a user, you have no way to see whether a photo has been "scanned" with smart features and what was detected (e.g. found person X, found dog, blue sky, beach, etc.).
The Trips feature: has its algorithm finished scanning your library? You have no idea, it's just hidden.
Face detection: has it completely scanned your library? You don't know. Photos that don't seem to have faces detected: were they scanned, did detection fail, or have they not been scanned yet?
The list is nearly endless - but in line with the rest of the direction of MacOS, getting worse.
This is not a new development. The lack of feedback and status has been there since the iPhoto days.
I'm sorry, this website doesn't have a mobile interface; are you seriously complaining about accessibility when you don't support the majority of the web?
I don't think the author of the article is in control of the stylesheet of ACM's interactions domain.
Websites made for desktop work just fine on mobile. What is your issue? Having to scroll to see all the text?
Plain text ones do. Not the OP website which forces a fixed width multi column layout preventing the text from wrapping correctly.
My favorite is how Android shuffles buttons around just as I'm about to tap, so I wind up tapping the wrong one.
I'm convinced advertisers will find a way to leverage that behavior in some new dark UI pattern.
I think the new Apple design tries to do this too much and it will cause some issues. They're trying to make many things modal, split and merge on scroll, show and hide contextually. The intentions might be good, an intelligent interface that adapts sounds good in theory, but who knows really what the users want to do?
The car key example especially resonated. That kind of design isn't just annoying, it's stressful in real-world situations.
While I appreciate the ACM having an article on this, their own site is a poor example of good UX.
And some of their conferences are just downright awful UI
https://s2025.siggraph.org/
I remember Nokia E-series phones with QWERTY keyboards had a little torch printed on the tiny spacebar. Everything else now feels unintuitive compared to that.
Just a minor quibble. Terminal-based UIs weren't completely memorized. Many of us had a reference card taped to the wall, or a list of commonly used commands. It was an acceptable way to extend the limited information density of the 80x25 text display, and a really good manual was as discoverable as a GUI.
Not too convenient to carry along with a pocket computer, though.
Something which drives me mad is how modern operating systems (both desktop and mobile) keep hiding file system paths. There used to be a setting on OSX which let you show the address bar in Finder (though it wasn't default) but nowadays it seems to be impossible (unless you get some third-party extension) and I have to resort to using the terminal. It's bonkers.
It makes it impossible to locate files later when I need to move or transfer them.
I have this issue when links are shared directly to a file on SharePoint.
It's often more useful to share the directory it's in rather than the file itself. MS Office does have a way to get that information, but you have to look for it.
It's still there: Finder → View menu → Show Path Bar.
Unfortunately it's not exposed in the UI, but:
Fig.1 doesn't look like a drop-down menu - is the term really used for that style?
I have had to explain it as such while teaching kids to use Zoom over the pandemic, and yeah one of the first things I got was "it's a drop up menu!"
The term “popover” has been gaining popularity in the last decade or so, as a superset of dropdowns. HTML adopting the term a couple of years ago has helped with this.
The only thing that seems wrong about it to me is that it's above the point where the user clicked rather than underneath; and that's only because that point is near the bottom of the screen.
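That flip-when-near-the-bottom behaviour is just a placement rule; a toy version (no particular UI framework assumed) might look like this:

    // A popover opens below its anchor by default, and flips above only when
    // there isn't enough room beneath it. Plain geometry, illustrative only.
    interface Rect { top: number; height: number }

    function popoverTop(anchor: Rect, popoverHeight: number, viewportHeight: number): number {
      const below = anchor.top + anchor.height;
      if (below + popoverHeight <= viewportHeight) {
        return below;                    // enough space: drop *down*
      }
      return anchor.top - popoverHeight; // near the bottom edge: flip *up*
    }

    // Anchor near the bottom of an 800px-tall viewport -> popover appears above it.
    console.log(popoverTop({ top: 760, height: 24 }, 200, 800)); // 560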
But the pervasive "form over function" design school disagrees with your desire for the UI to be useful, it has to look clean!
Ironically the article is barely readable on an iPhone…
https://askubuntu.com/questions/1552055/general-rant-about-t...
The article suggests a “simple, well-labeled rotary control ... would accomplish the same function” as a power button and “prevent the user from accidentally activating the control in a way that is no longer hidden”. But a rotary control itself has a serious problem, in that it can mislead the user as to the state, on or off. If the power has failed and the machine does not restart when it comes back, the rotary control will remain in the ON state when the machine is off. From memory, Donald Norman called this kind of thing “false affordance” and gave the example of a door that needed to be pulled having a push-plate on it.
So my iMac, among many other devices like the light I wear on my head camping, has a button which you long-press to turn on. It is a very common pattern which most people will have come across, and it’s reasonable to expect people to learn it. The buttons are even labelled with an ISO standard symbol which you are expected to know.
> If the power has failed and the machine does not restart when it comes back, the rotary control will remain in the ON state when the machine is off.
A better example may be a solenoid button, used on industrial machinery which should remain off after a power failure, which stays held in when pushed, but pops out when the power is cut. They are not common outside of such machinery, because they're extremely expensive. In the first half of the 20th century, they also saw some use in elevators: https://news.ycombinator.com/item?id=37385826
I have never looked at a fan that isn't running and been confused by the switch being set to “on”. The affordance is that it immediately tells me that the switch is on, so the problem is somewhere else. Compared to the typical phone's “hold for 3 seconds to turn on, hold for 10 seconds to enter some debug mode”, this is a breath of fresh air when anything unusual is going on with the device.
I live in a country where the socket on the wall the fan is plugged into also has a switch, which could be on or off. So to make the fan go around, both switches must be on; the user needs to know about and have a mental model of serial circuits.
If it’s just a button the user just has to know two things: turn the switch on at the wall socket when plugging in, which becomes habit since childhood; and press and hold the button on the fan to make it go, which I suspect most children in 2025 can manage. These two things don’t interact and can be known and learned separately.
As you said, the knob’s position tells you about the switch. But it’s the fan the user is interested in, not the switch.
(BTW, if the fan has a motion sensor you can’t tell it’s off by the fact the blades aren’t turning. There’s probably a telltale LED.)
> BTW, if the fan has a motion sensor you can’t tell it’s off by the fact the blades aren’t turning. There’s probably a telltale LED
Or, you know, a switch that is in an off position :p
This article reminds me of one of my favourite comments on the subject I've seen here: https://news.ycombinator.com/item?id=24965293
Notion is horrendous for this. Hiding every control behind an invisible hover target. No, I don't want my company documentation to have a minimalist aesthetic. I just want to use it.
The article mentions the late Mark Weiser's work on Ubicomp at Xerox PARC. Before he went to run PARC, we worked together at the University of Maryland, where he supported and collaborated with my work on pie menus.
Mark Weiser, Ben Shneiderman, Jack Callahan, and I published a paper at ACM CHI'88 about pie menus, which seamlessly support both relaxed "self revealing" browsing for novices, and accelerated gestural "mouse ahead" for experts: smoothly, seamlessly, and unconsciously training users to advance from novice to expert via "rehearsal".
Pie menus are much better than gesture recognition for several synergistic reasons: Most importantly, they are self revealing. Also, they support visual feedback, browsing, error recovery, and reselect. And all possible gestures have a valid and easily predictable and understandable meaning, while most gestures are syntax errors.
Plus the distance can also be used as an additional parameter, like a "pull-out" font selection pie menu (direction picks the font, distance picks the size), with live interactive feedback both in the menu center and in the text document itself, which is great during "mouse ahead" before the menu has even been shown.
The exact same gesture that novices learn to do by being prompted by the pop-up pie is the exact same action experts use more quickly to "mouse ahead" through even nested menus without looking at the screen or needing to pop up the pie menu. (By the principle of "Lead, follow, or get out of the way!")
Linear menus with keyboard accelerators do not have this "rehearsal" property, because pressing multiple keys down at once is a totally different (and more difficult to remember and perform) action than pointing and clicking at tiny little menu labels on the screen, each one further from the cursor and more difficult to hit than the next.
Our controlled experiment compared pie menus to linear menus, and proved that pie menus were 15% faster, and had a significantly lower error rate.
Fitts' Law unsurprisingly predicted that result: it essentially says the bigger and closer a target is to the cursor, the faster and more reliably you can hit it. Pie menus optimize both the distance (all items directly adjacent, in different directions) and the area (all items are huge wedge-shaped target areas that get wider as you move away from the center, so you get more precise "leverage" as you move further, trading off distance for angular precision).
https://en.wikipedia.org/wiki/Fitts%27s_law
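For reference, the usual Shannon formulation is MT = a + b * log2(D/W + 1), where D is the distance to the target and W its width along the approach direction. A toy calculation (the constants a and b below are placeholders, not measured values) shows why a big nearby wedge beats a small distant menu label:

    function indexOfDifficulty(distance: number, width: number): number {
      return Math.log2(distance / width + 1);
    }

    function movementTimeMs(distance: number, width: number, a = 100, b = 150): number {
      return a + b * indexOfDifficulty(distance, width);
    }

    // A pie slice is close (small D) and wide (large effective W), so its ID is low;
    // a small, distant linear menu item has a much higher ID.
    console.log(movementTimeMs(30, 60).toFixed(0));   // big nearby wedge
    console.log(movementTimeMs(300, 15).toFixed(0));  // small far-away label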
An Empirical Comparison of Pie vs. Linear Menus, Proceedings of CHI'88:
https://donhopkins.medium.com/an-empirical-comparison-of-pie...
Pie Menus: A 30 Year Retrospective (37 years now):
https://donhopkins.medium.com/pie-menus-936fed383ff1
The Design and Implementation of Pie Menus: They’re Fast, Easy, and Self-Revealing. Originally published in Dr. Dobb’s Journal, Dec. 1991, cover story, user interface issue:
https://donhopkins.medium.com/the-design-and-implementation-...
>[...] Pie Menu Advantages
>Pie menus are faster and more reliable than linear menus, because pointing at a slice requires very little cursor motion, and the large area and wedge shape make them easy targets.
>For the novice, pie menus are easy because they are a self-revealing gestural interface: They show what you can do and direct you how to do it. By clicking and popping up a pie menu, looking at the labels, moving the cursor in the desired direction, then clicking to make a selection, you learn the menu and practice the gesture to “mark ahead” (“mouse ahead” in the case of a mouse, “wave ahead” in the case of a dataglove). With a little practice, it becomes quite easy to mark ahead even through nested pie menus.
>For the expert, they’re efficient because — without even looking — you can move in any direction, and mark ahead so fast that the menu doesn’t even pop up. Only when used more slowly like a traditional menu, does a pie menu pop up on the screen, to reveal the available selections.
>Most importantly, novices soon become experts, because every time you select from a pie menu, you practice the motion to mark ahead, so you naturally learn to do it by feel! As Jaron Lanier of VPL Research has remarked, “The mind may forget, but the body remembers.” Pie menus take advantage of the body’s ability to remember muscle motion and direction, even when the mind has forgotten the corresponding symbolic labels.
>By moving further from the pie menu center, a more accurate selection is assured. This feature facilitates mark ahead. Our experience has been that the expert pie menu user can easily mark ahead on an eight-item menu. Linear menus don’t have this property, so it is difficult to mark ahead more than two items.
>This property is especially important in mobile computing applications and other situations where the input data stream is noisy because of factors such as hand jitter, pen skipping, mouse slipping, or vehicular motion (not to mention tectonic activity).
>There are particular applications, such as entering compass directions, time, angular degrees, and spatially related commands, which work particularly well with pie menus. However, as we’ll see further on, pies win over linear menus even for ordinary tasks.
Gesture Space:
https://donhopkins.medium.com/gesture-space-842e3cdc7102
>[...] Excerpt About Gesture Space
>I think it’s important to trigger pie menus on a mouse click (and control them by the instantaneous direction between clicks, but NOT the path taken, in order to allow re-selection and browsing), and to center them on the exact position of the mouse click. The user should have a crisp consistent mental model of how pie menus work (which is NOT the case for gesture recognition). Pie menus should completely cover all possible “gesture space” with well defined behavior (by basing the selection on the angle between clicks, and not the path taken). In contrast, gesture recognition does NOT cover all gesture space (because most gestures are syntax errors, and gestures should be far apart and distinct in gesture space to prevent errors), and they do not allow in-flight re-selection, and they are not “self revealing” like pie menus.
>Pie menus are more predictable, reliable, forgiving, simpler and easier to learn than gesture recognition, because it’s impossible to make a syntax error, always possible to recover from a mistaken direction before releasing the button, they “self reveal” their directions by popping up a window with labels, and they “train” you to mouse ahead by “rehearsal”.
>[...] Swiping gestures are essentially like invisible pie menus, but actual pie menus have the advantage of being “Self Revealing” [5] because they have a way to prompt and show you what the possible gestures are, and give you feedback as you make the selection.
>They also provide the ability of “Reselection” [6], which means you as you’re making a gesture, you can change it in-flight, and browse around to any of the items, in case you need to correct a mistake or change your mind, or just want to preview the effect or see the description of each item as you browse around the menu.
>Compared to typical gesture recognition systems, like Palm’s graffiti for example, you can think of the gesture space of all possible gestures between touching the screen, moving around through any possible path, then releasing: most gestures are invalid syntax errors, and they only recognizes well formed gestures.
>There is no way to correct or abort a gesture once you start making it (other than scribbling, but that might be recognized as another undesired gesture!). Ideally each gesture should be as far away as possible from all other gestures in gesture space, to minimize the possibility of errors, but in practice they tend to be clumped (so “2” and “Z” are easily confused, while many other possible gestures are unused and wasted).
>But with pie menus, only the direction between the touch and the release matter, not the path. All gestures are valid and distinct: there are no possible syntax errors, so none of gesture space is wasted. There’s a simple intuitive mapping of direction to selection that the user can understand (unlike the mysterious fuzzy black box of a handwriting recognizer), that gives you the ability to refine your selection by moving out further (to get more leverage), return to the center to cancel, move around to correct and change the selection.
>Pie menus also support “Rehearsal” [7] — the way a novice uses them is actually practice for the way an expert uses them, so they have a smooth learning curve. Contrast this with keyboard accelerators for linear menus: you pull down a linear menu with the mouse to learn the keyboard accelerators, but using the keyboard accelerators is a totally different action, so it’s not rehearsal.
>Pie menu users tend to learn them in three stages: 1) novice pops up an unfamiliar menu, looks at all the items, moves in the direction of the desired item, and selects it. 2) intermediate remembers the direction of the item they want, pop up the menu and moves in that direction without hesitating (mousing ahead but not selecting), looks at the screen to make sure the desired item is selected, then clicks to select the item. 3) expert knows which direction the item they want is, and has confidence that they can reliably select it, so they just flick in the appropriate direction without even looking at the screen.
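To make the "only the direction matters, not the path" rule concrete, here is a toy selection function (not Hopkins' actual implementation) that maps the vector from button-down to release onto one of eight slices, with a small dead zone around the center meaning cancel:

    // Eight items, item 0 centered on "straight up", proceeding clockwise.
    function pieSelection(
      downX: number, downY: number,
      upX: number, upY: number,
      itemCount = 8,
      deadZoneRadius = 10
    ): number | null {
      const dx = upX - downX;
      const dy = upY - downY;
      if (Math.hypot(dx, dy) < deadZoneRadius) return null; // back at center: cancel

      // Screen coordinates: y grows downward. atan2(dx, -dy) puts 0 at "up"
      // and increases clockwise.
      let angle = Math.atan2(dx, -dy);                      // -PI .. PI
      if (angle < 0) angle += 2 * Math.PI;                  // 0 .. 2PI

      const slice = (2 * Math.PI) / itemCount;
      // Offset by half a slice so item 0 is centered on straight up.
      return Math.floor(((angle + slice / 2) % (2 * Math.PI)) / slice);
    }

    console.log(pieSelection(100, 100, 100, 40));  // straight up    -> 0
    console.log(pieSelection(100, 100, 160, 100)); // straight right -> 2
    console.log(pieSelection(100, 100, 103, 98));  // tiny move      -> null (cancel)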
A lot of the things being pointed out seem like non-issues. It seems to me that this doesn't really explore that knowledge-in-the-head UIs are actually a lot more straightforward and easy to use once you have the knowledge in your head, and most attempts to circumvent that bloat the UI. Also, whatever you give people, if it's a repetitive-use UI they tend to learn it and it turns into knowledge in the head, even if it's a knowledge-in-the-world type of UI; then you change it and people get confused.
Did you just have a stroke?
Just missing hyphens and a couple clarifying rephrases, no?
The rotational On-Off switch for a computer is cool and provides excellent feedback, but like many stateful electromechanical input elements it has the problem that it might get out of sync with the system it controls. E.g. what if the PC is shut down: it is practically off (you can't do useful stuff with it) but technically on (only in a weird shutdown state).
I am a fan of the conceptual clarity, but having to wait for my PC to shut down only to have to flip a switch myself is not good UX. The absolute ideal would be the switch mechanically turning to off once it is off, and such switches exist, but they are expensive and require extra electronics to drive the electromagnetic part. A really good example of this UX principle are the motorized faders in digital audio mixers: you can move them with your hand, but if you change to a different channel layout the mixer can move the faders for you. The downside of those is mainly cost.
The cheap 80/20 solution for the PC is a momentary push-button and a green/red LED to display the current state. Holding for 5s is power-off, because everything else has the danger of accidentally switching off, but this isn't obvious to the uninitiated.
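A toy sketch of that 80/20 behaviour (timings, names and the exact rules are invented): the momentary button holds no state of its own, so it can never disagree with the machine, and the LED reports the actual state:

    type PowerState = "off" | "on";

    class PowerControl {
      private state: PowerState = "off";
      private pressedAt: number | null = null;

      private led(): string {
        return this.state === "on" ? "green" : "red";
      }

      buttonDown(now: number): void {
        this.pressedAt = now;
      }

      buttonUp(now: number): void {
        if (this.pressedAt === null) return;
        const heldMs = now - this.pressedAt;
        this.pressedAt = null;

        if (this.state === "off") {
          this.state = "on";           // any press powers on
        } else if (heldMs >= 5000) {
          this.state = "off";          // long hold forces power off
        }                              // short press while on: ignored in this sketch
        console.log(`state=${this.state}, led=${this.led()}`);
      }
    }

    const pc = new PowerControl();
    pc.buttonDown(0);    pc.buttonUp(200);   // short press: on, green LED
    pc.buttonDown(1000); pc.buttonUp(6500);  // 5.5s hold: off, red LED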
I'm on the edge of my seat waiting to see how skeuomorphic icons will solve this and every other problem.
Kind of ironic since this website isn’t mobile optimized
This is a great post!
Mobile is a deliberately second-class platform, in many cases to prevent the user from closing an obtrusive window that serves an advertisement, or to provoke an inadvertent click on an ad. Many ads with malware simply don't present if the platform is not mobile, by design, from the creator.
Steve Jobs was mocking Microsoft for this kind of UI two decades ago when shipping the first iPhone.
None of this is new. But this kind of dysfunctional product is what a dysfunctional organization ships, despite knowledge.
Why? Because leadership wants features. Leadership also wants a clean, marketable product. Leadership also wants both of those done on a dime, quickly and doesn't care about the details. The only way to satisfy all constraints at the same time is to implement features and hide them so they don't clutter the UI.
The problem isn't awareness. It goes deeper.
This is one thing that pisses me off about modern computing. This is shit that we mostly already figured out, but people with no context decided that visual design was the most important part of UI design, with no forethought to usage or discoverability.
The golden age of computing is sadly long, long past.
Not just UI, but performance as well. It is absolutely inexcusable if your app struggles with loading text and images on modern hardware.
It's the computational equivalent of exploding a hydrogen bomb to open a chocolate wrapper.
A lot of these comments sound like people who can’t get with the times to be honest
I wish web developers would stop hiding the scrollbars and stop taking over the back button.
Also hiding key navigation behind hamburger menu instead of using tab bar should be discouraged.
Published on a site that isn’t responsive lol
> If you want to lock the door, then the hidden control problem becomes evident... to lock the door, I must know that the hidden control to lock is the pound key. To make matters worse, it's not a simple press of the pound key. It's a press of the pound key for a full five seconds in order to activate the lock sequence. The combination of the long temporal window and the hidden control makes locking the door nearly impossible, unless you are well acquainted with the system and its operation.
Isn't that kind of the point? You don't want people accidentally locking the door, but if it's your door, it's easy enough to remember how to do it.
Then put the lock operation on the fingerprint reader too. Doesn't that make more sense?
My gosh I was unaware there were so many old men shaking their fists at clouds here. The level of nitpicking here is ridiculous, none of this is hard, no one else seems to have any issues with most of this stuff, it seems to me like people are bored and want to be angry at something.
Touch grass people.
> no one else seems to have any issues with most of this stuff
In my experience, 9 times out of 10 what this actually means is that they just don't know it's an issue! The type of person who would be confused by, say, the iOS control center, is not necessarily the type of person who would easily identify and raise the issue of it being difficult to do something on their device. They would just be mildly annoyed that they can't figure it out, or that the device "can't do it", and move on to find some other way. You may not realize it if you don't interact with those types of people but they fundamentally do not think like you or I do and what may be an obvious problem-solving process to you (e.g. identify a problem, figure out what tools are at your disposal and whether each could be helpful, check for functionality that could do what you are wanting, ask for help from others if you can't figure it out on your own, etc.) may actually not always be so obvious.
That's why the main way I find out people don't know how to do something is from them seeing me do it with my device and going "what!! I didn't know it could do that!!"
This is the mistake allowing this phenomenon to continue. It is not a "Boomer" or old-person thing. It is a thing for people who enjoy other things in life than electronics. We've already wasted years of our lives learning how to use a bunch of weak features and apps that weren't worth the time. Now those are all gone and we have to learn more? Forget it. Your app is not worth it.
Guessing you’re not often called to explain e.g. iOS control center to a boomer?
If you have older loved ones, understanding their reality might go a ways towards growing empathy!
I think there are a couple of conflated aspects here - some of them are fine, and likely a consequence of computing devices being more ingrained in everyday life, and some of them are very hostile, and clearly intended to subvert the interests of the user.
As an example:
I think hiding controls in favor of "knowledge in the head", as the author phrases it, is absolutely fine when the user is presumed to be aware of features, should be able to understand they exist and know how to use them, and can reasonably learn them. Especially fine if those controls aren't used all that often, and are behind a keyboard shortcut or other common and efficient route to reach them.
On the other hand - I think there's also been a drive to visibly reduce how much control and understanding basic users might have about how a machine works. Examples of this are things like
- Hiding the scheme/path in browser url bars
- Hiding the file path in file explorers and other relevant contexts
- Hiding desired options behind hoops (ex - installing windows without signing into an account, or disabling personalized ads in chrome)
Those latter options feel hostile. I need to know the file path to understand where the file is located. I can't simply memorize it - even if I see the same base filename, is it in "c:/users/me/onedrive/[file]" or "c:/users/me/backed_up_spot/[file]"? No way to know without seeing the damn path, and I can have multiple copies floating around. That's intentional (it drives users to Microsoft's paid tooling), and hostile.
Basically - knowledge that can be learned and memorized can benefit from workflows that give you the "blank canvas" that the author seems to hate. Command lines are a VERY powerful tool to use a computer, and the text interface is a big part of that. R is (despite my personal distaste for it as a language) a very powerful tool. Much more powerful and flexible than SPSS.
But there are also places where companies are subverting user goals to drive revenue, and that can rightfully fuck right off.
One of my biggest complaints with modern computing is that "The internet" has placed a lot of software into a gray zone where it's not clear if it's respecting my decisions/needs/wants or the publisher's decisions/needs/wants.
It used to be that the publisher only mattered until the moment of sale. Then it was me and the software vs the world - ride or die. Now far too much software is like Judas, happy to sell me out if there's a little extra silver in it.
jetbrains could learn.
I really don't know why a power-user tools needs to hide all menus behind an extra hamburger menu.
because otherwise they wouldn't be able to fuck up their successful run of twenty-plus years
This is why I really despise “Material Design” and the whole Google aesthetic.
Look at Google Meet, for example. How many times have I tried to remember what the Share Screen icon looks like? Apple generally does this stuff far better: text labels, for example. Also, clicking some “+” icon to reveal more options — how does a “normal” person know what’s buried inside all of those click-to-reveal options?
Diversity in tech has always been a concern — but one concern I have is that diversity has always meant race, gender, or sexual orientation stuff — but a 28 year old Hispanic LGBT person doesn’t react to a UI much differently than a 28 year old Black hetero person. But a 68 year old Hispanic woman with English as a second language absolutely has potentially different UI understandings than an 18 year old white woman from Palo Alto.
Real diversity (especially age and tech experience levels) should be embraced by the tech companies — that would have a strong impact on usability. Computers are everywhere and we shouldn’t be designing UI around “tech people” understanding and instead strive for more universal accessibility — especially for products we expect “everyone” to potentially use. (Some dev ops tool obviously would have more latitude than an email app, but even then, let’s stop assuming users understand your visual language just because you do.)
I want to see more UX designers who are “old” rather than some clever kid who lives on Behance. I also want to see more design that isn’t created by typical higher educated designers who think everyone should understand things they take for granted. The blue collar worker that works construction, the grandmother from Peru, the restaurant cook, or the literature professor — whatever. Usability should be clear and obvious. That’s really hard — but that’s the job.
One of the original genius aspects of iPad is that a toddler can immediately start using it. We need all usability to be in that vein.
Alan Dye in shambles.
> Witness the navigation system in Apple Maps in CarPlay. The system developers obviously wanted to display as much map as possible, as shown in Figure 3 a). This makes sense, but to do that they relied on the use of hidden controls. If I want to enter a destination or zoom in on the map, I have to know to touch the bottom left-hand portion of the map
What? You don't have to touch any specific portion of the map. You tap anywhere and it brings up those controls.
I think this article largely has a point, and most of it seems true, but to me these bits of untruth are unamusing at best.
I sort of disagree with this: once I’ve internalized the gestures, I really appreciate the lack of UI for them. It’s like vim and emacs: the sparse ui creates a steeper learning curve but becomes a feature once you’ve learned the tool
It’s one thing to learn a few gestures that work consistently across the platform. But every app tends to do its own thing, and even if you are a power user of the respective apps and learn their idiosyncrasies, it’s still annoying that they all work in slightly or sometimes drastically different ways, and that they aren’t consistent in terms of discoverability.
That was the point of the article. Users with knowledge of how it works can do it fine, but new users can't.
Your average dev who's never used vim or vi will start frustrated by default.
My point is that no one is a new user forever and so I think we need to come up with a better solution than UI taking up screen space for things people end up doing via shortcuts. Menus and command palettes are great for this because they are mostly invisible.
The other important thing is learning to fit into the conventions of the platform: for example, Cocoa apps on Mac all inherit a bunch of consistent behaviors.
I started out with gVim with menu and toolbars. I quickly removed toolbars and after a while longer menus, as I didn't need them any more, they had taught me—though I seem to recall temporarily setting guioptions+=m from time to time for a while longer, when I couldn’t remember a thing. I think I had also added some custom menu items.
Being a modal editor probably makes removing all persistent chrome more feasible.
The default should be the cluttered UI for new users, and the customization option should be to make the UI cleaner by hiding the things you won't ever touch because you use shortcut keys.
The other way around is yeah, hostile. But of course it looks sleek and minimalistic!
On the early iPhones, they had to figure out how to move icons around. Their answer was: hold one of the icons down until they all start wiggling; that means you've entered the "rearrange icons" mode... Geezus christ, how intuitive. Having a button on screen which, when pressed, offers a description of the mode you've entered would be user-friendly, but I get the lack of appeal; for me it would feel so clunky, like UI design from the 80's.
Jef Raskin said it best:
"Another example is the absurd application of icons. An icon is a symbol equally incomprehensible in all human languages."
source https://ubiquity.acm.org/article.cfm?id=941396
“Once you’ve learned the tool”
I don’t have time to learn the tool. I want to use the tool immediately. Otherwise, I’m moving on.
Configurable options are certainly a good approach for those that know the tool well, but the default state shouldn’t require “learning.”
If you drive a car, you've demonstrated being willing to spend time learning a tool to take advantage of something being more efficient (than walking).
There is a tradeoff between efficiency and learnability, in some cases learning the tool pays off.
https://statetechmagazine.com/article/2013/08/visual-history...
Look at the image of 2.0. There is permanent screen space dedicated to:
I'm guessing you know the shortcuts for these. You learned the tool. But by taking up so much space, these are given the same visual hierarchy as the entirety of the word 'Wikimedia'!
>Configurable options are certainly a good approach for those that know the tool well, but the default state shouldn’t require “learning.”
In practice, IME, this just means there are combinatorially many more configurations of the software, and anything outside the default ends up clashing with the rest of the software and its development.
I love how everyone is a UX/UI specialist in this thread :) Exactly like the projects I worked on, where everyone had something to say in that area.
> A DOS command window. Without specific knowledge in the head, the user cannot perform a single action.
To be fair, even CLI environments provide some UI discovery. E.g. DOS had 'help' and it would list available commands and a short description.
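That kind of discovery needs surprisingly little machinery. A toy command loop (the commands here are invented for illustration, not DOS internals) where 'help' enumerates everything puts the "knowledge in the world" one well-known keyword away:

    const commands: Record<string, { describe: string; run: (arg?: string) => void }> = {
      help: {
        describe: "List available commands",
        run: () => {
          for (const [name, cmd] of Object.entries(commands)) {
            console.log(`${name.padEnd(8)} ${cmd.describe}`);
          }
        },
      },
      dir:  { describe: "List files in the current directory", run: () => console.log("...") },
      copy: { describe: "Copy a file", run: arg => console.log(`copy ${arg ?? ""}`) },
    };

    function dispatch(line: string): void {
      const [name, arg] = line.trim().split(/\s+/, 2);
      const cmd = commands[name?.toLowerCase() ?? ""];
      cmd ? cmd.run(arg) : console.log(`'${name}' is not recognized. Type "help".`);
    }

    dispatch("help");
    dispatch("frobnicate"); // unknown command: points the user back to help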