Archive for the category ‘tech’

Typographer’s lunch 6: the coming demise of PostScript fonts

When I recently opened a book file that had been created several years ago, InDesign informed me, “Type 1 fonts will no longer be supported starting 2023. Your document contains 1 Type 1 fonts.” It was easy enough to replace the Type 1 font with an OpenType version of the same typeface, but what does this portend for book publishers with long lead times and large backlists?

I asked Thomas Phinney, the former CEO of FontLab and a former Product Manager for fonts at Adobe, what he thought about this. He told me he had just gotten off an hour-long call with an unnamed university press to discuss exactly this question.

The OpenType font format has been around for more than 20 years, and pretty much every digital font foundry upgraded its library to OpenType long ago. But not every user has upgraded their own type library. Anyone involved in publishing has probably made a big investment in fonts and is not in a hurry to make the same investment all over again.

The fact is that it’s time to bite that particular bullet. Thomas Phinney’s advice is to start thinking about your upgrade path right now: make a plan, budget for it, don’t leave it to the last minute.

If you subscribe to Adobe Fonts, you already have all those fonts in OpenType format. It makes sense, Phinney points out, to inventory the fonts you commonly use that are not in Adobe’s library and plan to upgrade those fonts first.
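
If you want to automate that inventory, a few lines of Python will flag which font files on disk are still in Type 1 format, just from their leading bytes. This is a minimal sketch, not part of Phinney’s advice; the ~/fonts path and the report format are my own assumptions:

```python
# Minimal sketch: classify font files by their leading magic bytes so you
# can inventory which ones are still Type 1. The folder path is an assumption.
from pathlib import Path

SIGNATURES = {
    b"OTTO": "OpenType (CFF outlines)",
    b"\x00\x01\x00\x00": "TrueType / OpenType (TT outlines)",
    b"ttcf": "TrueType Collection",
    b"\x80\x01": "Type 1 (PFB)",  # binary Type 1: needs replacing
    b"%!PS": "Type 1 (PFA)",      # ASCII Type 1: needs replacing
}

def classify(path: Path) -> str:
    with path.open("rb") as fh:
        head = fh.read(4)
    for magic, kind in SIGNATURES.items():
        if head.startswith(magic):
            return kind
    return "unknown"

for f in sorted(Path("~/fonts").expanduser().rglob("*")):
    if f.is_file() and "Type 1" in classify(f):
        print(f, "->", classify(f))
```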

Incidentally, you don’t have to be actively using a Type 1 font to get that warning message when you open a document; if a Type 1 font is referenced in a paragraph or character style, even if you’re not using that style, it can trigger the warning.

Although there are apps for converting a Type 1 font into an OpenType font (notably FontLab’s TransType), the font’s license may not let you modify the font. Check with the font foundry to see what your options are.

[Originally published on December 1, 2021, in PPN Post and Updates, the newsletter of the Publishing Professionals Network.]

A history of TypeLab

At the beginning of 2020’s online virtual TypeLab, Petr van Blokland was telling the story of how TypeLab started in the early ’90s.

He described it as “a rogue version of ATypI,” which he and a few collaborators (among them Gerrit Noordzij, David Berlow, Erik van Blokland, and other ATypI designers) put together for the 1993 conference in Antwerp. It grew out of the experience in Budapest a year earlier, when the various international delegates who didn’t speak Hungarian found themselves milling around outside during a lecture on Hungarian type that was being delivered in Hungarian (naturally) without translation (unfortunately). It became apparent, Petr said, that there might be value in providing something else for people to do when they didn’t want to spend all their time in the official program. (In those days, ATypI conferences were fairly small, and they had only a single track of programming.)

After Budapest, Petr suggested to the ATypI Board of Directors that they plan some kind of informal alternative for the Antwerp conference, but the Board wasn’t willing to do that. So Petr and his friends set up their own alternative, which they dubbed TypeLab.

This was a time when digital typography was still thought of as new; it was only three years since Zuzana Licko had épaté la typoisie at Type90 with her HyperCard-based, music-enhanced presentation on fonts for the screen. Very little content about digital type had made its way into ATypI’s main program so far, and what had been included was largely theoretical. TypeLab was meant to be a sort of hands-on side-conference, an experimental laboratory, with a room full of equipment where anybody could try out the new technologies.

They managed to secure sponsorship from Agfa, which made it possible to have the computers, software, and printers all freely available.

“The room of 15 x 15 meters,” says Petr, “was divided into four quarters: a little lecture theatre of 40 chairs, a design studio with Macs and software, a ‘lounge’ where people could sit, talk, and show their sketches and drawings (note that there wasn’t anything like phones or laptops back then), and a printing department (loaded with printers, a typesetter, and copying machines).

“The board of ATypI didn’t go for the idea, so we planned to rent a space on the other side of the street. In the summer of 1993 Agfa, the main sponsor of ATypI that year in Antwerp, got wind of the idea, so Petr got invited to the Antwerp headquarters in late July. The appointment was made with the chairman of the board of Agfa, and also present was the then chairman of ATypI, who still didn’t want TypeLab to happen. But Agfa left ATypI no choice and promised the intended lunch space to TypeLab, also allowing a wish list for equipment.”

Over the course of the conference, they made their own magazine for the delegates, conceived and printed on the fly, using fonts that had been created right there just the day before. “The A3 printed newspapers, ready at breakfast for the attendees, were indeed made with the type that was created the day before. Many traditional/regular ATypI participants thought that to be impossible. Making type was something costing years, not days.”

Petr recalls a student at the Antwerp conference telling him how Adrian Frutiger had wandered into the lab, and the student had shown him how Fontographer worked – a technology that Frutiger was completely unfamiliar with at the time.

That was the first of six TypeLabs, Petr said, the last one being held at the 1996 conference in The Hague. By that time, Petr himself was on the ATypI Board, and from then on, the essence of TypeLab got incorporated into the regular conference program. It was no longer necessary as a guerrilla alternative; it had arrived.

Five years ago, TypeLab got revived as an adjunct to the Typographics conferences that were getting started at Cooper Union in New York. Organizer Cara di Edwardo had suggested that there ought to be some sort of program on the side during the main conference, so Petr re-created TypeLab for the occasion. It has been a Typographics fixture ever since, and this year, because of the coronavirus pandemic, TypeLab became an online-only, virtual event (“a 72-hour marathon,” says Petr), with participants and audience from around the world.

[Image: big blue TypeLab-branded folder for conference materials, from ATypI 1993 in Antwerp.]

Facing the world, typographically

On Dec. 1 & 2, Stanford University hosted “Face/Interface,” a small conference on “Type Design and Human-Computer Interaction Beyond the Western World.” The conference was held in conjunction with an exhibition at Stanford’s Green Library: “Facing the World: Type Design in Global Perspective.” The exhibition, organized by Becky Fischbach, runs until March 24. (Go see it!)

The organizer of Face/Interface was Thomas S. Mullaney, an associate professor of Chinese history at Stanford who has spoken at ATypI and who wrote the canonical book on the history of the Chinese typewriter. Tom is an indefatigable organizer and a generous host, with a clear idea of what is required to make an event like this a success (and a ruthless way with a stopwatch, if speakers run over).

The roster of scheduled speakers was impressive. I knew this would be a notable event, but, as everyone seemed to agree, it turned out to be even better than we had been expecting. There was not a single talk that I was willing to miss, even first thing in the morning, and the interplay among them, dealing with varying languages and technologies and cultures, wove a rich tapestry of ideas. Which is exactly what a scholarly conference ought to do.

Not surprisingly, there were a number of references to an earlier typographic event at Stanford: the famous 1983 ATypI Working Seminar, “The Computer and the Hand in Type Design,” which was recently written about in an article by Ferdinand Ulrich in Eye magazine. That 1983 seminar had been organized by Chuck Bigelow, who at the time was an associate professor of typography at Stanford (the only person ever to hold such a position there – so far). And Bigelow was one of the closing speakers this year, thus tying together these events 34 years apart. (Donald Knuth, also a key figure of the 1983 seminar, dropped by on Friday for a while, though he had no official involvement in this year’s event.) I wouldn’t be surprised if Face/Interface figured as prominently in future typographic memory as the 1983 gathering has over the last three decades. It felt like a pivotal moment.

Highlights for me included Thomas Huot-Marchand on the contemporary successor to the Imprimerie nationale; Bruce Rosenblum’s highly personal account of “Early Attempts to Photocompose Non-Latin Scripts”; Liron Lavi Turkenich’s visual tour through trilingual signage in Israel; Lara Captan’s tour-de-force performance, “Facing the Vacuum: Creating Bridges between Arabic Script and Type”; Gerry Leonidas on Adobe’s treatment of Greek typefaces; and the other two closing talks (mine was sandwiched between them), by Chuck Bigelow and John Hudson. Other notable memories include Tom Milo projecting his ground-breaking live-text Qur’an technology on a wall-sized screen in the Stanford maps collection, upstairs from the exhibition reception, and a lively conversation with Chuck Bigelow over breakfast on the last day.

For those speakers who didn’t have to rush off on Sunday, there was an informal brunch and tour of the Letterform Archive in San Francisco, where Rob Saunders showed off his collection and ended up selling off some of his duplicates to eager collectors such as myself.

[Images, top to bottom: Chuck Bigelow, John Hudson, & John D. Berry after the closing presentations (photo by Chen-Lieh Huang); Chuck Bigelow at the podium; Sumner Stone, asking a question from the audience; John D. Berry at the podium (photo by Eileen Gunn); Becky Fischbach & Fiona Ross outside the hotel in Palo Alto; Rob Saunders’s hands showing off the original Depero bolted book at the Letterform Archive.]

Typography of the future: variable fonts

I’ve just finally watched the video of the “Special OpenType Session” from ATypI 2016 Warsaw in September. (Because of scheduling and flight conflicts, I didn’t arrive in Warsaw until the evening of that day, so I missed the live event. Not surprisingly, it was the talk of the town among attendees at the conference.) The discussion in the video is highly technical, but the upshot of this development is exciting.

“Variable fonts” seems to be the name that everyone’s adopting for this new extension of the OpenType font format. What it means is that an entire range of variations to a basic type design can be contained in a single font: all the various styles from Extra Light to Extra Bold, for instance, and from Compressed to Extended. Instead of a super-family of separate font files, you can have one font that, conceivably, contains them all.
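
To make that concrete: the open-source fontTools Python library can list a variable font’s axes and cut a single static instance from it. A small sketch, with hypothetical file names and axis values:

```python
# Sketch using fontTools: inspect a variable font's axes, then pin two of
# them to fixed values to produce one conventional, non-variable font.
from fontTools import ttLib
from fontTools.varLib import instancer

vf = ttLib.TTFont("MyVariableFont.ttf")  # hypothetical variable font
for axis in vf["fvar"].axes:
    # prints e.g.: wght 100.0 400.0 900.0
    print(axis.axisTag, axis.minValue, axis.defaultValue, axis.maxValue)

static = instancer.instantiateVariableFont(vf, {"wght": 700, "wdth": 75})
static.save("MyFont-BoldCondensed.ttf")
```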

The presentation had representatives from Adobe, Microsoft, Apple, and Google, reflecting the fact that this is truly a cooperative effort. All four major companies (and several smaller ones) have committed to supporting and implementing this new standard. That’s a very important fact: usually, adventurous people come up with an ambitious new spec for wonderful typographic features, but the problems arrive when the developers of operating systems and applications don’t fully commit to supporting them. This time, from the very first, the companies that develop those apps and OSes are committed.

What that means is that, if it’s implemented properly, the new format will make it possible for font developers to create fonts that adapt to changing circumstances. For instance, in a responsive web layout, you might change the width of the text font as the width of the window gets narrower or wider. You could also change the weight subtly when the screen colors are reversed. These small, almost unnoticeable, but very important variations could make reading onscreen much more comfortable and natural.

This is a watershed. What it reminds me of is two different nodal points in the development of digital type: multiple master fonts, and web fonts. The introduction of variable fonts at this year’s ATypI conference has the same “Aha!” and “At last!” feeling that the introduction of the WOFF font format standard for web fonts had at Typ09, the 2009 ATypI conference in Mexico City. Both events mark the coming-together of a lot of effort and intelligent work to make a standard that can move the typographic world forward.

The history of multiple master fonts is sadder, and it points up the pitfalls of creating a good idea without getting buy-in from all the people who have to support it. The multiple-master font format was a breakthrough in digital type; with its flexible axes of variable designs, it made possible a nearly infinite variation along any of those design axes: a weight axis, a width axis, or (most promising of all) an optical-size axis, where the subtleties of the design would change slightly to be appropriate to different sizes of type.

But the multiple master technology, developed by Adobe, never made it into everyday use. The various Adobe application teams didn’t adopt it in any consistent or enthusiastic way, and it wasn’t adopted by other companies either. Instead of being incorporated into the default settings of users’ applications, giving them the best version of a font for each particular use, multiple master was relegated to the realm of “high-end typographers,” the experts who would know how to put it to use in airy, refined typographic projects. That’s not the way it should have worked; it should have been made part of the default behavior of fonts in every application. (Of course, users should have had controls available if they wanted to change the defaults or even turn it off; but the defaults should have been set to give users the very best, most appropriate typographic effects, since most users never make any changes to the defaults at all. It’s important to make the defaults as good as possible.)

Now it sounds like the new variable-fonts technology is going to be incorporated into the operating systems and the commonly used applications. If this really happens, it will improve typography at the everyday, ordinary, pragmatic level. And what that means is the improvement of communication.

I’m looking forward to seeing how this works in practice. And to putting it to use myself, and helping in any way I can to improve and implement these new standards.

[Images: (left, top) Peter Constable speaking at the Special OpenType Session at ATypI Warsaw, September 2016; (left) schematic of the design variations of Adobe’s Kepler, designed by Robert Slimbach.]

Opening up OpenType

Besides their other useful aspects, OpenType fonts may include a variety of alternate glyphs for the same character: anything from capital and lowercase forms to small caps, superscripts, subscripts, or swash versions. In non-Latin fonts, they may include other related variants, such as the several forms that each character takes in Arabic, and alternate forms preferred in different languages. In some large calligraphic fonts, there may be quite a lot of alternate forms available – but in the past it’s been hard to find them and put them to use.
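
If you’re wondering which alternate-glyph features a particular font actually carries, they’re recorded in its GSUB table, and a few lines of Python with the fontTools library will list them. A quick sketch; the file name is a placeholder:

```python
# Sketch: list the OpenType substitution features (GSUB) a font advertises,
# e.g. 'salt' (stylistic alternates), 'swsh' (swashes), 'smcp' (small caps).
from fontTools.ttLib import TTFont

font = TTFont("SomeCalligraphicFont.otf")  # placeholder file name
if "GSUB" in font:
    feats = {rec.FeatureTag for rec in font["GSUB"].table.FeatureList.FeatureRecord}
    print(sorted(feats))
```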

The most recent update to Adobe’s InDesign CC (2015.2, released Nov. 30) finally addresses this problem. You could always go spelunking in the Glyphs palette to find alternates, but now you have a more direct method: simply select a single character in a text string, and any OpenType alternate forms appear in a small pop-up right on your layout page. Choose one, and it replaces the selected character.

This is the first fruit of a popular groundswell that got started at the ATypI conference in Barcelona last year: type users needed better ways of using OpenType layout features, and petitioned Adobe to improve their products. This new feature in InDesign is a good start.

It has a few glitches, though. Sometimes the relationships among the displayed alternates are not obvious. In Adobe Caslon Pro, for instance, many of the ordinary letters show among their alternates one of the Caslon ornaments. That’s a little odd.

One practical limitation of the current version of this feature is the size of the glyphs in the pop-up. The example shown on Adobe’s tutorial page (left, above) uses a large, bold, flashy typeface (Lust Script) with obvious swash features; it’s not hard to make out the alternates on the screen. But if you try the same thing with a typeface like Bickham Script Pro, which has a very small x-height, it’s virtually impossible to tell one alternate from another.

[Image: InDesign pop-up showing OpenType alternates for Bickham Script Pro]

The InDesign team added another useful capability while they were figuring out how to access alternate glyphs. Since an OpenType font may include real fractions, you can now select a string of numerals, with a slash in the middle, and turn it into a fraction, using a pop-up much like the one for glyph alternates. How well the fraction is constructed will depend on the font, but if the function is in the font, you can now get at it easily.
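
You can watch the same font-supplied fraction machinery at work outside InDesign, too, by shaping text with the HarfBuzz engine. A sketch using the uharfbuzz Python bindings, with a hypothetical font file:

```python
# Sketch: ask HarfBuzz to shape "1/2" with the 'frac' feature off and on.
# If the font implements the feature, the glyph sequence changes to
# numerator / fraction-slash / denominator forms.
import uharfbuzz as hb

blob = hb.Blob(open("SomeFont.otf", "rb").read())  # hypothetical font file
font = hb.Font(hb.Face(blob))

for features in ({"frac": False}, {"frac": True}):
    buf = hb.Buffer()
    buf.add_str("1/2")
    buf.guess_segment_properties()
    hb.shape(font, buf, features)
    print(features, [info.codepoint for info in buf.glyph_infos])
```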

Way to go, Adobe! Don’t stop now.

P.S. Yves Peters has done a more in-depth exploration of these features, pointing out some useful things that I had missed. Check it out.

FontCasting

During last year’s TypeCon in Washington DC, FontShop’s David Sudweeks videotaped interviews with a number of type designers, and with at least one non-type-designer: me. He asked questions about how I’d gotten started in the field of typography (“sideways”) and about book design, which gave me an opportunity to set out my ideas about the typography of onscreen reading, and the nascent Scripta Typographic Institute. (That’s a subject that I’ll be taking up again at ATypI 2015 in São Paulo next month.)

Now that interview has been published. The parts about book design & e-book design start at 1:25, after some introductory material.

All of the FontCast interviews are short, focused, and well edited.

Structured writing for the web

At the end of June, at the Ampersand conference in Brighton, Gerry Leonidas gave a shout-out to an early version of the prospectus for Scripta (“Typographic Think Tank”) in his talk. I had somehow missed this until Tim Brown mentioned it in an e-mail recently inquiring about Scripta. I can highly recommend Gerry’s talk, and not only because he quotes me (7:07–7:49 in the video). Although he starts out with a disclaimer that “this is a new talk” and he’s not sure how well it will hang together, in fact it’s extremely coherent; Gerry is both articulate and thoughtful about the wide range of questions (and, rarely, answers) involved in typography on the web.

Gerry used my “wish list” from “Unbound Pages” (in The Magazine last March) as a jumping-off point for his own ideas about the structure of documents and the tools that he wants to see available. He wants tools for writers, not just for designers, that will make it easy to create a well-structured digital document, one that will maintain its integrity when it gets moved from one format to another (as always happens today in electronic publishing). Gerry’s own wish list begins at 20:47 in the video, though you won’t want to skip the entertaining steps by which he gets there.

What he proposes is a way to separate the sequence of information from its relative importance and interrelatedness. “This is what I really want: I want someone to go out there and take Markdown, which I use constantly, and take it from something that clearly has been written to deal with streams of stuff with some bits thrown on the side … and allow me to have this extra intelligence in the content – while I’m writing it – that will tell me how important something is, what sequence it has with other things, and will then allow me to ditch quite a lot of this stuff that is happening there.” The “stuff” he wants to ditch is all the hand-crafted formatting and positioning that makes a digital document cumbersome and difficult to translate from one form to another.
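
No such tool exists yet (that was Gerry’s point), but to make the idea concrete, here is one purely hypothetical shape it might take: Markdown blocks tagged with importance and sequence metadata that a tool could read, reorder by, or strip. The {key=value} syntax and the code are invented for illustration:

```python
# Purely hypothetical sketch of the kind of annotation Gerry describes:
# block-level importance/sequence metadata riding along with the text.
import re

doc = """\
# Results {importance=1 seq=3}
The finding itself.

## Method details {importance=3 seq=2}
How we got there.
"""

ATTR = re.compile(r"\s*\{([^}]*)\}\s*$")

for line in doc.splitlines():
    m = ATTR.search(line)
    if m:
        meta = dict(pair.split("=") for pair in m.group(1).split())
        print(ATTR.sub("", line), "->", meta)
```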

The problem is, as Gerry admits, training people to write with structure in mind. (Every editorial designer who has tried to get writers to use paragraph and character styles will break out into a hollow laugh at this point.) What he’s advocating is tools that will make this easy to do, instead of something that only makes sense to experts. I think he was a little disappointed that nobody leapt up at the end of his talk to say, “We’ve already done that!” But perhaps he has planted the seed.

The Magazine & I

There’s something curiously recursive about linking to a website to read an article I wrote for an app-based magazine. But happily, The Magazine has just expanded its subscription base from its iOS6 app to include web subscriptions; and even non-subscribers can access and read one full article each month. (Other links that month will just show previews: the first couple of paragraphs.) So you can go right now and take a look at “Unbound pages,” my article about the typography of reading onscreen, in the Mar 28 issue.

I questioned the initial limitation of The Magazine as an iOS-only app (indeed, one that requires iOS6, which lets out older iOS devices like my first-generation iPad). But what that limitation did was focus squarely on a specific audience: tech-savvy readers who are likely to be early adopters and who work or play in one of the creative fields (which tilt strongly toward Apple’s digital ecosystem). It also meant that the app itself could be clearly and simply designed.

While there’s a difference between how the app displays an issue on an iPhone and on an iPad, the basic typographic treatment is fixed; the only real change appears if you shift the orientation, which affects how long the lines of text are. (This is all directly related to what I was writing about.) The display of the articles on the website is similarly simple and limited, but by the nature of current web-design tools (again, directly related!) it’s less typographically sophisticated, less well adapted to the screen “page.” But you can subscribe to the web version even if you don’t own an iOS6 device; that expands the potential readership.

With the current update to The Magazine’s software, you can also link, as I’ve just done, to a single article that a non-subscriber can read in its entirety, along with previews of other articles in the issue. That “porous paywall” is a smart marketing move: it encourages dissemination of material from The Magazine without giving it all away. (And the terms of The Magazine’s contracts are that they’re only buying one month of exclusivity anyway; authors can do anything they want after that first month, including posting the article for free on their own website or broadcasting it to the entire world. This seems like a model that’s well adapted to the realities of current online publishing.)

Go ahead and subscribe. There’s a bunch of other good material in the new issue, and I expect that I’ll write for The Magazine again.

[Image: detail of one of the illustrations done for my article by Sara Pocock.]

What is needed

Books are digital. This is not, strictly speaking, true; but it’s about to be, with a few honorable exceptions. Already today, pretty much all commercial books are produced digitally, although the end product is a physical one: ink printed on paper, then bound and marketed and sold. Already, the selling may be done as often online as in a bookstore. Already, the same books are being issued more and more in electronic form – even if, as yet, the e-books are mostly very shoddy in conception and execution.

But that will change. In order for it to change in a worthwhile way, we have to spell out just what form these books ought to take.

So what’s needed? How do we make good e-books? What should a good tool for designing and creating e-books look like and do? What should the result – the e-book itself – be capable of? And what should the experience of reading an e-book be like?

Last question first. If it’s immersive reading – a story or narrative of some kind – then you, as the reader, should be able to lose yourself in the book without thinking about what it looks like or how it’s presented. This has always been true for printed books, and it’s equally true for e-books.

But e-books present a challenge that printed books do not: the page isn’t fixed and final. At the very least, the reader will be able to make the font bigger or smaller at will, which forces text to reflow and the relative size of the screen “page” to change. That’s the minimum, and it’s a fair bet already today. But the reader may read the same book on several different devices: a phone, a laptop, a tablet, a specialized e-reader, or even the screen of a desktop computer.

For a real system of flexible layout in e-books and e-periodicals that might be viewed on any number of different screens at different times, what’s needed is a rules-based system of adaptive layout. I like to think of this as “page H&J”: the same kind of rules-based decision-making on how to arrange the elements on a page as normal H&J uses to determine line endings.

The requirements for this are easy to describe – maybe not so easy to implement. We need both design & production tools and the reading software & hardware that the result will be displayed on.

A constraints-based system of adaptive layout

The interesting problems always come when you have two requirements that can’t both be met at the same time. (For example: this picture is supposed to stay next to that column of text, but the screen is so small that there isn’t room for both. What to do?) That’s when you need a well-thought-out hierarchy of rules to tell the system which requirement takes precedence. It can get quite complicated. And the rules might be quite different for, say, a novel, a textbook on statistics, or an illustrated travel guide.
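
As an illustration of the idea (a toy, not any real layout engine): give each rule a priority, and when the page can’t satisfy them all, relax the least important rule first.

```python
# Illustrative sketch of a rule hierarchy: when constraints conflict,
# drop the lowest-priority one and try again. Rules and the fits() test
# are invented for illustration.
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    priority: int  # lower number = more important

rules = [
    Rule("image stays beside its text column", priority=3),
    Rule("caption stays with its image", priority=1),
    Rule("column width >= 18 grid units", priority=2),
]

def resolve(rules, fits):
    """Drop least-important rules until the page fits."""
    active = sorted(rules, key=lambda r: r.priority)
    while active and not fits(active):
        dropped = active.pop()  # least important goes first
        print("relaxing:", dropped.name)
    return active

# On a phone-sized page, pretend only two constraints can be satisfied:
print([r.name for r in resolve(rules, fits=lambda rs: len(rs) <= 2)])
```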

OpenType layout support. This means support for the OpenType features that are built into fonts. There are quite a few possible features, and you might not think of them as “layout”; they affect the layout, of course, in small ways (what John Hudson has called “character-level layout”), but they’re basically typographic. Common OpenType layout features include different styles of numerals (lining or oldstyle, tabular or proportional), kerning, tracking, ligatures, small-caps, contextual alternates, and the infinitely malleable “stylistic sets.” In complex scripts like Arabic, Thai, or Devanagari, there are OpenType features that are essential to composing the characters correctly. None of these features are things that a reader has to think about, or ought to, but the book designer should be able to program them into the book so that they’re used automatically.

Grid-based layout. It seems very obvious that the layout grid, which was developed as a tool for designing printed books, is the logical way to think about a computer screen. But it hasn’t been used as much as you’d imagine. Now that we’re designing for screens of varying sizes and shapes, using a grid as the basis of positioning elements on the screen makes it possible to position them appropriately on different screens. The grid units need to be small enough and flexible enough to use with small text type, where slight adjustments of position make a world of difference in readability.

Media query. This is the name used for the question that a program sends to the device: What kind of device are you? What is the resolution of your screen? How big is that screen? What kind of rendering system does it use for text? With that information, the program can decide how to lay out the page for that screen. (Of course, the device has to give back an accurate answer.)
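
A toy version of that exchange might look like this; the profile fields and the threshold are invented for illustration.

```python
# Toy "media query": the layout engine asks the device to describe itself,
# then branches on the answer. Field names are invented.
from dataclasses import dataclass

@dataclass
class DeviceProfile:
    kind: str            # "phone", "e-reader", "desktop", ...
    screen_px: tuple     # (width, height) in pixels
    ppi: int             # resolution
    renderer: str        # text rendering, e.g. "grayscale", "subpixel"

def choose_layout(d: DeviceProfile) -> str:
    if d.screen_px[0] / d.ppi < 4:  # physical width under ~4 inches
        return "single narrow column, no sidenotes"
    return "two columns, marginal notes allowed"

print(choose_layout(DeviceProfile("phone", (1080, 2340), 460, "subpixel")))
```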

Keep & break controls. These are rules for determining what elements have to stay together and what elements can be broken apart, as the page is laid out. This means being able to insist that, say, a subhead must stay with the following paragraph on the page (keep); if there isn’t room, then they’ll both get moved to the next page. It also means that you could specify that it’s OK to break that paragraph at the bottom of the page (break), as long as at least two lines stay with the subhead.

Element query. I’ve made up this term, but it’s equivalent to media query on a page level. The various elements that interact on a page – paragraphs, columns, images, headings, notes, captions, whatever – need a way of knowing what other elements are on the page, and what constraints govern them.

H&J. That stands for “hyphenation and justification,” which is what a typesetting program does to determine where to put the break at the end of a line, and whether and how to hyphenate any incomplete words. Without hyphenation, you can’t have justified margins (well, you can, but the text will be hard to read, because it will be full of gaping holes between words – or, even more distracting, extra spaces between letters). Even unjustified text needs hyphenation some of the time, though it’s more forgiving. When a reader increases the size of the font, it effectively makes the lines shorter; if the text is justified, those gaps will get bigger and more frequent. But there are rules for deciding where and how to break the line, and a proper H&J system (such as the one built into InDesign) is quite sophisticated. That’s exactly what we need built into e-book readers.

In digital typesetting systems, the rules of H&J determine which words should be included on a line, which words should be run down to the next line, and whether it’s OK to break a word at the end of the line – and if so, where. A system like InDesign’s paragraph composer can do this in the context of the whole paragraph, not just that one line. A human typesetter makes these decisions while composing the page, but when the font or size might be changed at any moment by the reader, these decisions need to be built into the software. In “page H&J,” where the size and orientation of the page itself might change, the whole process of page layout needs to be intelligent and flexible.
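
The simplest form of the line-breaking half is easy to sketch: a greedy first-fit breaker, far cruder than a whole-paragraph composer, and without hyphenation.

```python
# Greedy first-fit line breaker: the crudest form of the "J" in H&J.
# Real composers (Knuth-Plass-style, or InDesign's paragraph composer)
# instead optimize break points over the whole paragraph at once.
def break_lines(text: str, width: int) -> list[str]:
    lines, line = [], ""
    for word in text.split():
        candidate = f"{line} {word}".strip()
        if len(candidate) <= width:
            line = candidate
        else:
            if line:
                lines.append(line)
            line = word  # no hyphenation in this sketch
    if line:
        lines.append(line)
    return lines

for l in break_lines("The quick brown fox jumps over the lazy dog near the riverbank", 20):
    print(f"|{l:<20}|")
```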

Up until now, in the digital work flow, the software’s composition engine has been used in the creation of the published document; the human reader is reading a static page. But now, with flexible layout and multiple reading devices, the composition engine needs to be built into the reading device, because that’s where the final page composition is going to take place.

It’s easy to create a document with static pages that are designed specifically for a particular output device – a Kindle 3, for instance, with its 6-inch e-ink screen, or a 10-inch iPad. I’ve done it myself in InDesign and turned the result into a targeted PDF. But if that’s your model, and you want to target more than one device, you’ll have to produce a new set of static pages for each different screen size and each different device. Wouldn’t it be better to have a flexible system for intelligently and elegantly adapting to the size, resolution, and rendering methods of any device at all?

[Photo: a 17th-century Mexican handbook, about the size of a hand-held device, from the collection of the Biblioteca Palafoxiana, displayed during Typ09 in Mexico City. With ink show-through from the back of the page, which will probably not be a feature of e-books.]

Substrate

I’ve been musing about that wonderful word substrate, and contemplating its many permutations. The word has uses in biochemistry and philosophy, but the meaning that intrigues me is literal. By its etymology, a substrate is an “under-layer,” or what lies behind or underneath something. When it comes to letters, the substrate is the surface you write or print on.

The substrate gives typography its third dimension. Even when the surface is perfectly flat, it’s the surface of something. In printing, the substrate is the paper (and the occasional non-paper surfaces that people choose to print on). The substrate for digital type is the screen that it appears on, whether that screen is held in your hand or propped on your desk. (Or, indeed, mounted on the wall in your living room or a theater.)

Printing, in all its many forms, deposits ink on the paper. Type on screen is projected out of the substrate’s surface (and from there into our eyes). In e-ink and other kinds of smart paper, the letters are actually displayed inside the substrate. The substrate is the physical ground of “figure & ground.”

Essentially, type is about the nature of the substrate and how the type is rendered on that surface. In traditional printing, this is a matter of inking and presswork. On a screen (like this one), it depends on resolution, and on all the many tricks for making the rendering appear finer than it really is.

Printing or display depends on the relationship between substrate and rendering. Everything else – the real heart of typography – is arranging.

[Photo: “Rock 6,” copyright Dennis Letbetter.]