Aspect ratios in Errol Morris's Wormwood

Thanks to Peter Oleksik for sending me this “tip” last week:

“The aspect ratios in Wormwood are BANANAS”

I was curious and have some free time, as I was spared from family events this holiday season and don’t want to leave my apartment because it’s freezing outside, so I took a look. They are, indeed, BANANAS.

For context on aspect ratios, you might want to read or re-read the pair of blog posts Peter and I wrote investigating the aspect ratios in Beyonce’s LEMONADE. The first part is available here and links to the second post and the pre-post about video errors in Formation. They go into the history of aspect ratios and the ways we talk about them, as well as common errors and degradation found in home movies, which I’ll also cover a little bit. I’m not going to repeat what we said in those posts.

On with the show!


Wormwood is a new “docudrama” directed by Errol Morris and recently released on Netflix. It follows the story of a CIA agent’s death in 1953, told primarily in interview format but also through dramatized recreations of events, home movie footage, news footage, and collage.

By default, Morris works within a 2.39:1 frame, and this is how Wormwood opens, as shown above. This is the wider of the two most common cinema aspect ratios, the other being 1.85:1. Both are wider than HD television’s 1.77:1. A 1.77:1 image will “fill up the entire screen” if you are watching Netflix on your laptop or home television, so this feature will still have letterboxing at the top and bottom of the screen.

It shortly shifts to splitting the frame in half, so there are now two frames with a 1.85:1 aspect ratio each, or 3.7:1 side by side (maybe slightly more due to the frameline in the middle – my ability to determine sizes comes down to clumsily taking screenshots and dividing pixel width by pixel height).
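For the curious, the measure-and-divide method described above can be sketched in a few lines of Python. The pixel dimensions below are hypothetical stand-ins, not measurements taken from the actual episodes:

```python
def aspect_ratio(width_px: int, height_px: int) -> str:
    """Express a screenshot's pixel dimensions as a width:1 ratio string."""
    return f"{width_px / height_px:.2f}:1"

# Hypothetical screenshot measurements, not from the actual episodes:
print(aspect_ratio(1920, 803))  # a full letterboxed frame -> "2.39:1"
print(aspect_ratio(888, 480))   # one half of a split frame -> "1.85:1"
```

It’s crude, but it gets close enough to recognize the conventional ratios when they appear.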

Anything 3:1 and above is super unconventional – I was notably bewildered when Beyonce took some shots in LEMONADE to 3.5:1. Is it because Wormwood is widening our perception of reality???


Below seems to be something approximating 3.17:1, with roughly a 1.58–1.59:1 aspect ratio for each half. This still crops the top and bottom from the original broadcast footage, which would have been 1.33:1, but allows for a fuller frame. Two standard-definition frames next to each other would come to 2.66:1, which is wider than the default chosen ratio of 2.39:1. But instead of trimming each frame down slightly to fit 2.39:1, Morris crops even tighter, to an unconventional ~3.17:1.
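The side-by-side arithmetic works out like this (a quick sketch; the 1.585 per-half value is just my approximation from above):

```python
def combined_ratio(per_frame: float, n: int = 2) -> float:
    """Overall aspect ratio of n equal-height frames placed side by side."""
    return per_frame * n

print(round(combined_ratio(1.33), 2))   # two uncropped SD frames: 2.66
print(round(combined_ratio(1.585), 2))  # two ~1.58:1 crops: 3.17
print(round(2.39 / 2, 3))               # per-frame ratio that would fit 2.39:1: 1.195
```

So to stay inside 2.39:1, each half would have to be squeezed down to a boxy ~1.2:1 – and Morris clearly preferred going wider instead.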

Is this supposed to make us uncomfortable? Because it makes me uncomfortable! So wide and so far away from cinematic conventions! It’s almost as if we have been secretly drugged.


Here, below and above, you can see how the footage fits into a traditional television set (1.33:1). Below, you can compare it to how it was cropped on the right to create a 3.285:1 aspect ratio from two 1.64:1 halves. (I know my ratios are not scientific – but this really does seem to be different from the very similar shot up above! Morris, tell me what you’re thinking!)


Below is the widest ratio used: two 2.35:1 frames side by side, creating a massive aspect ratio of 4.7:1! Our minds are now FULLY EXPANDED.


Sometimes the frame splits into more than just halves, subdividing into many smaller frames while keeping the overall 2.39:1 aspect ratio. These come in at 3:1, with a 2:1 aspect ratio each. Below are two examples of that (and of the many, many cameras used while filming the interviews). AHHHH ALTERNATE DIMENSIONS??



The 4-by-4 display used in the introduction sequences and elsewhere fits into the 2.39:1 ratio, with each frame of the 4x4 grid also being 2.39:1 (a square grid of frames always keeps the ratio of its cells). Logical.


This frequent use of multi-angle perspectives complements the collage effect (both are relevant and complementary to the story), which appears in animated sequences like the one below:


Wormwood intercuts a lot of analog archival footage, all shot in standard definition at 1.33:1. But unlike standard cropping methods, the material is cropped in a way that emphasizes the edges of the frame, like this edge-to-edge film scan with the optical soundtrack (sound recorded on film) visible on the left:


Analog errors

Wormwood relies heavily on archival footage to move the story forward, and I just mentioned the irregular framing. I like the shot below because it showcases so many naturally occurring artifacts related to film, particularly home movies. This is (or is supposed to be) Standard 8mm, identifiable from the size and position of the perforations. There are scratches on the film (maybe hard to see in this down-rezzed version), the sprocket holes are visible, the text printed along the film edge is visible, and the film jitters, suggesting worn sprockets and/or light leaks. Also, on the right is an irregular film line. Some of these errors are probably computer-generated or cut in from a different source, but I won’t speculate. Some of the sizing just doesn’t add up here. What is reality, anyway, after watching Wormwood?


Video footage is used heavily too, and the interlacing lines are visible – though I can’t tell whether the lines are there because only one field is being pulled, or whether they are fake lines added to emphasize that this is video.
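For anyone unfamiliar, “pulling one field” just means keeping every other scanline of an interlaced frame. A toy sketch of the idea (rows as lists – an illustration of the concept, nothing to do with how Wormwood was actually produced):

```python
def take_field(frame: list[list[int]], top: bool = True) -> list[list[int]]:
    """Keep only one field (every other scanline) of an interlaced frame."""
    return frame[0 if top else 1 :: 2]

frame = [[1, 1], [2, 2], [3, 3], [4, 4]]  # four scanlines
print(take_field(frame))                  # top field: scanlines 1 and 3
print(take_field(frame, top=False))       # bottom field: scanlines 2 and 4
```

When only one field is shown (or each field is shown in sequence), you get that distinctive combed, line-y video look.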


Below is a good example of the moiré effect, where the fine pattern of this man’s suit interferes with the horizontal video scan lines, creating a light rainbow effect.


When referencing news articles, a visual representation of an active microfilm machine is used. Here’s an action shot:


A similar fuzziness can be seen in the framing and focus of this camera angle, giving it an aesthetic close to the above.


What is the truth, Errol Morris?!?!!


In addition to playing with the aspect ratio of the entire frame, Morris makes a lot of choices in framing scenes, like the scene below, reminiscent, of course, of the classic opening shot in The Searchers. Sorry I had to use my Intro to Film History skills there.


Television footage is displayed framed within a vintage television set, which solves the problem of how to fit 1.33:1 footage into a 2.39:1-and-beyond framework (as discussed earlier). Also, this TV is cute.


As a bonus, there are a couple of round frames too, like various views from a peephole…


…and a (potentially archival footage?) view similar to a microscopic lens.


That’s it! Hope you enjoy the tale of mystery found in the series as well as the mysteries found in frame choices within the series. If you come up with any theories or have some more research to add to this brief investigation, let me know!

Introducing: Audiovisual Preservation Training

Hello! I think a proper introduction is in order.

Audiovisual Preservation Training ✨

First, a backstory. I was homeschooled. Well, at least that’s what I tell people, because it’s easier to say things in three words than it is in three sentences. Actually, I spent high school doing correspondence courses via a combination of physical mail and the internet. I don’t know which was the cause and which was the result, but I’m pretty driven to seek out learning on my own. I also grew up in a rural environment, surrounded completely by grass for ~10 miles on all sides, so I didn’t have the resources that I’m so privileged to have now living in the biggest city in the States (and yet woefully take it for granted!).

All of this to say, education is foundationally important to me. And unrestricted access to education is important to me. I want what I wish I had access to when I was first coming into the field (many?) years ago.

So, something I am launching now and look forward to continuing work on in 2018 is this collection of audiovisual preservation training materials, available ✨ here. I’m very excited to put everything I know in one place and I hope it will be helpful to others, to a me-eight-years-ago.

What do I hope to achieve with this? I want to:

  • Use it to teach others.
  • Allow others to use it to teach themselves.
  • Allow others to use it to teach others.

This field has a gatekeeper problem. Conferences are a wonderful way to meet colleagues and learn about the field, but they are prohibitively expensive, requiring registration fees, hotel fees, travel costs, and time taken off of work. And many conferences still do not put their material online for free, for the broad betterment of the field, so not only is the networking value lost to those who can’t afford to attend – the educational value is lost as well.

Local audiovisual archiving training costs up to ~$118,052 (not including rent and other costs for 2 years), which is a salary that few in our field will achieve, even at the top of their careers. This number is utterly staggering, especially compared to my Southern scholarships-and-state-school training (BA+MLIS ≈ 1 semester). And yet students and emerging professionals in our field complain about their education being insufficient (across the board, myself included). I believe in education, but not like this.

Addendum for a local comparison – Pratt rolls in at around $55k for an MLIS. They estimate around $105k when factoring all expenses attending full-time for 2 years.

So what is to be done? I sort of intrinsically demand that education should be free – in the many definitions of the word. Free as in gratis (one should not have to pay*), free as in libre (it should not be restricted), and free as in “freely shared amongst each other.” Don’t we all have the same goals?

* (For the residual produced content. I understand paying people for their labor – teaching is hard work. Also, we live in capitalism, which puts a real damper on everything.)

This isn’t entirely out of the goodness of my heart, of course, so I don’t want to trick you into thinking it is. These documents help me a lot, because I also teach and need consistent slidedecks to pull from and personalize depending on the class. (Should I mention here that you can pay me to teach or run workshops, or should I have mentioned that after I complained about living under capitalism? 😘). I also like how the decks are components that can be configured together or work independently, growing or shrinking the lessons. And because I share everything I do anyway (open source for life, in life), it seems to be a good way to keep all of the information structured, in the same place, and aesthetically similar.

Here are my goals moving forward with this project:

  • Deepen content. Right now most of this relies on me-or-someone being present to go into more details than the slides currently do, but by adding speaker’s notes, it can become more in-depth.
  • Expand content. I have a huge list of things I want to add, but it takes time making sure I know what I’m talking about!
  • Be wrong and have those wrongs corrected. (This has already happened! yayyayay) Who peer-reviews lectures given in classes that are not open to peers?
  • Have people contribute. I went back-and-forth on whether or not I wanted to push this as a shared resource outside of my own domain, to strive for community-driven and selfless work like my sweet ffmprovisr, but finally opted to keep it personal, because teaching is also personal and opinionated. Also it means I can make dumb jokes. I do hope people will still feel free to modify, clarify, and better contextualize the material, just as they are free to use it in teaching others (Note: SHARE-ALIKE!).
  • Different content? Some initial feedback from a couple of people was that they like videos for learning, which is manageable and sorta helps in deepening the content.

Thank you for reading this too-long rant. I can’t change the sad state of our field (jobs requiring master’s degrees, debt, cronyism, underpayment) but I hope I can better facilitate the supplemental knowledge that one needs to succeed in it.

2017 Reflection and 2018 Goals

Welcome to the Ashley Blewer annual report. Here is last year’s.


2017 was not good. We all know it was not good. I spent a lot of time getting hurt and staying hurt. I spent a lot of time supporting those hurting near me (even if not physically near me). I’ve experienced a lot of heartbreak from institutions and the people within them and have had to distance myself from a lot of people who turned out to be not very good. I don’t want to dwell on this, but I feel like I lost a lot of time to mitigating damage from multiple sources.

I’ve learned to pay attention to what people do, not what people say they do. I feel like I’ve learned habits I wish I didn’t have to learn.

I suppose, then, this was the year of reinforcing personal infrastructure.

Writing this entry was actually very cathartic and made me feel like I did accomplish things despite a mix of personal setbacks and the endless, disastrous news cycle affecting all of us.


What did I do?

My goals were lofty and I made some progress, but there is more work to be done.


The new A/V Artifact Atlas was launched at the beginning of this year – a huge redesign! We got more contributors, more contributions, and more eyes on the project as a result. I am happy!

My site and ffmprovisr both got redesigns. Well, my personal website got many redesigns in one. I put these both in the category of “getting organized.” I still love ffmprovisr very much and it’s been so nice to see it grow. I pushed to ensure its longevity and health by making sure I wasn’t the default benevolent-dictator to the project and set up a maintainer team. Many hands make light work.

Behind the scenes, my physical and digital archives are both in order, and I reduced my online presence and ties with fundamentally corrupt systems (although there’s always more to do!). I restructured some previous tiny applications too, reducing their codebases, and also made some new ones.

I followed up the successful Minimum Viable Workstation document with one for recipes and a similar-in-spirit project, the Collection Management System Collection. I continue to oversee both of these little projects and people seem to like them.

Work things

The good thing about blogging regularly is it is easy to remember what I did this year. I went to No Time To Wait! in Vienna and Open Source Bridge in Portland. I spoke at both. I went to !!Con and textAV. Did I do anything else?

Both MediaConch and QCTools have concluded as projects (although final announcements are imminent!). I had a good time organizing and teaching at BAVC’s SignalServer Workshop, which featured SignalServer, QCTools, and the new command-line report generator, qcli. Here is a blog post I did for qcli. And here is a report from the workshop. I got to fill up the MediaConch blog with nine interviews (with eleven people). It’s incredibly rewarding to see the culmination of these projects and get user feedback in this way. I look forward to QCTools 1.0 being released in early 2018 and MediaConch’s future efforts.

I started 2017 still in love with my job. Unfortunately there is something very bad happening there, and I became the 14th to leave my department in less than a year. That number is now at 20.

New things

Internet Girlfriend Club launched over the summer, which has brought me a lot of joy. In a way, it’s been one of the hardest personal projects I have ever done because it relies entirely on other people sending in contributions and involves me endlessly nudging people to fulfill favors of writing stories or sending material to me. Asking for help has always been a weakness for me. It is worth it though. BTW, contribute!!!

I’ve taken this picture every day this year.

There are a couple of projects in formation that I won’t speak about right now.

Still hardly watched any movies. I did read a lot of books.

I joined the ALA ACRL TechConnect blogging team, and I will need to deliver in 2018!


I am, as ever, dedicated to vigilance.

If you are reading this far down, you get a sneak peek of something I am launching very soon and look forward to continuing work on in 2018, and that is my collection of audiovisual preservation training materials, available ✨ here. I’m very excited to put everything I know in one place and I hope it will be helpful to others.

You may notice the URLs for this blog have changed to my domain (as well as the above link to training documents). I want to continue this pattern of minimizing dependence on systems I do not have control over. My content is still being served by GitHub but this is a step towards having the power to re-direct if necessary. I keep threatening to switch to a Dell running Linux but we’ll see if that happens.

I’ve spent the past few months feeling like I am intensely in limbo, and I find that picking definitive goals for the next year during this time period would be ill-advised. (My I Ching app tells me this constantly!) So although I have some things to propose, I will impatiently wait.


Addendum: I will note that I have set myself up for over-conferencing-syndrome in early 2018. Maybe you’ll catch me…

  • Teaching FFmpeg and the command line to WGBH Fellows in January
  • Speaking about data packaging for a brighter future at Code4lib in D.C.
  • Workshopping minimum digital repositories at The Collective in Knoxville
  • Speaking on the technical details of web archiving at NEA/ART
  • Workshopping a/v analysis tools at ARCH2018 in D.C.

And potentially a couple more that are TBD. Maybe I will come back with a goals list after this first-quarter marathon, with “less conferences!” certainly at the top.

No Time To Wait! 2


Kieran O’Leary of Irish Film Archive did a perfect job summarizing the conference here, so I really recommend reading that one and only reading mine if you want a sub-par sequel. Or don’t even listen to either of us, just go watch the talks! Here you can find the schedule.


Some of the major themes this year:

  • Open source funding models So many of the talks went into the struggles of open source funding, from maintainers to support-contract models to working within institutions to support open source based projects instead of using them for free or paying a closed-source vendor.
  • Labor, from open source and archival perspectives Maybe this is just constantly on my mind, but I felt this was addressed frequently, coinciding with the discussions of funding models for open source.
  • Format normalisation I was maybe a little surprised to hear so many conversations around file format normalisation, but do think it’s an important topic we should be discussing more. I’m maybe also surprised Archivematica didn’t come up more frequently in talks, so I want to mention that if a user chooses to normalize video before creating an AIP in Archivematica, they normalise to FFV1/MKV. :) They will also have MediaConch integrated into their next release at the beginning of 2018, so keep an eye out if you are an Archivematica user!

General highlights

I was really into Dave’s super accessible explanation of the significance of the granular fixity available in FFV1/MKV – comparing it to how, if you pull a fire alarm, it notifies the fire department of your exact address, so they know exactly where the fire is. But with fixity only at the file level, it is like pulling a fire alarm that only lets the fire department know there is a fire “somewhere in Vienna.” Along with this, I was excited Ethan Gates was able to attend the conference and speak on the work he is doing “indoctrinating” emerging a/v archivists with open source (although I worry a few people didn’t realize he was using that word as a joke!).
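To make the fire-alarm analogy concrete, here’s a toy sketch of chunk-level fixity in Python. Each chunk gets its own CRC32, loosely analogous to FFV1/Matroska’s per-frame checksumming – this illustrates the idea only, not the actual container layout:

```python
import zlib

def chunk_crcs(data: bytes, chunk_size: int) -> list[int]:
    """CRC32 per fixed-size chunk, so damage can be localised."""
    return [zlib.crc32(data[i:i + chunk_size])
            for i in range(0, len(data), chunk_size)]

original = bytes(range(256)) * 40   # stand-in for a video stream
damaged = bytearray(original)
damaged[5000] ^= 0xFF               # flip a single byte

# Whole-file fixity: we only learn that *something* changed, somewhere.
print(zlib.crc32(original) == zlib.crc32(bytes(damaged)))  # False

# Chunk-level fixity: we learn exactly which chunk is damaged.
matches = [a == b for a, b in zip(chunk_crcs(original, 1024),
                                  chunk_crcs(bytes(damaged), 1024))]
print(matches.index(False))  # 4 -> the damage lives in bytes 4096-5119
```

A whole-file checksum tells you your preservation master is bad; per-chunk checksums tell you which frame to go re-digitize or restore.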

Check out the MediaConch-style Star Trek logo!

I got so much out of Martin Below’s talk and look forward to going back to it. In summary, he was explaining the ways in which he uses the Menu/Chapters aspects of Matroska files and did some in-depth demonstrations using a digitized Star Trek boxed set. He uses it to refine down to skipping the intros, showing a list of his favorite episodes, or cutting out the credits, and how that affected the overall time. I’ve been working with the Matroska specification for three years and although I knew of these features, I hadn’t been able to appreciate them and see them in use like this. It’ll help me work with him to refine the specification further.
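The skip-the-intro and favorite-episodes tricks Martin showed rely on Matroska’s fuller chapter editions (defined in XML), but as a taste of the idea, here’s a sketch that emits the simple chapter text format mkvmerge can mux in via `--chapters`. The timestamps and titles here are made up:

```python
def simple_chapters(marks: list[tuple[float, str]]) -> str:
    """Render (seconds, title) pairs in the 'simple' chapter text format
    accepted by mkvmerge's --chapters option."""
    lines = []
    for i, (secs, name) in enumerate(marks, start=1):
        h, rem = divmod(secs, 3600)
        m, s = divmod(rem, 60)
        lines.append(f"CHAPTER{i:02d}={int(h):02d}:{int(m):02d}:{s:06.3f}")
        lines.append(f"CHAPTER{i:02d}NAME={name}")
    return "\n".join(lines)

print(simple_chapters([(0.0, "Cold open"), (92.5, "Main titles")]))
# CHAPTER01=00:00:00.000
# CHAPTER01NAME=Cold open
# CHAPTER02=00:01:32.500
# CHAPTER02NAME=Main titles
```

Once the chapters are in the file, players that understand Matroska chapters let you jump straight between these marks – editions take it further by defining alternate playback orderings.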

Agathe Jarczyk’s talk, “Dreaming of an Ideal Software Player for Video”, was a favorite among many, myself included. She did an excellent job of laying out the concerns shared by time-based media conservators when diagnosing work. I’m always a bit jealous of conservators who get to spend lots of time on a small number of strange objects, rather than the shovel-it-all-through/automate-automate-automate mentality I hold as a combo archivist/developer. I was also happy to hear from Ana Ribeiro from Tate during the Format Implementation Panel and would have liked to hear a whole talk about the formatting and normalisation issues at Tate, as they have such a focus on presentation, but was glad to get to talk with her afterwards.

Wow, have never seen a chart like this! – From Agathe’s slides

Reto Kromer also spoke gently of his wishes for the Matroska and FFV1 standards, and was pleased by Steve Lhomme’s presentation updating us on the Matroska specification efforts, because it sounds like he will be getting many of his wishes. It was also fortunate that Steve works on VLC and could let Agathe know that the upcoming release of VLC (available now in beta) should be able to fulfill some of her wishes too.

This kind of productive conversation between preservation-practicing tool-users and tool-makers is what makes this conference such a delight, and was largely the motivation for its existence. It was unfortunate to hear from a presenter dead-set on making sure we knew she didn’t give a fuck about uncompressed files, and who did not seem to understand that her research depends on preservationists getting it right for her benefit, particularly when it comes to digitized film assets that require framerate and playback-speed expertise up front. Maybe this means we need to do a better job of educating the general public as to why our profession is valuable, or maybe this person was just both very rude and very uneducated?

Speaking of film, the final batch of talks, being film-related, made me surprisingly nostalgic for when I used to work with film materials. I enjoyed the depth of technical knowledge across a wide spectrum of film-based issues and the final panel, which was able to successfully hit the tired “film vs. digital” debate with new perspectives.

I felt I didn’t have much to contribute during the panel on open source and dealing with philosophical challenges within institutions, but thoroughly appreciated Alessandra Luciano’s direction, input, and her perspective that cultural heritage institutions have a moral imperative to choose open source. I also thoroughly appreciated Steven Villereal’s emphasis on acknowledging the need to understand who holds the power within institutions more broadly, and his note that archival institutions often share very few values with SMPTE members – so why do we end up relying on SMPTE-approved standards? Is IETF not trusted enough by old-school broadcasters as a valid standardizing body? (If they don’t trust it, too bad, cuz IETF is the best!). In general, I hope more institutions allow for closer collaboration between their IT teams and archiving teams, and I hope more skill-sharing can happen between these teams, because it is essential to successful preservation of materials.


Amazing to see the conference grow from a group of ~50 to a group of nearly 100 this year, and to see it mature in many ways. I missed the breakout sessions we had last year, but I know they would have been difficult given the space and size, and I think a lot of the panels made up for it. I find that breakout sessions make it more likely that timid audience members will share their experiences, though. Even so, there were always many questions and comments after the talks, and this was the first time I’ve been at a conference where I’ve heard “This is more of a comment than a question…” (and this happened frequently) but the comments were actually very valuable, thoughtful, and considerate. As always, I look to Code4lib’s conference model, which has been able to scale itself while maintaining a close-knit community feeling. As Kieran mentioned, it was great to have a core FFmpeg developer come and very actively participate, and he seemed to leave thinking fondly of his two days spent with us and of archivists in general. I like to think that we converted Steve Lhomme over to Team Archives last year, and this year we were able to convert Carl Eugen Hoyos.

Very excited to re-watch many of the talks and review the slides – there was so much information crammed into these two days that I just wasn’t able to take it all in! I also have to go back and watch Jimi’s talk and Kieran’s talk, because they both immediately preceded times when I spoke so my mind went completely blank due to presentation-nervousness. I owe it to them to re-watch in a more calm state!

Finally – Thanks so much to the conference organizers, volunteers, and sponsoring organizations!

Funny to see my Terminal style while someone else presents!

ffmprovisr gets a redesign


ffmprovisr before

If you haven’t been to ffmprovisr in a while but check it out right now, you’ll notice it recently got a makeover (go check, we’ll wait)! ffmprovisr had been looking the same since its inception over three years ago, as I recently noticed while looking through old images. The most noticeable difference, at first, will be the visual changes, but ffmprovisr actually got a full, comprehensive redesign: an information-architecture overhaul, a lightened codebase, better handling of different screen sizes, and improved design/animation for an overall better user experience.

ffmprovisr after

So, what changed, and how?

Here is Katherine Nagels to introduce some of the initial changes…


About a month ago, I realised that ffmprovisr had grown so much that its navigability was now a bit lacking. At the beginning of October, we had almost 80 (!) recipes, grouped under 10 different headings. Some of these categories, such as Change codec (transcode) were clear and accurate, but others were less useful: for example, the Other section of miscellany had grown to 20 recipes, whereas we had a solitary entry under Repair files. Would someone looking for a way to synchronise audio find the latter recipe? Likewise, some groupings didn’t seem all that conceptually tight to me: the Make derivative variations category included recipes for making animated GIFs, ISO creation, and trimming video.

I set out to reorganise the page by creating new headings, renaming others, and moving around the recipes accordingly. To give several examples: all the commands to do with trimming, joining, or excerpting a video now became grouped together under a heading of that name; Work with interlaced video was another new section. Change formats, a name which I found quite vague, became Change video properties, as that section groups recipes with which one alters things like a video’s aspect ratio or colourspace.

So far, so good. Or was it? Actually, a lot of these decisions weren’t as trivial as they seem. Classification and taxonomy are big concerns in library and archival world, and they proved to be sometimes tricky even on a small-scale project like this. For example, did the recipe Images to GIF really belong in its original home, the Change codec section? (We decided it did not). Should all the audio-related commands be grouped together in one section, or should we separately retain the Normalize/equalize audio section? (We currently have combined them under the heading Change or view audio properties).

These changes were a process rather than absolute actions; for example, I split out recipes for creating thumbnails and recipes for creating GIFs into two separate sections before more sensibly bringing them back together under the umbrella of Create thumbnails or GIFs. Conversely, we added the entry on filtergraphs in a section called FFmpeg concepts before realising that we were presenting a pretty advanced topic as something of an entry point - not very beginner-friendly. (Thus the FFmpeg basics and Advanced FFmpeg concepts sections were born). This is also a good example of how important the review and feedback cycle was to these changes - it’s easy to get lost in one’s own viewpoint.

The main idea I tried to keep sight of during this reorganisation was simple: what would make ffmprovisr a better resource for beginners? Not that it’s not useful for more experienced people too, but as I emailed Ashley recently, I love the idea that people, even from outside archives, could find ffmprovisr and learn how to use ffmpeg from it. Applying this concept to page structure meant that steps like adding a Table of Contents were obvious. But it also provided a good opportunity to fill in certain blanks, like adding an entry describing the basic structure of an ffmpeg command, and a generic rewrap command. I know from experience that unfamiliar and/or technical things can be intimidating, so I’m all about lowering the barrier to entry for such a useful and extremely learnable tool as ffmpeg.

Now, as of 30th October, we have 18 categories and, if I count correctly, 84 recipes – including just 7 in the Other category. ;-) There’s always a tradeoff to be made re: granularity v. efficiency, but I think the current balance is pretty decent. There is always room for improvement, of course, so feedback and contributions are welcomed!

Of course, usability is about much more than just the structure of information - visual design and user experience are even bigger pieces of the puzzle. Over to Ashley to describe how she refactored ffmprovisr visually, as well as cleaned up the codebase!


ffmprovisr was built on Bootstrap and not optimally sized for smaller screens other than what Bootstrap inherently delivers. It was also built relying on Bootstrap’s Modal feature and used some of Bootstrap JS to perform some magic associated with that.

I’m really obsessed with removing Bootstrap from projects (for some reason) and even more obsessed with replacing it with CSS Grid Layout. I like CSS Grid because it’s the “hot new thing”, but it’s the hot new reliable, well-supported, built-into-the-CSS-specification thing, so it’ll soon grow to become the “stable new thing” and stay that way. It feels really great to remove a large framework library and replace it with just a handful of lines of code.

I also really love extracting jQuery out of projects but I ended up not doing that with ffmprovisr, even though there is very little JavaScript used on the page. It is used just for ensuring anchor tag reliability and updating the site – some more work needs to be done there, and you can take on that task if you want to contribute! There is some reliable but inelegant JavaScript currently keeping it in place.

The first thing to go in this redesign was the Modal view. I replaced it with an inline collapsing open/close functionality instead. I initially and temporarily did this using Bootstrap while I got feedback from the community/fellow maintainers and then replaced it with pure CSS. This way, people can browse through the rest of the site and open multiple scripts at the same time, which wasn’t possible before. Also, modals just aren’t very good, so I was happy to be rid of them.


All of the above changes inevitably caused the site to be changed visually. The Table of Contents section was new, for instance.

After adopting Grid Layout, we were able to make portions of the website resize themselves based on size of screen. For small windows, like on a phone, everything will appear in one long column. For windows with more space, the Table of Contents will appear on the left. For very big screens, there is some space on the right and left so the content isn’t stretched too far across the screen, which would make it hard to read.

The font size increased a little bit, and we switched from pixel sizes (which never change) to em sizes (which scale relative to the base font size). The main header, where it says ffmprovisr with some swirly unicode, uses the vw unit. If you resize your browser window, you’ll see that the header shrinks automatically as the window narrows. This is how vw works: it is a size calculated as a percentage of the “viewport width” (the viewport being your browser window).
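The vw arithmetic is simple: 1vw is 1% of the viewport’s width. A small sketch of the calculation the browser performs (the 5vw header size and window widths are hypothetical examples, not values from ffmprovisr’s actual stylesheet):

```python
def vw_to_px(vw: float, viewport_width_px: float) -> float:
    """1vw = 1% of the viewport width, so a vw size scales with the window."""
    return vw * viewport_width_px / 100

# A hypothetical 5vw header font at different window widths:
print(vw_to_px(5, 1440))  # 72.0 px on a wide desktop window
print(vw_to_px(5, 375))   # 18.75 px on a narrow phone screen
```

This is why no media-query breakpoints are needed for the header – the size tracks the window continuously.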

Next: ffmprovisr used to have buttons that would open up modals. After the modals were removed, as mentioned above, clicking through was visually less appealing. The content would appear immediately under the button, with other buttons dangling around “in the air.” The buttons were replaced with rows that light up green on hover, and the grow-the-icon-slightly-bigger animation was kept but rewritten.

Since these big changes, Katherine has come back around to fix some things that needed improvement, especially related to media queries and some CSS sizing. Thank you Katherine! It’s great to have such good teammates to collaborate with on these kinds of projects.

ffmprovisr after


Those are our improvements! We hope that all these changes make ffmprovisr easy to use, which in turn will make ffmpeg easier to use and understand, not just for archivists but anyone wanting to improve their skills around this powerful and valuable open source tool. There are a few more small improvements that can be made, and if you want to learn to submit your first open source pull request, please get in touch with us and we can help you!

Thanks always to fellow maintainers Kieran O’Leary and Reto Kromer, and everyone who has submitted contributions to this project.