don't push it
…
browsers are fragile.
Stress-testing browsers beyond serving them the normal, free-standing test cases reveals quite a spread in performance across browser-land. It can be quite frustrating.
Browsers that buckle under stress can be a real drag – for instance when 1) nearing the deadline for a commercial project, or 2) testing out new ideas in my own sandbox. For 1) I have to either avoid problematic features or resort to polyfills. For 2) my preferred alternative is to implement whatever features I want and ignore failing browsers. This very site is my “sandbox”, so guess what…
case at hand
On screens wide enough for the two-column layout, a number of animations acting on generated-content elements are designed in to make the clouds in the page-header appear to vary in lightness over time, in several places.
All animations are set to run slowly, and are rather subtle in appearance. They are designed in as site-wide test-objects, and are not meant to catch much attention.
My test base is Vivaldi, and in most browsers running on Blink all animations show up and run as expected. Browsers on other engines either don't support the relevant code, or show varying degrees of failure.
Actual CSS for one of the animations is as follows…
@keyframes fltr1 {
  0%   {filter: sepia(0) contrast(100%) drop-shadow(40px 40px 15px rgba(255,255,255,1));}
  40%  {filter: sepia(50%) contrast(50%) drop-shadow(-100px -50px 30px rgba(0,0,0,1));}
  70%  {filter: sepia(30%) contrast(130%) drop-shadow(-70px -70px 20px rgba(255,255,255,1));}
  100% {filter: sepia(0) contrast(100%) drop-shadow(40px 40px 15px rgba(255,255,255,1));}
}

#sec::before, main aside::before {
  content: url(../imagedepot/op-peregrine-1.png);
  width: 50px;
  padding: 45px 0 0 0;
  height: 5px;
  line-height: 0;
  box-shadow: -2px -2px 5px #fff inset;
  background: rgba(220, 220, 220, 0.15);
  position: absolute;
  z-index: 1;
  top: -70px;
  left: -20%;
  margin-top: -10%;
  margin-top: calc(-35px - 2%);
  border-radius: 50px;
  line-height: 0;
  display: block;
  width: 3rem;
  padding: 2.8rem 0 0 0;
  height: .2rem;
  border-radius: 50%;
  box-shadow: -.15rem -.15rem .3rem #fff inset;
  top: -4.25rem;
  animation: fltr1 28s ease-in-out forwards infinite;
}
Notice “double-coding” in CSS above – crutches for older browsers are left in place.
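That double-coding pattern boils down to declaring a px-based fallback first and letting a later, more modern declaration win where it is understood. A minimal sketch of the idea (the selector and values here are illustrative, not taken from the site's stylesheet):

```css
/* Older browsers stop at the px values; newer ones apply the later
   rem/calc() declarations, since the last valid declaration wins. */
.cloud-spot {
  width: 50px;                   /* fallback where rem isn't supported */
  width: 3rem;                   /* overrides the px value where rem works */
  margin-top: -10%;              /* fallback where calc() isn't supported */
  margin-top: calc(-35px - 2%);  /* dropped as invalid by old browsers */
}
```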
Two stages of that animation rendered in Vivaldi show up like this…

[screenshots: two stages of the fltr1 animation as rendered in Vivaldi]
The above is one of six such filter() animations set up by four keyframes. In addition, six transform animations driven by three keyframes create slight movements in that area and elsewhere on the page.
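The transform keyframes themselves are not listed here, but a slow, subtle movement of the type described could look something like this (the keyframe name, selector and values are hypothetical):

```css
/* a small drift, barely noticeable at this duration */
@keyframes drift1 {
  0%   {transform: translate(0, 0);}
  50%  {transform: translate(6px, -4px) scale(1.02);}
  100% {transform: translate(0, 0);}
}

#sec::before {
  /* the long duration keeps the movement slow and unobtrusive */
  animation: drift1 35s ease-in-out infinite;
}
```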
success or failure…
Running Win10 on a faster (gaming) computer, I have tested some key browsers. Main suspects follow…
- Chrome and Opera show full support – they handle all filter() and transform animations fine. Low CPU use – smooth rendering.
- IE11 doesn't support filter(). It handles all transform animations OK. High CPU use – various instability problems.
- Edge supports filter(), but fails to run any of those animations. It handles all transform animations OK. High CPU use – various instability problems.
- Firefox and Pale Moon support filter(), but fail to run all animations – two filter() animations are observed running, but not smoothly. They handle all transform animations OK. High CPU use – various instability problems.
- Firefox Quantum 57.0 runs smoother than earlier versions, but fails on the same filter() animations. It handles all transform animations OK. Moderate CPU use in “safe mode”.
For the sake of testing, all troublesome @keyframes are active in sitewide design.
bad browsers be damned
That headline is not to be taken quite literally, but I have observed quite a few cases where browsers that handle simplified test cases fine fail when served more complex cases calling for the exact same rendering. In my book that is total failure.
Few web sites are built on one or a small number of simplified test cases, so that kind of “validation” is worth next to nothing.
testing, testing…
On one hand it is unrealistic to check @supports for every variant and combination of standard code. On the other hand we cannot entirely trust browsers' responses to @supports if they literally buckle under stress.
We know that not all browsers support @supports – see: IE11. We also know that there are browsers that don't respond truthfully to all @supports requests. As a consequence we are left with testing, testing and more testing, just like we did a decade or two ago.
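For what it is worth, a feature query for the filter property itself is simple enough – the hard part, as noted above, is that a truthful answer here says nothing about how a browser copes once six such animations run at once. A sketch (the selector is illustrative):

```css
/* only browsers claiming filter support get the animated version;
   others keep the static fallback styling */
@supports (filter: sepia(50%)) {
  #sec::before {
    animation: fltr1 28s ease-in-out forwards infinite;
  }
}
```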
What else has not changed over the years?
same methodology…
The old CSS sledgehammer approach has not fallen out of favor around here (I have even updated the hammer picture), and the toolset has of course been extended and refined with improved standards and support over the years.
Browsers are served as complete and detailed code as found necessary, leaving nothing to chance or defaults. Browsers that fail are (usually)
saved from revealing their worst flaws, and get to expose the ones I find less troublesome.
our playground…
The average visitor has no idea what a site is supposed to look like – and rarely any interest in knowing – and won't test and compare designs across browser-land. That means we can play with design details to our hearts' delight in our own sandboxes, without causing problems for visitors.
As an example: on wide screens I have “marked” the sledgehammer images with a background image in browsers that don't support shapes. Not a serious problem design-wise, just slightly irritating to have all that empty space there.
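That “marking” can be done with a feature query in reverse – serving the background image only where shapes are not supported. A sketch of the technique (the class name and image path are hypothetical):

```css
/* browsers without CSS Shapes get a filler image in the empty space */
@supports not (shape-outside: circle(50%)) {
  .sledgehammer-figure {
    background: url(../imagedepot/filler.png) no-repeat right top;
  }
}
```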
case closed … for now
The advantage of leaving browser weaknesses, flaws and failures exposed on my private site is that I sure ain't gonna forget they exist. I may find time, and muster interest, to study them later.
In the meantime anyone in the trade with an interest in such matters can see how different browsers handle the same code, and notice if, and when, any progress, or regress, is made.
sincerely
Hageland 30.oct.2017
31.oct.2017 - minor revisions.
16.nov.2017 - updated to reflect Firefox Quantum 57.0.
last rev: 16.nov.2017