As the author of my own HTML/CSS engine, I know HTML/CSS very well, as anyone can imagine.
BUT! I don't like writing formatted text in HTML source - it's too hard to read while writing.
Markdown (a.k.a. the poor man's WYSIWYG) is not that good either. Still a palliative.
So I made html-notepad ( https://html-notepad.com ), a WYSIWYG editor - primarily for myself and my wife.
I write content for publishing in it and paste it into WordPress and other destinations.
WYSIWYG HTML editing cannot be used for web site creation/design. But for formatted texts - content islands in HTML - it is perfectly adequate.
Yet.
While 90% of formatted-text editing tasks can be done in WYSIWYG, some tasks are more convenient to do in source form. I use the source view in html-notepad with the caret position synced between the source and WYSIWYG views - quite handy, if you ask me.
Not much interest from the public has been expressed, unfortunately. Existing customers are happy with the built-in script and the compactness of the engine as it is.
Instead I am making efforts to add ES7 features to Sciter's script. So there are arrow functions, destructuring assignment, Node's file functions, etc. now. More of those to come - Sciter uses the same libuv library for asynchronous IO as Node, so porting the whole Node.js runtime is just a matter of time (and practical need).
> I had never heard …
Well, Sciter is 12 years old. Originally it was a "secret" UI engine of Norton Antivirus. Symantec has used it since Norton AV 2007, and in their other apps too; check this: https://sciter.com/from-skeuomorph-to-flat-ui-evolution-of-o.... Using CSS is quite convenient in desktop application UI. All these years they haven't changed the UI drastically - just modernized the look-and-feel via CSS.
Since then pretty much all serious antiviruses have been using the engine for their UI. My estimate (based on a customer survey) is that Sciter code runs on 500 million PCs and Macs now. That, I think, is comparable with, for example, the current FF installation base.
> But how can I then keep the style and layout of all my posts and pages in sync?
I love his answer to this one. Just don't, and you end up with blog posts that are like a time capsule. This is a benefit in my book, and something that might actually tip the scales in favor of writing my blog in HTML next.
But I'd also want to have a list of posts somewhere, a link to posts about the same subject matter as this one, and working contact info everywhere. I'd still want some things to be generated. And maybe an option to have comments on some posts.
I thought the same, but on the other hand, these things are usually just "one page" away. This particular page / blog post doesn't even have a link to the root section where one would find other posts and the categorization you mentioned. That link could probably be safely included on all pages and considered "stable", but I don't mind that it isn't there; I'm used to traversing a site via the URL bar. I've always thought it would be great if such features were provided natively by the browser and "common" users were aware of them, so that "common" pages could remain these gorgeously simple static capsules.
It might not be suitable in every scenario, but JavaScript or an IFRAME could be viable, especially now that Googlebot will render and follow links rendered via JavaScript.
Sounds like a maintenance headache five years from now. I am old enough to appreciate the hand-made, bespoke internet from the 90s. It really does have its own charm. But for practical purposes I want consistency.
Also, like I assume everyone else does, I version my source documents and html using git. So I can see what it looks like at any point in time if I want to take a trip down memory lane.
I'm old enough to have written hand-made, bespoke Internet in the 90's, and there's a reason I don't write HTML any more. Had enough of it then. I write in plain text now and process it.
I prefer* to write my own blog posts as HTML. Why add a middleman if the end result is HTML?
*However, HTML is not easy. I get it. Most of us here can write it with one hand tied behind our back. But the problem with HTML is that it gives me choices. Writing is hard enough that I have to use a minimalist text editor just to avoid being distracted.
When I write and think of a trivia fact, HTML tells me that I can nest it as a side comment in a separate div. In fact, I should use an <aside> tag and style it differently. It would be nice if it could hover on the side of the page without breaking the flow of the content. Ah mobile. I should have it render differently on mobile. Maybe it should be collapsed at first, but then the user can click to expand it...
I use Markdown so I only have to think about titles and bulleted lists. If I have to add something complex, I'll turn to HTML, but it is not my default.
When I'm writing, I don't want to think about HTML.
HTML is not hard; it just makes little sense to build a website without some form of templating. If I add a navbar to my website, the pure-HTML way to do it is to copy and paste the navbar into 100 different HTML files. And then when I want to add a new link to the navbar, I have to copy it back into those 100 files.
Or I could use a static site generator, put it in a navbar file, and run a command to have it inserted on every page.
The author of this blog has dealt with this issue by having no site navigation.
Server Side Includes still exist. It is in many ways surprising to me that SSI never got more usage than it has; it solves a lot of small-scale problems for minor websites.
mrweasel: indeed, exactly what I was thinking. SSI for the head, top, nav, bottom... unique content included. Updating the template is one simple edit.
Taking it a step further with the likes of nginx: cache the response, compress it. Done.
So do I. And I wouldn't say that HTML is difficult to write; at least, it was once made for this. However, the deprecation of <b> and <i> is a bit of a hassle. (<b> is still an abstract concept like <strong>, but the latter is 5 characters longer - and that twice, once for the opening tag and once for the closing tag.)
For a blog, you've probably got a basic style sheet for this already, and adding to it on the fly in order to express your ideas is a nice option.
- Classic html: <b> and <i> mean bold and italics.
- Then the semantic folks took over, deprecated these tags, and said you had to use <strong> and <em> instead.
- Now with HTML5, <b> and <i> are brought back, but they don't mean bold and italics. They have new semantic meanings that sound like hilariously over-engineered attempts to pretend they serve some higher purpose, when fundamentally they just mean bold and italics.
What I think the committees must have missed is that presentation and semantics are not separable; in many ways the presentation is the semantics. Bold text itself gives meaning to the boldness, inferred by readers and authors, based on how the boldness looks to them.
Ah, yes, I just noticed. Wasn't this marked as deprecated in HTML 5.2? Now the Living Standard explicitly notes that <b> isn't necessarily rendered as bold (because it might be overridden by style sheets), thereby loosening the connection to typographical appearance.
As for the over-engineered part: I usually try not to use spans or similar markup to set off short terms or phrases where there's a change in meaning or level of information, rather using emphasis (<em>) for this. 99% of the time I end up styling this as italics. Because this is really what typographical emphasis is all about.
P.S.: And I always failed to recognize the crucial difference in meaning between "bold" and "strong". Both in speech and in type these are quite the same concept to me; "bold" may even be a bit more general and specific at the same time. (E.g., "bold" is not shouting, but it is about a general concept of raised importance, which isn't clearly expressed by "strong".)
As a sidenote, as I now recall it, I was just too shocked by the phasing out of framesets to notice the "undeprecation" of <b> and <i>. (I really fail to see how it helps if future browsers fail to present historic documents in an appropriate and meaningful context. Even more so, when 99% of these structures could be easily translated to semantic regions and landmarks, like header/banner, main, footer, aside, etc.; even navigation could be inferred with ease by screen readers. Moreover, screen readers now have to support dynamic updates of isolated regions anyway in order to cope with JS frameworks. There was probably never a time when frames posed as small a problem as they do now.)
>Now the Living Standard explicitly notes that <b> isn't necessarily rendered as bold (because it might be overridden by style sheets), thereby loosening the connection to typographical appearance.
This is some pretty selective reasoning by the working group considering they don't provide a warning for all other elements (not even <strong> or <em>, which can also be overridden by stylesheets). I could toss a "p { display: none !important; }" into my stylesheet, yet the <p> tag doesn't have a warning that text inside it might not be displayed. Seems like they're grasping for reasons to discourage <b> and <i>.
<b> and <i> were never deprecated. They are perfectly valid in HTML 4.01 and XHTML 1.1 and were basically treated like <span> except with default styling.
No semantic web person would have said <strong> and <em> “replaced” <b> and <i> except for the specific cases where <strong> and <em> are semantically appropriate. For example, italicizing ship names with <em> has always been wrong.
But boldness might not look like anything -- the user agent could be outputting to a braille reader for all the developer knows. This is the purpose of the redefinition.
Braille has typeform indicators that appear before the text to indicate that it's bold, italic, underline, etc. It is written differently in the sense that there's a character prefix.
Both <b> and <i> live on for traditional typographic use that doesn't imply emphasis (or a heading). <i> with a lang attribute (and, optionally, a class to indicate it's a foreign usage), f'rinstance, is the right way to italicize foreign words. And, depending on the text, those italics may be instances of <cite> or <dfn> or some such. You can - and should - pack a lot of useful metadata into HTML tagging; it can be of a lot of help to readers when typographic conventions change (as they always have throughout history).
However, if we're using - as it happens - <strong> and <em> as direct replacements, we do not gain much on the semantic side of things. "Italic" is still an abstract concept to express emphasis, by convention expressed in type by a cursive script font and in handwriting by a single underline. (A double underline would be equivalent to bold, and a triple one to small caps. An alternative method used in traditional typesetting to express the same hierarchy is letter spacing by various degrees. There's nothing in HTML saying that <i> shouldn't be rendered using letter-spacing by a browser. In early browsers, it was a regular feature that the related display styles could be set by the user.)
Edit: I strongly advocate the use of semantic tags like <cite> or even <q> (instead of using typographical quotation marks), lang-attributes, etc. This is only about genuine marks of emphasis (as in speech).
Edit 2: I just learned about the "undeprecation" - so read cum grano salis (<- example of a lateral change of context expressed by the use of typographical emphasis).
Here's the blog in question [1], written entirely in HTML and a bit of CSS on the fly. That is, the body is written by hand and mangled into the blog using a few definitions in a JSON-like data file. Apparently the result isn't too bad, as some posts have been featured here. (Some of the posts are rather extensive and go beyond simple notes.)
I hear you, but the benefit of HTML (especially in a WYSIWYG editor like SeaMonkey that doesn't support CSS) is that your style sheet and basic page layout are already done. When you start writing, you very rarely need to adjust them.
Check out motherfuckingwebsite [1], bettermotherfuckingwebsite [2], and thebestmotherfuckingwebsite if you want to see what can be done with pure HTML and minimal CSS.
I strongly disagree: 1 looks dated, aesthetically unpleasing, and unreadable because sentences stretch across the whole screen.
It is basically a text file in a basic font being rendered on the screen. You might as well use notepad instead.
What's the point of not using CSS to make it look at least half decent? What does this improve other than the elitist standpoint that websites should be no more than an unstyled text document because this is what it was 25 years ago?
I truly do not understand the constant pushback against things looking aesthetically pleasing on HN. Not everything has to look like a console terminal spit out some text.
Because 3 doesn't look half decent; it looks a mess from the GeoCities age. 2 is better than 3; its worst features are the terrible contrast and the tiny width that's barely wider than an iPhone in portrait mode.
I love this as a pure thought exercise/experiment — and it reminds me a lot of the types of websites we used to build 20 years ago.
That said, once the templating/boilerplate stuff comes into play, the author is like half a step away from trying to recreate PHP, which originally stood for Personal Home Page. Like, he’s even using PHP for his RSS feed!
Like, I'm not opposed to these thought experiments in the slightest - but I can't help noting the irony that most of these "simple" website engines (not talking about this approach specifically) almost always lead us back to our earliest CMS implementations.
I’m in my 30s. I literally grew up on pure HTML. But as much as I abhor how bloated current CMSs are, I also shudder when I remember how hard it was to create and manage HTML (and later, HTML and CSS) files by hand to update/maintain anything but the most basic website.
Yes it's funny when he writes "If you find the previous step too much work, write a shell script that copies the directory and removes the old content for you.". I can already see the next thought: "Well I have this script processing all my directories... may as well use it to do simple string replacement too, that way I can insert a link to the latest blog post", etc. Basically just PHP but with a more complicated custom setup.
It's still a script though. If there's a need for a script at all, one may as well do `php index.php > index.html` or use a static site builder as that will give a lot more flexibility.
Simplifying the process of creating static sites and avoiding existing tools has been tried many times and none of the solutions is really satisfying. In the end, most require "just a small script", but which is likely to grow in complexity over time. I guess creating static sites is not that straightforward so it makes sense to use existing tools to keep things simple.
Somebody else in this thread said it right, I think: the primary reason that PHP, CMSs etc. got popular is that people wanted the same header on every page, with the same links, instead of just a “back” link. I really have no need for that, which is why pure HTML works for me.
Of course, if I want an RSS feed, I need some dynamic generation. But I also think it stops there.
I totally understand that this works for you — and like I said, I’m not discounting this as a fun thing to do.
But I guess my broader point is that if I have to create a shell script to quickly generate the boilerplate file stuff for a new post/page, I’m _basically_ already doing what early PHP was designed to do. And at a certain point (and again, I’m not talking about you specifically, just the movements around this so-called simplicity more generally), we’re reinventing stuff that was literally created 25 years ago for these very reasons.
I’m hardly innocent of this either. As much as I hate them, I fantasize about and start working on a new CMS or SSG at least twice a year, only to inevitably abandon my efforts once boredom/frustration sets in.
> But I guess my broader point is that if I have to create a shell script to quickly generate the boilerplate file stuff for a new post/page, I’m _basically_ already doing what early PHP was designed to do.
I don’t think my proposed shell script (which I don’t even use) adds any complexity. (See my other comment [1] about it.)
Of course, I agree that it can go too far, and yes, sometimes we’re reinventing old stuff, but sometimes, we’re rather discovering that something that happens to be old actually works pretty well, if not better than whatever modern method we were using.
My thoughts as well. It’s not that people don’t realize you can write HTML in HTML - it’s just, we’ve long understood the caveats; it has poor hygiene. The only practical way to avoid needing duplication is scripting or frames, which is not ideal for the end users.
I think for usability and accessibility alone, we moved to scripting instead.
Frames were also often used for layout purposes, and when we all accepted the idea that markup and presentation should be separate, CSS was a much better solution.
If you write in HTML and know your elements well, then it all becomes easier and your HTML gets more feature-rich. It also encourages you to break down paragraphs that are okay for book-style writing into shorter web-style paragraphs.
What I find really helps for writing HTML in HTML is to use sections, articles, headers, blockquotes and so forth to structure your writing. WYSIWYG editors never allow you to do this properly. They are 'what you see' rather than 'what you mean'. There is a difference.
Blocks of text should be like procedures in code, in a section, aside or other HTML sensible element and with a header at the top.
If you enforce this writing style it becomes much easier to have super-neat documents. It is just a pity you can't do this with WYSIWYG or some markdown thing. Really HTML is the ultimate.
> It’s more fun that way. Look at this website: if you read any previous blog post, you’ll notice that they have a different stylesheet.
You know what? This is _great_. I think the idea of each page having its own self-contained snapshot of where the site was at when you read it is a great idea.
As I understand it, the original reasons for including some indirection in the production of HTML were: (1) templating - i.e. injecting boilerplate at compile or run time. This would include standard headers, footers, <head> elements, etc.; (2) navigation and tags - both of which are highly laborious to maintain manually, and (3) RSS feeds.
I would be _really_ interested to see someone use modern web platform features like HTML imports, web components, JS modules, or something else to achieve the above features _at run time_ while enabling one to write in nearly pure HTML.
And lose static rendering, all the accessibility, and the ability for simple parsers to understand and/or store the page. All that comes to mind when web components come up is "what were they thinking?".
Browsers have native, built-in components like <select>, <select multiple>, <input type="button">, <input type="date">, etc. Styling and behavior of those is self-contained, with a public API.
The idea behind web components was to provide a built-in way to build your own self-contained components - something people have been doing for a long time with tons of <div> elements. But it's always been a challenge because doing that well requires adopting some consistent approach (usually through a library) that builds its own mechanisms. And there are many different libraries, and they all have different approaches. Wouldn't it be better to provide a standard mechanism for defining custom components and abstracting their styling/behavior?
Web components and other modern web tech could eliminate the need for libraries like React to build complex web apps with custom controls.
> Web components and other modern web tech could eliminate the need for libraries like React to build complex web apps with custom controls.
But they won't, for the reasons presented here. They solve only the lower half of the problem. They're perfectly fine for building self-contained UI controls/elements, but create a top-level `<BlogArticle>` component to mimic the composability of modern frameworks and you lose all I mentioned above.
> But they won't, for the reasons presented here. They solve only the lower half of the problem.
Right, I wasn't arguing for use of web components for anything like <BlogArticle>. That's way outside their scope. But with web components being available, libraries like React would be optional/redundant when custom components are involved (like in complex web apps).
But I still think it would be fun to see someone hack modern web tech (even web components) to build modern blog features with pure HTML (+ sprinkled JS/CSS, as needed).
> libraries like React would be optional/redundant when custom components are involved (like in complex web apps)
This is where we disagree - this is not true at all. In complex web apps, what you need the most is high-level components such as `<BlogArticle>` that allow you to tame complexity, isolate reusable pieces of UI that might be as large as a page itself, and not just localized elements. As you acknowledge, WC doesn't even aspire to fill this role, meaning you'll still need higher level frameworks on top of them, which significantly diminishes the value of being built-in.
Have you ever tried to build a reusable multi-select combo box? That’s not easy without some framework. Event and state management can become a real headache.
But higher-level items like articles/comments are so much easier without a framework - I just don’t see how React/etc. is necessary given you have a decent templating library and a module loader that has CSS, JSON, and string plugins.
But I went a bit further: I write HTML by hand in vim (plus some Python scripts when applicable). Writing HTML is not that bad once you get used to it. My pages are not plain text anyway; they are mostly interactive things written in JavaScript. Since I spend considerable time coding, it doesn't bother me to spend five minutes putting tags here and there, too.
Coming up with the design decisions takes much more effort than typing in the actual tags.
And yes, I do have an RSS. It's also updated manually since it's again an effort too small to automate.
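For what it's worth, even the manual RSS update is only a few lines if you ever decide to script it. A sketch, assuming an RSS 2.0 feed file; the function name, the placement logic, and the assumption that titles are already XML-safe are all mine:

```javascript
// Prepend a new <item> to an RSS 2.0 feed string: insert it before the first
// existing <item>, or just before </channel> if the feed has no items yet.
function addFeedItem(feedXml, { title, link, pubDate }) {
  const item =
    `<item><title>${title}</title>` +
    `<link>${link}</link>` +
    `<pubDate>${pubDate}</pubDate></item>`;
  const anchor = feedXml.includes('<item>') ? '<item>' : '</channel>';
  return feedXml.replace(anchor, item + '\n' + anchor);
}
```

Run over the feed file with a couple of `fs.readFileSync`/`writeFileSync` calls, it replaces the hand edit while keeping the feed itself a plain static file.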
There's some irony in ditching all the things in favour of none of the things when actually the likely answer is some of the things...
But the whole static site generator thing is hilarious when you look at it, and classic developer territory. "I'm gonna install a gazillion tools to write some words on the web" - fine if you're learning stuff by doing it but for god's sake don't go suggesting it's "easy"...
It can be easier than installing and maintaining a CMS, and it's definitely easier (maybe not simpler) than writing everything out in plain HTML and then regluing it every time you need to change the footer.
Depending on the generator you chose, it might be a single binary rather than a gazillion of tools.
I spend most of the time on writing, editing, then programming, and only then typing the text and doing the layout. The last 5% of that could be automated, but then I wouldn't be able to write things down from my phone while traveling, or directly from my CHIP when exploring ARM-related things.
And this is the really important point. If developers spent nearly as much time thinking about the important bit: their words / grammar / spelling / structure - as they did about which static site generator to use - we'd see a monumental improvement in communication.
There are so many developer articles / blog posts which are just terrible - sometimes because of language issues, yes, but more often than not just poor writing and badly thought out arguments. But, you know, it's ok because they're using TheLatestStaticGenerator :-)
I use hugo (single binary) with Github and Netlify. When I push to the master branch, Netlify automatically builds the website and publishes it on my domain. Sure, not as accessible as Medium, but it’s pretty simple.
I do something similar for my own blog, purely because I enjoy having the flexibility of styling every blog post in a way that's appropriate for that specific page (i.e. a blog I did on photography[1] vs a blog I did on gaming[2])
That said, I'm a man of convenience. Mixins are too much of a nicety to throw out, so I wrote some small Apache CGIs to dynamically compile pug (aka jade) and stylus. I typically write directly on the server just using vim. Certainly not the workflow for everyone, but I quite like the experience.
Server Side Includes are unfairly neglected simply due to age.
Another way, seldom mentioned, is JavaScript. Just include some script tag, which inserts the HTML at the top, bottom, or wherever. An advantage it has over Server Side Includes is that it is cached.
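A minimal sketch of that script-tag include, assuming a shared `nav.js` file referenced from every page (the file name and links are made up); building the markup in one function keeps it testable outside the browser:

```javascript
// nav.js - hypothetical shared include, referenced from every page as:
//   <script src="/nav.js"></script>
function navHtml() {
  const links = [
    ['/', 'Home'],
    ['/posts/', 'Posts'],
    ['/about.html', 'About'],
  ];
  const items = links
    .map(([href, label]) => `<li><a href="${href}">${label}</a></li>`)
    .join('');
  return `<nav><ul>${items}</ul></nav>`;
}

// In the browser, write the nav at the point where the script tag appears.
if (typeof document !== 'undefined') {
  document.write(navHtml());
}
```

Since the browser caches `/nav.js`, every page gets the shared navigation without the server assembling anything.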
Wouldn't it be simpler to just have a static site generator? Combining templates is what they do best, and then it keeps the server side very simple: it's just static files you can dump on any web host.
The article mentions GitHub will host Jekyll websites for free. Is that a common misconception? GitHub will host absolutely anything for free, you’re not limited to Jekyll at all. It’s just GitHub has some extra support for it.
But I’ve never felt any need to take advantage of the Jekyll features. Pushing html of any sort to the gh-pages branch is trivially easy.
I think people mention Jekyll especially because it’s supported directly by GitHub – that is, they generate it for you (which is a bad idea, really, because of the differences between the various versions of Jekyll).
>Using a static site generator means that you have to keep track of two sources: the actual Markdown source and the resulting HTML source. This may not sound too difficult, but always having to make sure that these two sources are in line with each other takes a mental toll.
I write in asciidoc and I find this trade off is well worth it. It means I can write more easily, I can future-proof my documents (if html versions/practices change, I just export them again) and I can output in multiple formats instead of just publishing to the web.
I don't understand what he means by "keeping track of two sources". I have one source: asciidoc documents. I output that into one format (today, typically) HTML. I have a script that just adds a header/footer to each one and sync to s3 bucket.
If asciidoc dies I can convert them to reStructuredText, Markdown, or whatever format is popular next. My mark-up is very simple, headers, bold/italic/underline, bulleted and unbulleted lists, etc. So my "source" documents are simple to convert between formats.
I find this gives me the best combination of ease-of-writing, flexibility and future-proofing.
I've written a ton of notes for my university classes in HTML using Emmet. Spinning up a list with ul[tab]li[tab] is pretty easy and fast.
The main abbreviation-expanding stuff is enabled in VS Code by default, and you don't need to read all the documentation to get started - just the basic few things.
It's funny that people write a whole blog post about something that was normal in the early years of the web. I host my private programming website and travel blog, so I was never forced to change my behavior. I always write the HTML code myself.
The move to frameworks and generators came about because of changing site designs: no matter which page you are on, you always have a menu with lots of links. The plain "Back" link isn't seen anymore today. How I miss those days, guys - this is why I love exploring Neocities from time to time :).
Yes, I do that too. I do have scripts that do maintenance jobs, like changing the footer now and then. But they don't work with templates, they use the HTML input and just process it like any other data.
There was a time when you had to know how to write HTML - also FTP - oh, and fax letterheaded paper to register a domain.
Looking back now, MS FrontPage was way ahead of its time in dealing with this abstraction of generating HTML: when you made an edit, you saw the source change, which the browser (unfortunately just IE, but that's what we had) would then render.
I think we all look back fondly to the early days of HTML sites where everything was pure, but let's not fool ourselves into thinking it was the best setup for things like blogs. If you want to be a purist, do so by leveraging newer advancements like web components. Suggesting people copy and paste HTML/CSS for every post is a really bad suggestion for anything that isn't a weekend project that you will never touch again.
And also, isn't an HTML IDE the equivalent of a static site generator, except lacking any of the actual features that make generators appealing?
What I take away from this is the idea of each post having its own identity and feel (each post with its own unique stylesheet). Contrast to a blogging platform like Wordpress or a SSG like Jekyll where every post is generally formatted and styled exactly the same way. The author's blog posts are now these unique time capsules.
Obviously a blog at scale isn't going to do this, but I do think there's value in what the author is trying- where each post is styled and formatted independently with care instead of unceremoniously dumped in with a one-size-fits-all stylesheet.
To what end? The inevitable outcome is that you have a hodge-podge of HTML and styles, and if you want to replicate one you have to backtrack through a plethora of random posts to find the one that implemented it. I see zero value in a time capsule because it's not one. If he crafted every page from scratch - maybe, but by his own account he's copying and pasting. That's nothing to aspire to.
The thing is, you don't just copy-paste stuff. You make it better every time you spend some effort. Consistency is way overrated. Progress is what you should aspire to.
One problem I have with this is it violates the DRY (Don't Repeat Yourself) principle. The author embraces temporal inconsistency, but I'm not a big fan of it. Maybe inconsistency for content is okay, but for consistent DRY style I think I'd at least use CSS even if I didn't use a CMS.
It violates that principle but on the other hand it embraces the RY (Repeat Yourself) principle which has its own set of pros and cons, the major pros being simplicity of implementation and reduced risk of retroactive breakage. I’ve found this principle very useful also in software design. The DRY principle gets all the attention but we tend to ignore the question of when it’s actually better to repeat oneself.
I have three principles. The first one is KISS. The second one is DRY. The third one is that there are only three principles. DRY is less important than KISS. Sometimes, RY is a good way to simplify.
>There has to be some reason why people don’t write their personal websites in pure HTML. Well, it’s simple:
>>HTML is unpleasant to write.
>This is the only real reason.
This seems like a huge overstatement. His point about each page being a snapshot in time strikes me as a bug, not a feature, but to each his own. My biggest qualm besides this is that working without a template will make nontrivial isomorphic rendering and rehydration far more developer-time intensive, and I think we should be embracing that strategy for any behavior that can be implemented both with and without page reloads - even if you don't like swanky JavaScript, subsequent page loads will be faster if you only apply the necessary changes rather than reload the whole page. Finally, raw HTML isn't the cleanest format to write in. I don't want to be tied to a WYSIWYG, I'd rather be able to access my editor from anything with an ssh connection, and I don't want every text file I deal with to start with a bunch of repetitious boilerplate (headers and such). I'd rather write in a cleaner format for data and let the computer deal with fitting it into the template.
I don't think that HTML being unpleasant is a good reason not to write in it. If you're writing in something that compiles to HTML you should probably understand the model directly underneath you.
> working without a template will make nontrivial isomorphic rendering and rehydration way more developer-time intensive
What are you talking about? This is static HTML and static CSS. There’s no isomorphic rendering or rehydration. It already is “rehydrated”. And it will take significantly less time to download than dozens of kilobytes of JS doing the same thing.
The beauty is that you can have it request the JavaScript after the HTML, so those kilos of JS don't slow down first render and by the time a user clicks something you're probably rehydrated and ready to request subsequent pages without a full refresh. On the off chance you aren't rehydrated by the time of a click you just fall back to the normal behavior of an anchor link. All the delivery/render speed of static HTML with the subsequent page load times of an SPA. I could see not liking this approach if you're trying to minimize bits downloaded, but if you're concerned about user experience I'd advocate using this pattern on all internal links (pretty much -- there are weird exceptions like embedded content that refuses to run after initial render for security reasons). This approach is more complicated, but also more performant due to less overall DOM manipulation.
Step 1: client requests page.
Step 2: static HTML is delivered.
Step 3: static HTML begins rendering on client.
Step 4: client requests JavaScript attached to bottom of HTML document.
Step 5: JavaScript is delivered.
Step 6: JavaScript begins execution.
Step 7: JavaScript goes through the document replacing normal anchor links with elements that have click events which request only the content, and not the template, for subsequent pages[0].
Step 8: user clicks a "link."
Step 9: client requests content.
Step 10: content gets delivered.
Step 11: client updates part of the page and the URL bar, but doesn't cause a whole page refresh.
[0]Exception: you can give yourself hooks for a template change if you want, and even a significantly different layout on the same site will often have some common factors like footers and background.
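The flow above can be sketched roughly like this (all names here are illustrative, a minimal unoptimized version -- and note this sketch re-fetches the full page and extracts the content region, whereas a content-only endpoint would be the lighter-weight variant described in step 7):

```javascript
// Pure helper: should this href be intercepted? Only same-origin,
// non-fragment links qualify; everything else keeps default behavior.
function isInternalLink(href, origin) {
  if (!href || href.startsWith('#')) return false;
  try {
    return new URL(href, origin).origin === origin;
  } catch (e) {
    return false; // unparsable href: leave the anchor alone
  }
}

// Browser-only wiring (steps 7-11); guarded so the helper works outside a browser too.
if (typeof document !== 'undefined') {
  document.querySelectorAll('a[href]').forEach(a => {
    if (!isInternalLink(a.getAttribute('href'), location.origin)) return;
    a.addEventListener('click', async e => {
      e.preventDefault();                              // step 8: hijack the click
      const html = await (await fetch(a.href)).text(); // steps 9-10: fetch content
      const doc = new DOMParser().parseFromString(html, 'text/html');
      // step 11: swap only the content region and update the URL bar
      document.querySelector('main').replaceWith(doc.querySelector('main'));
      history.pushState({}, '', a.href);
    });
  });
}
```

If the script hasn't run yet when the user clicks, the plain anchor in the markup does its normal full-page navigation, which is exactly the fallback described above.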
Users don't like waiting for pages to load. Rendering fewer new DOM elements takes less time. Thus, user experience is improved by not doing a full page refresh when it isn't necessary.
>How do you not sacrifice initial load speed if on initial load you need to load extra Javascript?
Add the "defer" attribute to the script tag. This causes the script to run only after the document has been parsed, so nothing waits on it. This (along with NoScript, browsers that don't fully implement JS, etc.) is the reason for including anchor link fallbacks in the initial HTML payload. If the user clicks a link before the script runs, the site behaves acceptably.
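For example (a plain-HTML sketch; the filename is illustrative):

```html
<head>
  <!-- Parsed now, executed only after the document is parsed;
       first render never waits on it. -->
  <script defer src="/js/enhance-links.js"></script>
</head>
<body>
  <!-- Plain anchors stay in the markup as the no-JS fallback. -->
  <a href="/posts/hello.html">Hello, world</a>
</body>
```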
>How is subsequent speed increased? In case of static pages everything but content is already cached.
Let's say a user is browsing a blog they've never been to before, going between posts. There's a blogroll on the side of the layout that doesn't change between posts. Every time the browser requests a new static page it has to request this blogroll (as well as everything else) and render it all over again. Any images, scripts, and stylesheets used between pages will be cached, but the browser won't know which bit of text was for the blogroll versus the content; there's no real distinction in the DOM. Even the elements that are cached will be rendered all over again. The request time is practically nothing (just a cache check), but it's more time consuming to have the DOM render a cached image than it is to do nothing to an image that is already rendered and sitting in front of the user. Changing less is less expensive than changing more, in terms of render time.
It was flamed pretty badly when first introduced to HN, but I still think Steve Howell's Shpaml (http://shpaml.com/) completely removes the pain of writing angle brackets and keeping track of closing tags. Plus, it makes writing arbitrary XML really easy, just the thing for writing DocBook, I'll bet. And no need for a specific editor, just use your favorite.
I'm definitely going to look at Emmet, though. Thanks to all for mentioning it!
Would love to do this.
The reason why I'm going to stick to generating from markdown is mainly to have markup that other sites can parse, in this case, microformats2. Writing structured data by hand is brutally prone to error.
Otherwise I adore the idea and I find it lovely! :)
"Case in point about the downsides, the cv link is correct on the author’s homepage. It is not correct on this page. That’s an easy mistake to make, and I’ve definitely made versions of it. However, it’s much more pleasant to fix when you can fix it everywhere by updating a template."
If HTML had some kind of templating functionality so that you could trivially reuse, say, a header or a footer then lots more people would write HTML in HTML.
But, uh, kinda feels like the author's "reasons" aren't balanced in terms of comparing "maintaining a website by writing HTML directly" to "maintaining a website by using a static-site generator".
e.g.:
"But how can I then keep the style and layout of all my posts and pages in sync?" Simple: don’t! It’s more fun that way. Look at this website: if you read any previous blog post, you’ll notice that they have a different stylesheet. This is because they were written at different times. As such, they’re like time capsules.
...But if someone wanted to maintain/update consistent styles across a website with only HTML files, then this would be a point in favour of a templating system over writing in HTML directly.
Finally, you constantly have to work around the limitations of your static site generator.
Let's say you want your site to have a specific directory structure, where pages are sorted under various categories. With Jekyll, this is practically impossible, and even though you technically can get it working if you really try, it is only through much effort and with source code that is organized much more unintuitively than if you just wrote it directly in HTML.
While it's fair to say that you have to learn the idiosyncrasies of an SSG to use it,
I'm not sure this is a fair criticism of Jekyll.
Seems that page output is determined by a 'permalink' key.
https://jekyllrb.com/docs/permalinks/
(Found through a quick search for "jekyll structure").
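For instance, a front-matter sketch (this assumes Jekyll's documented `permalink` key; the title and path are made up):

```yaml
---
title: "Why I like plain HTML"
permalink: /essays/why-i-like-plain-html/
---
```

A site-wide pattern like `permalink: /:categories/:title/` in `_config.yml` is the other documented route, though whether either counts as "just putting files in directories" is the point under dispute below.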
While I agree that some sites have a reason for wanting to keep the look of all pages in sync, I think it's fair to point out (because most people won't think of it) that this isn't really all that important for a basic blog. You don't really need all your posts to look identical. There are some things, like what url the home page links to, that need to be kept in sync everywhere, but that isn't hard to do.
It absolutely is a fair criticism of Jekyll, because I have experience with it. I want it to work without having to manually set some key – I want my posts put in specific subdirectories by ... actually putting them in specific subdirectories. Which is impossible with Jekyll.
As yet another person authoring in HTML, a couple of things I found rather helpful are XSLT for a bit of processing (generating indexes, atom feed, etc) [1], and though I generally dislike WYSIWYG editing, as mentioned in other comments, it can indeed be helpful to read documents with many inline links; in emacs, I liked how org-mode handles it (only hiding links, but nothing else), and wrote something similar for HTML [2,3].
The author really would benefit from taking a look at SGML (on which HTML and XML are based). SGML has entities (for pulling in common content at multiple places), context-dependent transforms (for producing navs, link decorations, etc.), and short references (for parsing custom Wiki syntax such as markdown or CSV into regular markup).
Hm, I’m interested, but isn’t SGML practically a different language? Apart from syntax, are those concepts translatable to HTML? I mean, can I use it to write web pages?
SGML is a markup meta-language: it takes a vocabulary (a Document Type Declaration grammar, for HTML) as input, along with the actual HTML source markup. See [1] for an HTML5 DTD which has all the necessary declarations for HTML's element omission/inference rules, enumerated attributes, etc. Apart from element inference and other markup shortforms, this will be familiar if you know XML (which is a subset/profile of SGML), but SGML has a number of additional concepts for content authoring/page composition, such as link processes (an additional type of declaration set that can define an automaton for assigning attribute values based on context, and can be pipelined to produce "views" of documents such as a table of contents or a nav) and short references (element-context-specific rules for replacing custom text sequences with other text or markup tags). My SGML implementation also has templating, an additional form of parametric entity inclusion based on concepts from HyTime, without adding any new syntax to SGML.
As to writing practical web pages, I'm working on it ;) The pages/blog you see on sgmljs.net are completely produced by SGML.
I would agree somewhat for pure textual matter. Incidentally, that's the one thing where SGML doesn't totally suck, as it was pretty much invented for this.
It's still not a pretty writing experience if you're going beyond just paragraphs and the occasional emphasis. Take inserting hyperlinks as an example: I've yet to see a UI that makes that easy enough; highlighting text and then invoking a pop-up window is often a bit wasteful and harder to automate (e.g. inserting the current selection or the top-most browser's content). At least with a WYSIWYG editor the content won't be littered with a huge blob of computing ephemera, but having the URL hidden also means that you can't just grab it if you want to repeat the link.
Now the problem is that modern HTML authoring often isn't about regular text, but about creating some kind of hierarchical structure that serves as the content for some pretty involved navigation or data manipulation histrionics. Which means plenty of elements, plenty of nesting, with comparatively little content. And that runs into all the areas that made XML the horror that it is.
In a perfect application of "This is normal now", people are actually quite content with being in this burning room of tags and attributes, as long as the IDE helps creating this mess. JSX, Emmet, templates -- it really reminds me of the early days of Java, where everything was just solved by the IDE or code generation tools (getters/setters, DI etc.).
I have been doing this for more than 24 years now. I even developed my own editor (based on the Crystal Edit controls), which has navigation: pressing F5 on a link will open the file (when it is local) at the given location (following anchors) or send it to the default browser (when it is external). It has syntax highlighting with checks on matching tags and inline JavaScript. See: http://www.iwriteiam.nl/MySample.html
I also wrote a program to check the consistency of all HTML files on my website. It checks whether all internal links are correct and even does some checking on reverse links. Furthermore, it also has a mechanism for maintaining tags with blog entries. It processes the FTP log to determine which files need to be uploaded and places these in an upload directory (while adding a BASE HREF to the HTML files).
My personal website is generated with Jekyll. I use Jekyll because it's better for links, sections, and other automated things like "related posts". It would be a tiresome task to do it all manually.
I respect this guy anyway. He wanted simplicity and that's ok. I've seen that one of his posts has no CSS styling; that is a product of manual editing: you lose consistency.
The only caveat I see on this website is when they set the width of the content. I love fluid pure-HTML websites.
My website is a combination of nice HTML + functional CSS, all managed with Jekyll + Markdown; you can check it at http://minid.net/. It's blazing fast and 100% fluid.
I switched to this years ago with some Python code which uses the data embedded in the HTML source to generate things like feeds or re-applying templates. I've mostly neglected it for years but am generally happy with the approach of developing templates interactively and then having a script which just pulls values across using selectors (e.g. take [itemprop="articleBody"] from the original page and use its children to set the contents of [itemprop="articleBody"] on the template). This was nice for allowing you to do anything you want at any time without having to fight a templating system or CMS structure.
This reminds me of a microblog I used to keep; I had one `blog.html` page, and added to it with an `<h2>` for the date and a few paragraphs of text... good memories :)
I'm starting to think that's a good idea as a supplement to a "real" blog - sometimes there's just not enough oomph in a thought to carry it through to a full post but would justify a couple of paragraphs.
Of course, working out how to make this convenient to use is going to be the tricky bit...
Adding a new note to Dercuano just involves writing a new Markdown file in the markdown directory, so as with HTML, there's no threshold. Actually, there's less of a threshold: with HTML nowadays, I have to add stylesheet links, and meta tags so browsers don't interpret it as ISO-8859-1, and more meta tags so phones display the text at a readable size, and a <script> tag to add the table of contents... with Dercuano, I just start writing text. By default, the title is generated from the filename, or from the <h1> if the Markdown starts with one.
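The per-page boilerplate being avoided is roughly this (filenames illustrative):

```html
<meta charset="utf-8">  <!-- otherwise browsers may fall back to ISO-8859-1 -->
<meta name="viewport" content="width=device-width, initial-scale=1">  <!-- readable on phones -->
<link rel="stylesheet" href="style.css">
<script defer src="toc.js"></script>  <!-- builds the table of contents -->
```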
Why not just copy and paste? Well, aside from reducing the download size, sometimes I want to improve something for all the pages at once. Fonts, for example. There's always the risk that that will break some existing page, but I think the grain size where that's the dominant part of the tradeoff is somewhat larger than the million-word size of Dercuano.
I mention this not because I want you to use the Dercuano software (although you're welcome to and it's in the public domain so you're legally protected) but because I want you to know that writing your own is an easy weekend project.
I feel like a happier medium would be to use a Makefile or something to automatically concatenate a shared header + content + shared footer. Addresses the concerns about maintaining the bits and pieces shared among multiple pages, while at the same time avoiding a lot of the complexity of a full-blown static site generator.
I don’t think this is as simple as you suggest (because I’ve tried things like it). First, you have to keep track of which documents need the same header/footer (unless all are exactly the same). Second, you have to extract the title from the content somehow. There are a bunch of little steps like these that make it more complex (in my mind) than just writing HTML directly.
> First, you have to keep track of which documents need the same header/footer (unless all are exactly the same)
That should be pretty easy to program into a Makefile.
> Second, you have to extract the title from the content somehow
Or just "cat headerfront $PAGE/title headerback $PAGE/body footer > $PAGE.html".
Like yeah, there's some upfront complexity, but nowhere near the complexity of manually editing these things on each and every page. Once you have these programmed in and scripted out, you get 90% of the useful stuff a SSG does for you with 10% (at most) of the moving parts.
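A sketch of that pipeline (layout and filenames are illustrative: each page is a directory holding `title` and `body` fragments, with the shared pieces alongside):

```shell
# Assemble each page out/<name>.html from shared fragments plus the page's
# own title and body. Assumed layout: pages/<name>/title, pages/<name>/body,
# with header-front.html, header-back.html, footer.html in the current dir.
build_site() {
  mkdir -p out
  for page in pages/*/; do
    name=$(basename "$page")
    cat header-front.html "${page}title" header-back.html \
        "${page}body" footer.html > "out/$name.html"
  done
}
```

Change the footer once, re-run the function, and every page picks it up -- the 90% of an SSG mentioned above, with one shell function as the moving part.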
There are bits that are pretty hacky, but being able to put anything directly into the html, just the way I want it, is much better than messing with various converters. Simple HTML is really fine to write in.
I once made an app for this - http://hammerformac.com. It was a delight to write simple HTML with hot reloading, then add other bits when I needed to. It was a hard sell to developers who like shiny new frameworks, but people who used it really loved it.
Step1: Write pages in pure html
Step2: Organize your html into folders that are similar
Step3: Write some scripts to generate more folders easily
Step4: Write some sed scripts to make changes across files
Step5: Release your scripts as your own site generator
Or you could just spend 15 minutes trying to patch your existing site generator to suit your needs. This article sucks. Downvote.
You're mixing interface with implementation details, and I think that's not good for maintainability in the long term.
Keeping the interface in another format/language is good practice.
HTML is just the compiled output (an implementation detail), which we shouldn't need to be aware of.
I've given up trying to get people to understand that writing fundamental code and markup is far easier than CMS/library/framework hopping. My company created two large web sites that you visit at least once a month, and no one believes the HTML and CSS is all handcrafted. To this day I'm still told what we do is impossible, even when I showed them the source.
That was years ago when I did that. I won't do it again.
Don't think we still don't do it. My point was that I haven't bothered to explain anything to anyone for years, not that we haven't done it for years. We would never use code generators because we're professional programmers and know how to create our own output.
While I do not have issues with using some well-developed library, depending on whose library it is, we are more likely to only use it till we develop our own.