
The last time I remade my blog site was about three years ago. Here I am again, having a little fun. My site in general was due for an overhaul.

I'm going for a more modern-but-professional look, nothing too fancy or flashy. If you want flashy you can always visit my hobbyist portfolio.

My previous blog site was a simple PHP application, separate from the landing page. This one is served by a Go backend. Actually, when I first wrote this new site, it was a single-page React app (see "site", the original iteration, in the repository). I really like SPAs—just the idea of a complete little app that runs entirely on the client side. Very cute. In that iteration everything was handled by the client, e.g., it would fetch blog articles in Markdown format and render them client-side.

However, I have some disturbing memories about Google indexing SPAs. Randomly, very randomly, even the simplest of apps would end up indexed as a blank page. No error or reason why; it's like the Googlebot just decided to not behave that day. That would cause pages to disappear from Google. I'm not sure if that's still a problem today, but it was just a couple of years ago.

Rewriting in Go

Anyway, that's a good reason to have more fun, right? Rather than a typical React app—I mean, don't get me wrong, writing React apps is tons of fun, but it's nothing new. So, rather than a React app, I migrated everything to a Go backend. It's also neat to see it running on shared hosting.

You can't directly run a persistent server application/backend on shared hosting, since long-running processes usually aren't allowed. Luckily, Go ships with a fastcgi library (net/http/fcgi in the standard library) to set up a FastCGI interface easily. That's neat. I have a long-term Dreamhost plan for shared hosting, and it supports running .fcgi files directly. I tested and confirmed it's configured properly: the host spawns a process and keeps it alive so long as requests keep coming in.
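Here's a minimal sketch of the idea (the route and handler are hypothetical, not my actual code). Passing a nil listener tells the fcgi package to accept connections over stdin, which is how the web server hands requests to a spawned .fcgi binary:

package main

import (
   "fmt"
   "net/http"
   "net/http/fcgi"
)

func main() {
   mux := http.NewServeMux()

   // Hypothetical route; the real app registers its own handlers.
   mux.HandleFunc("/blog/", func(w http.ResponseWriter, r *http.Request) {
      fmt.Fprintln(w, "blog index goes here")
   })

   // nil listener = serve over stdin, i.e., as a spawned FastCGI
   // process rather than a standalone HTTP server.
   if err := fcgi.Serve(nil, mux); err != nil {
      panic(err)
   }
}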

Templ

When first fudging about with the Go backend, I was using the standard library's templating. It's very cool that Go includes a powerful template system in its standard library, but I could not for the life of me figure out how to pass a subtemplate as a "pipeline", i.e., as an argument to another template. It turns out that you can't?! I was pretty peeved to waste multiple hours only to find that such basic functionality isn't available out of the box. Take for example:

// Define new template
<MainWrapper>
   <div class="mywrapper">
      {children}
   </div>
</MainWrapper>

// Use template in another template.
<MyInstance>
   <MainWrapper>
      <h1>{title}</h1>
      <p>{content}</p>
   </MainWrapper>
</MyInstance>

There's no way to have a {children} insertion like that from another template unless you jump through hoops or have a simple string. So, I browsed about for alternatives.
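For the curious, here's roughly what one of those hoops looks like (a minimal sketch, not my actual code): render the inner content first, then pass the result into the wrapper as a plain data value.

package main

import (
   "html/template"
   "os"
   "strings"
)

var wrapper = template.Must(template.New("wrapper").Parse(
   `<div class="mywrapper">{{.Children}}</div>`))

var inner = template.Must(template.New("inner").Parse(
   `<h1>{{.Title}}</h1><p>{{.Content}}</p>`))

func main() {
   // Render the "children" first...
   var buf strings.Builder
   inner.Execute(&buf, map[string]string{"Title": "Hello", "Content": "World"})

   // ...then pass the rendered HTML as data. template.HTML marks it
   // as already-safe so the wrapper doesn't escape it again.
   wrapper.Execute(os.Stdout, map[string]any{
      "Children": template.HTML(buf.String()),
   })
}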

Templ came up, something that provides behavior similar to React's JSX: inline HTML within your code that compiles down to normal Go code. I really like that approach, though I know it can be hairy to get the tooling right. I found the tools to be a bit rough around the edges; go fmt wouldn't work well with it, and I'd get error popups from the VS Code extension. Still, I really like the concept. I hope to see it improve over the years with more quality-of-life features, like using plain attributes instead of special syntax. The closer we are to normal Go and HTML syntax, the better, in my opinion. It just really, really eases the learning curve with templating.

It wasn't too hard to migrate the React templates to Templ, just a few naming-scheme and function changes here and there, and voila – server-rendered pages. Here's a side-by-side comparison of the Templ template and the original React component, for your curiosity:

Templ:

templ blogSection() {
   {{ index := blog.GetBlogIndex() }}
   <section class="blog-section content-section relative" id="blog">
      @sectionHeader("blog")
      <a href="/blog/index" class="text-normal font-bold absolute right-0 top-2.5">View All &gt;&gt;</a>
      for i := 0; i < 5; i++ {
         if i >= len(index) {
            break
         }
         @blogExcerpt(index[i])
      }
      <p class="font-bold"><a href="/blog/index">Blog Index &gt;&gt;</a></p>
   </section>
}

React:

export function Blog() {
   const [index, setIndex] = useState<BlogIndexEntry[]|undefined>([] as BlogIndexEntry[]);

   useEffect(() => {
      getBlogIndex().then((index) => {
         if (index) setIndex(index);
      });
   }, []);

   const excerpts: JSX.Element[] = [];
   for (let i = 0; i < 5; i++) {
      excerpts.push(<BlogExcerpt key={i} data={index ? index[i] : undefined}/>);
   }

   return <section className="blog-section content-section relative" id="blog">
      <SectionHeader name="blog"></SectionHeader>
      <Link to="/blog/index" className="text-normal font-bold absolute right-0 top-2.5">&gt;&gt; View All</Link>
      {excerpts}
      <p className="font-bold"><Link to="/blog/index" className="">&gt;&gt; Blog Index</Link></p>
   </section>;
}

Deploying to Dreamhost

Another minor complication is that Dreamhost can be unreliable for running heavier tooling. I'll sometimes see it randomly terminate a compilation process because it's feeling fussy. That's no good. (Maybe I was doing something wrong, but it left me uneasy about running anything other than web page serving.)

Rather than compiling the Go app natively on the Dreamhost server, I decided to cross-compile (or "pseudo-cross-compile"? since I'm not running the compiler on Windows) locally and then copy the artifacts over. I do that with a small Dockerfile that runs the compilation steps on Ubuntu, plus a docker-compose file that copies the artifacts into a mounted volume. Then I can copy those to the server directly.
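The Dockerfile amounts to something like this (a sketch; the output name and paths are hypothetical, not exactly what I use):

# Build a Linux binary for the shared host. /out is the directory
# that docker-compose mounts as a volume to collect the artifacts.
FROM ubuntu:24.04
RUN apt-get update && apt-get install -y golang-go
WORKDIR /src
COPY . .
RUN mkdir -p /out && go build -o /out/site.fcgi .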

An .htaccess file handles routing the backend requests to the FastCGI program. That's the only part that needs to be set up manually, since it contains some other configuration as well. The rest of the artifacts are easy to deploy with automation.
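The routing part boils down to something like this (a sketch; the binary name is hypothetical, and my real file has a bit more in it):

# Run .fcgi files through FastCGI, then send any request that isn't
# a real file on disk to the Go binary.
AddHandler fcgid-script .fcgi
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ /site.fcgi/$1 [QSA,L]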

Debugging

So how does this work with local debugging/testing? With React apps I have a basic test-server app that I use for debugging. It's included in my React/Tailwind/Typescript Bundle.

In this case, we have a little more to handle, such as the FastCGI server backend. I updated the test-server app to include a FastCGI reverse proxy. Any requests that are meant for the backend are routed accordingly. It uses the fastcgi-client library (https://www.npmjs.com/package/fastcgi-client), which, by the looks of it, is pretty new. Thanks to the author for sparing me from an ugly protocol.

I have a few npm scripts to run the test-server, Tailwind, and webpack (watch mode), and my VS Code launch configuration includes templ generate as a prelaunch step to make sure the templates are up to date when restarting the backend locally. Ideally that should be a pre-build step, not a launch step, but I'm no expert in VS Code shenanigans.
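In case it's useful, the relevant bits look roughly like this (the task label is arbitrary):

// tasks.json
{
   "version": "2.0.0",
   "tasks": [
      {
         "label": "templ-generate",
         "type": "shell",
         "command": "templ generate"
      }
   ]
}

// launch.json, inside the Go launch configuration
"preLaunchTask": "templ-generate"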

Concerns?

I wouldn't really recommend this path (Go backend) for shared hosting. In general, uncharted paths are usually not a great approach to something reliable. I'd consider this an experimental environment. I haven't seen any problems, but I feel slightly uneasy that a request will be terminated randomly, since this isn't a normal, battle-tested setup.

I'd say PHP may be the best language for shared hosting, given its copy-and-paste deployment and mature ecosystem. It's so easy to work with. (But, PHP, right? Yuck!)

For personal projects I like to step outside of my comfort zone, learn a new thing or two while having fun. The final result stack is Tailwind, Typescript (minimal scripting, no React), Go, and Templ. Pretty neat! Check out the source on GitHub.

That Poor Rug

Rug? What rug?

So, I know personally that JavaScript dates are hairy. While writing my new blog site (this site), I wanted a function to convert a yyyy-mm-dd format into a plain English format.

Here's my AI prompt:

show me code to translate a "yyyy-mm-dd" string such as
"2024-12-08" to a human readable string such as
"December 8, 2024", in javascript

Not the cleanest prompt, I know, but the question here is not vague. ChatGPT happily spat out a function for it.

function formatDate(inputDate) {
   // Parse the input date string
   const date = new Date(inputDate);

   // Check for invalid date
   if (isNaN(date.getTime())) {
      throw new Error("Invalid date format. Use 'yyyy-mm-dd'.");
   }

   // Create an options object for formatting
   const options = { year: 'numeric', month: 'long', day: 'numeric' };

   // Use Intl.DateTimeFormat to format the date
   return new Intl.DateTimeFormat('en-US', options).format(date);
}

Notice what's wrong? Claude also did the same thing, forgetting to set the time zone.

function formatDate(dateString) {
    const date = new Date(dateString);
    return new Intl.DateTimeFormat('en-US', { 
        year: 'numeric', 
        month: 'long', 
        day: 'numeric' 
    }).format(date);
}

To fix the code you just need to add this to the options:

timeZone: 'UTC'

Telling the LLM to fix it works well enough too. Otherwise, the function would return an incorrect date (one day behind) if the local time zone is west of UTC. Basically, this would happen:

>> console.log(formatDate('2024-12-08'));
<< December 7, 2024
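For reference, here's the corrected function in full; it's the same code with the one-line fix applied:

function formatDate(inputDate) {
   // A date-only string like "2024-12-08" parses as UTC midnight.
   const date = new Date(inputDate);

   if (isNaN(date.getTime())) {
      throw new Error("Invalid date format. Use 'yyyy-mm-dd'.");
   }

   // Format in UTC too, so the date can't roll back a day in
   // time zones west of UTC.
   return new Intl.DateTimeFormat('en-US', {
      year: 'numeric',
      month: 'long',
      day: 'numeric',
      timeZone: 'UTC'
   }).format(date);
}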

Okay, so the rug. The poor rug. Every time little problems like these pop up, AI vendors do their best to sweep them under the rug, fine-tuning their models to hide them.

AI vendors are sweeping multitudes of common problems under the rug to make their models appear better in benchmarks and in the public eye. Is this helping the models? I'd say no. Tuning for specific subjects is inadvertently going to affect other subjects, much like adding noise to a clean signal.

It looks good on the surface (for example, OpenAI scoring new records on ARC-AGI-Pub), but the scope is so narrow and focused that it feels like we're ignoring the broader goal. I think the best results come when language models are in a more "natural" state with broad, non-specific tuning.

I don't have a proposal for how to achieve the next order of reasoning, but the current direction is a bit concerning.

Hopefully we'll see more progress in the near future, but I wish vendors would be a little more honest about the state of AI.

No doubt you have noticed the excitement surrounding LLMs and their ability to solve problems. Everyone wants to shape them into disruptive products that will revolutionize old processes, especially in the software engineering domain.

But can LLMs really get past the wall of reasoning to write good code? Honestly, I don't know for sure. There are big bets on "agentic" workflows breaking the reasoning boundary, but with my programmer background it's easy to remain skeptical. Here is one way to look at it: when you ask an LLM to multiply two very large numbers, such as 7380580207762439311 and 237196197329347341, no doubt an LLM by itself will get it wrong, because the answer is not written down anywhere. However, today's models -do- get it right, thanks to training that has them generate code scripts, basically using a calculator.

In other words, basic arithmetic is a simple enough concept for an LLM's training to handle, to translate the question into code to execute. So, that level of problem you could consider to be "solved."
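The generated "calculator" is usually a Python script, but the idea looks like this little Go sketch: exact big-integer math instead of mental arithmetic.

package main

import (
   "fmt"
   "math/big"
)

func main() {
   // Parse the operands as arbitrary-precision integers and
   // multiply exactly; no guessing involved.
   a, _ := new(big.Int).SetString("7380580207762439311", 10)
   b, _ := new(big.Int).SetString("237196197329347341", 10)
   fmt.Println(new(big.Int).Mul(a, b))
}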

Now increase the complexity—increase it by a lot. That's software engineering. The people behind powerful AI models will tactfully ignore how complex software engineering is while promoting their product.

While chain-of-thought and other prompting techniques can help you dredge up real answers from the depths of an extremely complex model, it's easy to overlook the basics: it can't find answers that aren't there. No matter how much power is thrown at an LLM with hundreds of agents running in parallel, there is still the base problem.

Consider the basic arithmetic question again. How much compute would be needed to brute-force an answer if the model could not generate code? If it was not given a technique to solve that sort of problem? On top of that, it needs to verify whether the answer is actually correct.

Software engineering is a much harder problem than basic arithmetic. How would you go about making a technique to solve it? How can LLMs leverage said technique? Is it an impossible problem? I don't like to say anything is impossible, but I would think this won't be solved for the foreseeable future.

I would like to be proven wrong. Why? Because programming is such a ridiculously time-consuming craft. I've sunk hours upon days upon years of my life into piecing logic together, achieving correctness in so many bizarre scenarios. Meanwhile, hardly anyone can appreciate that effort because the real work is invisible to the average person. If there's going to be a future where it would be easier to create as a developer, I'd love to see it.

I don't want to discredit the hard work going into changing the future of software engineering. There have been some very real advances. For one, I absolutely enjoy the time savings we get from tools like GitHub Copilot, which autocompletes much of my modern code. When Google says that 25% of their new code is being written by AI, I'd bet most of it is from autocomplete. LLMs and humans go great together.

As for your average Joe being able to create full business applications from a feedback loop, I wouldn't bet on that being viable for quite some time. It's certainly achievable to "flavor" applications, to tailor them with prompting, but beyond that it gets very, very tedious.

Not being correct 100% of the time is a fundamental characteristic of LLMs, and in a domain centered around correctness (software engineering), it's easy to disagree with what AI vendors are predicting for the "next 5 years."

There are two terms I like to keep in mind when building software. Let me define what they mean to me. This is a bit of a brain dump of some of my thoughts.

(1) Robust Software

Robust software is basically software that -works- as you expect, and when it -cannot- work, it tells you why. Emphasis is placed on the latter, because the importance of recovery is easy to overlook when writing a feature specification.

When something fails, it shouldn't be silent. Working as a support engineer for a few years has made me value debug logs a whole lot more than before. There are many, many ways a program can fail. Despite the failure being "unexpected", programs should avoid saying just that. That's just asking for support tickets.

Did an "unexpected" network failure occur? Tell the user to check their network. Get specific. Those specific details can greatly help someone knowledgeable. A key with robust systems is that you can easily tell what is wrong in an "unexpected" failure. I like to call rare exceptions as "controlled" when we do handle them gracefully. A controlled failure is when you have encountered a rare scenario, but you have some knowledge where the exception came from. You can guide the user how to fix it. Even better if the system can try to recover automatically.

I have seen a lot of code that skips over exception handling (especially in notoriously lax JavaScript). That's not the end of the world for an expert, but it is going to be nearly impossible for a normal user to get past the issue without help. I guess we could say, essentially, that a low rate of failure combined with ease of recovery makes your system robust.

(2) Flexible Software

When developing a software specification, it's very, very easy to end up with a naive design. One of the tips offered by The Pragmatic Programmer (Hunt & Thomas) is "Think! About Your Work". It takes time to think. I'd say you are trying to predict the future. You don't -really- know exactly how your users will use your product--feedback is a critical part of development--but you can try your best to predict use cases to make your product flexible.

Already have a spec? Don't follow it blindly. Be mindful. Your manager might actually encourage -not- being mindful, since the additional "thinking" costs them time and money. It might not be part of the contracted budget to think (loosely termed); however, I do not like that methodology. I'd say being stingy with contracts hurts the company's future. Sure, you need to set boundaries with clients, but don't go sacrificing quality for no good reason. At least communicate the potential flexibility problems you have identified; maybe the client will broaden the contract. At the very least, they will very much appreciate the communication.

Flexibility isn't just baked into the features, either. It's a fundamental principle of good software development, e.g., "Don't depend on concrete classes". I worked on a client project a couple of years ago, implementing a microservice. It came up on a call that they wanted to use an SMS service that was not in our original specification at all. As decent programmers, we expect the system to evolve; we keep flexibility in mind during development. We could tell the client that their service could be supported easily. After all, I had kept the connectors as flexible interfaces, predicting that someone might want to use a different service than we originally supported. Happy client.
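In Go terms, the principle looks something like this (a hypothetical sketch, not the client's actual code): the service depends on a small interface, so a new SMS provider is just another implementation.

package main

import "fmt"

// Notifier is the only thing the rest of the service depends on.
type Notifier interface {
   Send(to, message string) error
}

// LogNotifier is a stand-in implementation; an SMS-gateway-backed
// one would satisfy the same interface.
type LogNotifier struct{}

func (LogNotifier) Send(to, message string) error {
   fmt.Printf("to %s: %s\n", to, message)
   return nil
}

func notifyUser(n Notifier, to, msg string) error {
   return n.Send(to, msg)
}

func main() {
   _ = notifyUser(LogNotifier{}, "+15550100", "Your report is ready.")
}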

Flexibility aligns with many other good development practices, e.g., reusability, extensibility, maintainability, etc. I like the term "flexible" because it's simple, something non-engineers can grasp easily. The more flexible something is, the more likely it can be used for a long time.

What kind of executive doesn't want to leverage the latest technology to cut operational costs and drive their company into the future?

I've spent a bit of time over the last year considering what is and what is not great about LLMs. What is great is assistance - letting the model help you find something, where the input is arbitrary. That's a no-brainer, given the massive popularity of ChatGPT for solving problems or getting details about something.

What isn't great is automation, where the model works by itself. It's a tricky domain, and I'm sure there are decent use cases, but such use cases are sparse. Often people aren't upfront about the risks (especially vendors).

Here's an example: security questions. Users can't always remember the exact format of their security question answer. Say the question is "What was your first job?" and they originally entered the full job title and company. Later, they can't remember if they put just the job title, the company, or both.

So, your junior engineer gets an idea to use an LLM to fuzzy-match the input to see if it's correct. Example prompt:

You are a security question checker. Compare the
RealAnswer and Input to see if it is correct for the
given Question. The Input doesn't need to match the
RealAnswer exactly, but the concept/answer should
essentially be the same (fuzzy-match). Respond with JSON
output with a single boolean field "matches".

<Question>What was your first job?</Question> 
<RealAnswer>Accountant at FirstFinancial</RealAnswer> 
<Input>Accountant</Input>

So in this case, if they enter "accountant" or "firstfinancial", it will return true. Great, the engineer has made the process more intelligent and intuitive!

What non-engineers might not consider here is all of the caveats. Firstly, you might encounter input that breaks the model, where the LLM doesn't even return proper JSON. In that case, you want to fall back to traditional fuzzy-matching tactics, e.g., count how many words match, ignore letter case, etc. (I prefer JSON output because it's easy to check whether a valid JSON object is present, and then to parse the expected field.)
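A sketch of that fallback shape in Go (the JSON field matches the prompt above; the fuzzy rules here are deliberately crude):

package main

import (
   "encoding/json"
   "fmt"
   "strings"
)

type matchResult struct {
   Matches bool `json:"matches"`
}

// parseLLMVerdict returns (verdict, ok); ok is false when the model
// didn't return valid JSON and the caller should fall back.
func parseLLMVerdict(raw string) (bool, bool) {
   var r matchResult
   if err := json.Unmarshal([]byte(raw), &r); err != nil {
      return false, false
   }
   return r.Matches, true
}

// fuzzyFallback: traditional tactics, e.g., ignore case and accept
// a partial match in either direction.
func fuzzyFallback(input, real string) bool {
   a, b := strings.ToLower(strings.TrimSpace(input)), strings.ToLower(real)
   return a != "" && (strings.Contains(b, a) || strings.Contains(a, b))
}

func main() {
   verdict, ok := parseLLMVerdict(`{"matches": true}`)
   if !ok {
      verdict = fuzzyFallback("Accountant", "Accountant at FirstFinancial")
   }
   fmt.Println(verdict)
}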

Worse, you've added a security hole. An important principle with LLMs is that whoever controls the input effectively has access to any possible output, and to any system connected to the model. Prompt injection. If the input is unrestricted, you could extend the prompt. Even if the input is limited to, say, 20 characters, you can probably still hack it by entering a special string of characters to break the model. In this case, you'd have a special string that always makes it return "true", no matter the question or answer.

Essentially, it's a bad idea all around. This is just a simple case, but it has many parallels in real software design. The way I sum it up is simple:

  • Functions return expected output for expected input.
  • LLMs return unpredictable output.

That isn't to say LLMs are useless. They have their uses, but the fundamental principles behind them must be respected.

When you add a human behind LLMs, the value becomes much more apparent. Treat it like a horse; without a rider, it could go anywhere. With a rider, you can go fast towards a point. With an expert rider, you can go very fast.

So here's a strong use case: Support. It's easy to explain why this use case is so strong.

  1. You have a human behind the LLM. It's the customer.
  2. They are using the model the same way the average user is getting help from ChatGPT for arbitrary information.

There's no question that some people want to "talk to a human" simply because they don't want to dig through your knowledgebase.

For example, when you're at the store, do you want to look up the store website and search for an item to find the right aisle? Or do you want to ask the nearest employee? Most people are conditioned towards the latter. It's just intuitive.

That's a support case. Now imagine if there were devices around the store where you could push a button and ask where something is--this is very possible with LLMs. Customers should appreciate the fast help. If the AI fails or the customer otherwise gets an incorrect answer (which shouldn't be too common), they can fall back to an employee. It's not the end of the world. It's not something they can break or take advantage of. They should already have the expectation that the "computer" might be wrong.

Support sites have tried for a long time, even before LLMs were popular, to guide the customer towards self-help. LLMs make self-help all the more accessible. ChatGPT is basically a support site. The front page has the header, "What can I help with?" It's fine-tuned to act like a support agent, just without a specific domain.

The key here is that you're mixing LLMs with a human, and the human doesn't need to be your employee. So long as you keep the principles behind LLMs and prompts in mind, you can make some neat systems that your customers will really like.


[Photo of me in Venice, Italy]

Hey there! I'm Mukunda Johnson, a seasoned self-taught developer. None of what I know today came from a university or CS class. Programming is just something I've always enjoyed.

Oddly enough, my interests are pretty bizarre to my family. I was home-schooled, and my family's trade is construction work; my youth involved a lot of that. I've built two houses from the ground up, and I've been living in the second one for the past several years.

Despite that disconnect, I've spent nearly my entire life toying with computers. I have an extensive history of fun projects. I say self-taught, but I wouldn't discredit all of the amazing people in the developer community who have contributed to my knowledge over the last 25 years.

For my professional life in tech, I've worked with many clients, from individuals to small businesses to enterprises; a lot of remote work recently, with the last role being with Crossover. I've grown very competent with a broad range of technologies. I enjoy working with clients to reach practical solutions, and they usually appreciate the thorough and proactive approach I take to consulting.

If you're curious about my name's origin, it's inspired by ISKCON culture, a branch of Hinduism that sprouted in New York in the '60s. The translation of Mukunda is "giver of liberation," and my middle name is Das, which indicates I'm the servant of the giver of liberation (God). I'm very open-minded and avoid religious comparisons or conversation for the most part, but some core values of ISKCON are vegetarianism, sobriety, and ethical living.

For fun, if I'm not working on some odd project like this landing page, I may be playing World of Warcraft. I enjoy raid-leading and performing among the top 0.5% of players worldwide. It helps keep the brain refreshed. Most of the friends I relate with have been "online," and that trend still continues. Other things I enjoy are writing, travel (when money and inspiration permit), and keeping fit. I've made it more of a priority recently to stay healthy.

A handful of neat endeavors of mine. Much of my professional work is proprietary and/or can't be shared, so these are mostly personal projects.

#golang #typescript #react
2025
A fun collaborative canvas with infinite resolution. Not finished yet.
#golang #k8s #typescript #react #nestjs #chrome
2024
A SaaS application. Golang container backend. React/Typescript client and Chrome extension. NestJS SaaS/infrastructure management backend. Still growing.
#golang #typescript #react
2023
An anonymous chat server. It's a rite of passage for a programmer to write a chat server.
#csharp
2022
A handy personal tool to track time spent on tasks, for charting in a CSV later. I wrote this when I needed to better manage my time in a flexible role and manage SLAs; also to practice C#.
#python #openvpn
2021
Honestly, I don't remember much about this. I wanted to simplify creating OpenVPN profiles, and OpenSSL is a very deep rabbit hole. Here's a blog article.
#python #email
2021
A tool I made to simplify reproducing issues with email networking. A smtpyfile contains delivery parameters and email content, basically a test case for your engineering team.
#javascript #glsl #html
2020
A WebGL application I made to demonstrate expertise in web development while also showing my hobbyist projects. It uses no libraries and is written from scratch.
#javascript
2020
An implementation of Conway's Game of Life.
#sourcemod
2014
A Tetris game that runs inside of Counter-Strike and other Source games. Featured on Kotaku.
#sourcemod
2013
A Mario game that runs inside of Counter-Strike and other Source games. Featured on PC Gamer. Extremely cool how this works internally: a completely server-side game-within-a-game, hosted by an engine that had no intention of supporting such a thing. Smooth side-scrolling and all!
#assembly #nes #c
2009
A ridiculously fun project that mixes PCM audio via carefully crafted code. The CPU cycles were hand-counted to time the output of each sample. The sequencer also supports other NES audio channels and extension chips.
#assembly #snes
2009
Programming the SNES by yourself is not for the faint of heart. It was no wonder that the active developer community for this console could be counted on one hand. This was a fun project, complete with audio support from my snesmod library. Music is from various friends in #mod_shrine on EsperNet. This game is published via the Super 4 in 1 Multicart.
#assembly #snes #c++
2009
A premium SNES audio library that supports streaming audio from the SNES processor to the SPC coprocessor while playing rich Impulse Tracker music. Only a few commercial SNES games like Star Ocean have that functionality.
#c #gba
2008
A fun GameBoy® Advance game.
#arm-assembly #gba #nds
2008
A comprehensive audio engine for the GameBoy® Advance and Nintendo DS. It supports several tracker music formats and software mixing. It can extend the Nintendo DS's 16 audio channels with additional software channels. Written entirely in ARM assembly.

You can visit my old projects page that contains some other fun things. My Hobbyist Portfolio also shows many of my old projects.

Have a virtual business card. 🤝

[QR code for mukunda.com]
Development • Consulting • Freelancing
Mukunda Johnson
Software Engineer

Resume and references are available on request only.

Find me on: LinkedIn | Twitter/X | GitHub