[image: a number]

It sounds too good to be true, right? Gone are the days of needing a database or a SaaS just to store a number!

Let's start with what's already done for us. I use NGINX, and NGINX already collects all the information needed to generate a visitor counter: it keeps an access log, and that log records IP addresses. If we scromble this a bit, filter it down to unique entries, and count the lines, we should have a fairly OK representation of visitors for that day! Here is a one-liner that does this for yesterday's traffic:

sudo cat /var/log/nginx/access.log.1 | cut -d" " -f 1 | sort -u | wc -l

Now, this is only good for one day of log data. By default, NGINX's logs are rotated daily (by logrotate) and kept for 14 days, so each day gets its own file: the current day is access.log, yesterday is access.log.1... and so on.
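On Debian-flavoured systems this rotation is handled by logrotate; assuming the stock package layout, you can peek at the exact policy with:

cat /etc/logrotate.d/nginx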

[image: people working hard on the computer]

Now to preserve this data, we need to store a number, and the best way to store a number on a server is to store it in a file! So let's do just that!

Below is the first of two scripts that together create a wacky system for storing your visitor counter on disk! I called it /home/MYUSERNAME/visitor_counter/fbbtbot.mjs.

#!/usr/bin/env node

import { readFileSync, writeFileSync } from 'node:fs';
import { exec } from 'node:child_process';

const fbbtbot_file = '/home/MYUSERNAME/visitor_counter/from_big_bang_to_beginning_of_today';

// Count unique IPs in yesterday's (rotated) access log.
exec('cat /var/log/nginx/access.log.1 | cut -d" " -f 1 | sort -u | wc -l', (err, stdout, stderr) => {
    if (err) {
        console.error(`really bad things happened: ${err}`);
        return;
    }

    const visitors_yesterday = parseInt(stdout, 10);

    // Add yesterday's count to the running total kept on disk.
    const visitors_cum_string = readFileSync(fbbtbot_file, 'utf8');
    const visitors_cum = parseInt(visitors_cum_string, 10);

    writeFileSync(fbbtbot_file, `${visitors_yesterday + visitors_cum}`);

    // Append a timestamp so we can tell when the script last ran.
    writeFileSync('/home/MYUSERNAME/visitor_counter/runs.log', `fbbtbot ${new Date().toISOString()}\n`, {flag: 'a+'});
});

Now, since it's not a very smart piece of code, and laziness sets in really quickly in projects like these, make sure the file it keeps the running total in exists before the script runs for the first time (otherwise the readFileSync call will throw).
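If the visitor_counter directory itself doesn't exist yet, create it first:

mkdir -p /home/MYUSERNAME/visitor_counter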

echo 0 > /home/MYUSERNAME/visitor_counter/from_big_bang_to_beginning_of_today

Now we only have half the puzzle solved! The other half is this second script (which I decided to call /home/MYUSERNAME/visitor_counter/freshcount.mjs):

#!/usr/bin/env node

import { readFileSync, writeFileSync } from 'node:fs';
import { exec } from 'node:child_process';

const fbbtbot_file = '/home/MYUSERNAME/visitor_counter/from_big_bang_to_beginning_of_today';
const fresh_file = '/home/MYUSERNAME/visitor_counter/from_big_bang';

// Count unique IPs in today's (current) access log.
exec('cat /var/log/nginx/access.log | cut -d" " -f 1 | sort -u | wc -l', (err, stdout, stderr) => {
    if (err) {
        console.error(`really bad things happened: ${err}`);
        return;
    }

    const visitors_today = parseInt(stdout, 10);

    // Add today's count to the all-time total, without touching the total itself.
    const visitors_cum_string = readFileSync(fbbtbot_file, 'utf8');
    const visitors_cum = parseInt(visitors_cum_string, 10);

    writeFileSync(fresh_file, `${visitors_today + visitors_cum}`);

    // Append a timestamp so we can tell when the script last ran.
    writeFileSync('/home/MYUSERNAME/visitor_counter/runs.log', `freshcount ${new Date().toISOString()}\n`, {flag: 'a+'});
});

You should now be able to run both of these scripts (as root, no less) and see that they do something! πŸŽŠπŸŽ‰
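If you just created the scripts they won't be executable yet, so mark them as such first (assuming they both live in the visitor_counter directory):

chmod +x /home/MYUSERNAME/visitor_counter/*.mjs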

sudo ./fbbtbot.mjs
sudo ./freshcount.mjs

[image: wizard at a computer]

Next up is making sure these scripts run as root on cron timers! I decided that I wanted the freshest visitor counter, so I run the freshcount script every minute, and the fbbtbot script first thing in the morning, one minute after the day starts (this is to make sure the NGINX logrotation has a hot minute to finish).

sudo crontab -e
1 0 * * * /home/MYUSERNAME/visitor_counter/fbbtbot.mjs
* * * * * /home/MYUSERNAME/visitor_counter/freshcount.mjs
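One gotcha: cron runs with a minimal PATH, so the /usr/bin/env node shebang can fail if node lives somewhere cron doesn't look. If your counter mysteriously never updates, a variant that names the interpreter explicitly should do it (assuming node is at /usr/bin/node; check with which node):

1 0 * * * /usr/bin/node /home/MYUSERNAME/visitor_counter/fbbtbot.mjs
* * * * * /usr/bin/node /home/MYUSERNAME/visitor_counter/freshcount.mjs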

Incredible, right?! Now you should be able to see some files with some numbers in them, and the file named from_big_bang is your up-to-the-minute, semi-accurate visitor counter!

Now what?!

Well, now we have to expose this visitor counter through NGINX to the rest of the world (as well as to our own client-side scripts)! To do that, I used the following entry in my NGINX config file (/etc/nginx/sites-available/anyfilenamegoeshere):

    location /visitor-counter {
        default_type text/plain;
        alias /home/MYUSERNAME/visitor_counter/from_big_bang;
    }
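Worth double-checking: the NGINX worker user needs read access to that file, and traversal permission on every directory above it. Assuming the workers run as www-data (the Debian/Ubuntu default), you can verify with:

sudo -u www-data cat /home/MYUSERNAME/visitor_counter/from_big_bang

After an nginx reload, requesting /visitor-counter should hand back the bare number.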

[image: visual depiction of nginx configuration files]

Tada!


We're getting really close to the end goal now! Just a little bit of HTML and JavaScript and we're there! Hooray! πŸ‘πŸ‘πŸ‘πŸ‘

<html>
    <body>
          ... lots of other stuff...

          <footer> <!-- Remember to use semantic elements! -->
              <div class="visitor-counter">
                  <p>Visitors: <span></span></p>
              </div>
          </footer>
    </body>
</html>

and the JS snippet to go with it looks like this:

(async () => {
    // Fetch the plain-text counter and drop it into the placeholder span.
    const visitorCount = await (await fetch('/visitor-counter')).text();
    document.querySelector('.visitor-counter span').textContent = visitorCount;
})();

And now it should all work! πŸ‘


Wacky words by

Tegaki

[image: anonymous guy in front of a laptop]

β€œI want to contribute code to this project, but I want to keep those contributions separate from my IRL identity. But git is big and complicated, and if something is misconfigured the risk is huge (I'll mix up my identities!), and now I don't really feel like thinking about it anymore... those contributions weren't that important anyways...”

Does this sound like you? It sure did sound like me a few years back! Nowadays I can at least say I've arrived close enough to git zen and git mindfulness that I no longer let the fear of β€œmessing up my git/ssh configuration” get in the way of contributing!

My ideal solution would be to quickly clone any repository as any identity on any machine, with minimal manual intervention in the process, and to never have to worry about leaking information from the wrong identity when doing a git commit. That's a lot of moving parts in a potentially messy setup, and my current setup, while not perfect, at least gives me peace of mind on the most important aspect (in my opinion): not mixing my identities.

All you need to think about is scoped to the following 3 configuration files.

  • .gitconfig (in $HOME)
  • .git/config (per repo)
  • .ssh/config (in $HOME)

Each one will be discussed in its own section.

$HOME/.gitconfig

This is your main git configuration: the one you manipulate with --global, and the one git falls back on when nothing more specific tells it what to do.

$ git config --global user.name "John Doe"
$ git config --global user.email "johndoe@example.com"

What I do here is actually not set any identity in the global config! I set my values to empty strings (to be overridden at a per repository level). This gives me a fail-safe system, where a failure to specify will result in not associating with any identity.

$ git config --global user.name ""
$ git config --global user.email ""
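After doing that, the [user] section of ~/.gitconfig should look something like this:

[user]
    name =
    email =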

In fact, it's probably worth taking a look at what's in your .gitconfig right now!

cat ~/.gitconfig

Just scan it with your eyes! It's probably not that long, and if you see anything that's tied to any of your identities... remove it! 😎

.git/config

This is the place we want to be! The closest your git config can get to the code it relates to.

Now, I'm a lazy guy; I haven't made a fancy script or anything to set my per-repo configuration. Currently I rely solely on my shell's history (ctrl+r) to fetch the relevant commands (the same as the ones above, just without the --global flag). It works great!

And this means that, as far as git is concerned, there shouldn't be any way for identity information to cross repo boundaries... great!

$HOME/.ssh/config

Code contributions are associated not only with the version control system you happen to use, but also with the channel of communication (and, if you want to be really paranoid, the IP address your commits originate from). I don't usually take my attribution worries to the network level, but you might, so it's worth mentioning.

SSH can also be a potential can of worms in terms of configuration, but the way I keep things in order here is to generate separate SSH keys for each of my identities, and tell git to use a specific SSH key at the time of the initial git clone. This can be done very cleanly through aliasing in the SSH config.

Let's say I just created an account on GitHub, I call this identity alice, and I'd like to generate a new SSH key to use with this new, separate account.

$ ssh-keygen -f ~/.ssh/alice

Then, after uploading the public half (~/.ssh/alice.pub) to the account's 'SSH keys' section in the GitHub web UI, I create a section in my .ssh/config:

Host alice.github.com
    User git
    HostName github.com
    IdentityFile ~/.ssh/alice

NOTE: The alias can be anything, and though it might look like a subdomain here, it's not. I just like to keep the original domain in the alias for simplicity, and to have a convention that's easy to remember. When I want to SSH into my most frequently used machines, I of course make those aliases super short to support my laziness! ;)
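Before cloning anything, you can check that the alias resolves and presents the right key; GitHub should answer with a greeting naming the alice account:

$ ssh -T alice.github.com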

Now when I want to clone a repository, for example mastodon, I change the clone URL (the SSH URL in the GitHub web UI) to match my alias.

$ git clone git@alice.github.com:mastodon/mastodon.git

Then I do the (unfortunately) extra step of deliberately associating the repo with the other alice handles (email and commit display name).

$ git config user.name "Alice"
$ git config user.email "alice@example.com"
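If you ever want to double-check which identity a repo will stamp onto commits, query the effective config from inside it:

$ git config user.name
$ git config user.email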

I only have to do this setup once per identity-resource pair, which so far works out fairly well. I then synchronize the .ssh/config file between the machines I own (using something like rsync, as sketched below), but I make sure to generate new SSH keys for each identity on each machine (so I can revoke a single one); the configuration still works across all machines as long as I keep the name of the key file the same! Nice!
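A minimal sketch of that sync, with laptop2 standing in as a hypothetical name for one of your other machines:

$ rsync ~/.ssh/config laptop2:~/.ssh/config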

That's it! You now have a fairly simple system for managing multiple identities for code development! πŸŽ‰


Wacky words by

Tegaki

Hello there! I thought I would try to summarize what I did to get this very instance of writefreely up and running on my server host!

Be warned: I don't consider this a very simple guide to follow, and it's also very opinionated, since it reflects how I currently like to set up my hosted services.

This is also a living document, as I will try my best to make changes if you or someone else comments that something is unclear! Commenting happens via the Fediverse (search for the URL of this article on your favourite federated / ActivityPub platform! πŸ™Œ).

Setup

First I should mention that I have a particular way of setting up my servers: I like to containerize as much as possible (with a few exceptions). This way I can cheaply experiment and host as many things as I want on a single VPS, without any piece of software stepping on the toes of another!

I also like to have my data close to the consumer of that data, so when I pull in a repository of software, for example shimmie2 for my image booru (hosted at booru.drawsdraws.com) or mastodon (hosted at mstdn.drawsdraws.com), it lives alongside a data/ directory, as shown:

.
|-- booru
|   |-- data
|   `-- shimmie2
|-- mastodon
|   |-- data
|   `-- mastodon
`-- writefreely
    |-- data
    `-- writefreely

The data/ directory then becomes where I mount my docker volumes. Here is my modified writefreely docker-compose.yml (if you want, you can manually diff this file against the official one):

version: "3"

networks:
  external_writefreely:
  internal_writefreely:
    internal: true

services:
  writefreely-web:
    container_name: "writefreely-web"
    build: .
    # image: "writeas/writefreely:latest"

    volumes:
      - ../data/web-keys:/go/keys
      - type: bind
        source: ../data/config.ini
        target: /go/config.ini

    networks:
      - "internal_writefreely"
      - "external_writefreely"

    ports:
      - "8080:8080"

    depends_on:
      - "writefreely-db"

    restart: unless-stopped

  writefreely-db:
    container_name: "writefreely-db"
    image: "mariadb:latest"

    volumes:
      - "../data/db:/var/lib/mysql/data"

    networks:
      - "internal_writefreely"

    environment:
      - MYSQL_DATABASE=writefreely
      - MYSQL_ROOT_PASSWORD=CHANGEME
    restart: unless-stopped

The most notable change is that all the volumes (where appropriate) have been prefixed with ../data/, so that data stored by the different services is neatly contained within the data/ directory! πŸ™Œ

NOTE: I also commented out the image property and chose to build the docker image from source instead. That's not necessary for following along with this write-up, so I'll leave out the specifics.

Due to the way docker bind mounts work, you'll have to create ../data/config.ini as an empty file in order for docker to mount it.

touch ../data/config.ini

Next, set the ownership of the files under data/ to match the user id of the owning user inside the relevant docker container. This is important when hosting docker volumes on a shared filesystem like this.

You can check the current owner/permissions of any file with:

ls -la ../data/config.ini

To set the correct ownerships for what these docker containers expect, run the following:

sudo chown -R 2:2 ../data/config.ini
sudo chown -R 2:2 ../data/web-keys

If web-keys doesn't exist on your filesystem yet, create an empty directory for it as well. It would normally get created automatically when running the container via docker-compose, but if you're following this guide verbatim from top to bottom it might not exist yet, so run the following mkdir before trying to change its ownership as shown above.

mkdir ../data/web-keys

NOTE: The user id 2 will not make sense for your host operating system! Don't worry if it looks like it belongs to some completely unrelated user on the host. Files are mounted verbatim into the docker container, and this includes their ownership and permissions.
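If you want to confirm for yourself which uid the web container runs as, one quick check (assuming the image sets its user via a USER directive and ships the usual id utility) is to override the entrypoint:

docker-compose run --entrypoint id writefreely-web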

Now you have some options. You could run the interactive config generator by starting the container with its entrypoint set to /bin/sh and running commands manually (such as cmd/writefreely/writefreely config start), but I think I will just post my (masked) version of the ../data/config.ini file, as that seems to be the simplest for now :)

[server]
hidden_host          =
port                 = 8080
bind                 = 0.0.0.0
tls_cert_path        =
tls_key_path         =
autocert             = false
templates_parent_dir =
static_parent_dir    =
pages_parent_dir     =
keys_parent_dir      =
hash_seed            =
gopher_port          = 0

[database]
type     = mysql
filename =
username = writefreely
password = CHANGEME
database = writefreely
host     = writefreely-db
port     = 3306
tls      = false

[app]
site_name             = write
site_description      =
host                  = https://write.drawsdraws.com
theme                 = write
editor                =
disable_js            = false
webfonts              = true
landing               =
simple_nav            = false
wf_modesty            = false
chorus                = false
forest                = false
disable_drafts        = false
single_user           = false
open_registration     = true
open_deletion         = false
min_username_len      = 3
max_blogs             = 1
federation            = true
public_stats          = true
monetization          = false
notes_only            = false
private               = false
local_timeline        = false
user_invites          =
default_visibility    = unlisted
update_checks         = false
disable_password_auth = false

[oauth.slack]
client_id          =
client_secret      =
team_id            =
callback_proxy     =
callback_proxy_api =

[oauth.writeas]
client_id          =
client_secret      =
auth_location      =
token_location     =
inspect_location   =
callback_proxy     =
callback_proxy_api =

[oauth.gitlab]
client_id          =
client_secret      =
host               =
display_name       =
callback_proxy     =
callback_proxy_api =

[oauth.gitea]
client_id          =
client_secret      =
host               =
display_name       =
callback_proxy     =
callback_proxy_api =

[oauth.generic]
client_id          =
client_secret      =
host               =
display_name       =
callback_proxy     =
callback_proxy_api =
token_endpoint     =
inspect_endpoint   =
auth_endpoint      =
scope              =
allow_disconnect   = false
map_user_id        =
map_username       =
map_display_name   =
map_email          =

The only thing that should really need to be different for your instance would be the host variable under [app].

Next, ensure the database exists, and that it has the correct user with the correct password (as described in the configuration above). The mysql client needs a running server to talk to, so bring the db container up in the background first, then exec a client shell inside it.

docker-compose up -d writefreely-db
docker-compose exec writefreely-db mysql -u root -p
<enter MYSQL_ROOT_PASSWORD from docker-compose.yml>
# CREATE USER 'writefreely'@'%' IDENTIFIED BY 'CHANGEME';
# GRANT ALL PRIVILEGES ON *.* TO 'writefreely'@'%';
# FLUSH PRIVILEGES;

Now, initialize the database.

docker-compose run --entrypoint /bin/sh writefreely-web
cmd/writefreely/writefreely db init

Then, generate the keys.

docker-compose run --entrypoint /bin/sh writefreely-web
cmd/writefreely/writefreely keys generate

Next, from inside the writefreely source checkout, generate the CSS files (I don't know why this is not part of the β€œcommon execution flow” of the application, but it should be done regardless!):

cd less/
./install-less.sh
make

I use NGINX as the entry point to all services running on my host(s), so I followed the reverse proxy guidelines when generating my config.ini above. My NGINX config section is actually identical to the one supplied by the official guide, with the sole exception of pointing at my static content:

...
location ~ ^/(css|img|js|fonts)/ {
    root /MY/PATH/TO/writefreely/writefreely/static;
    # Optionally cache these files in the browser:
    expires 1d;
}
...
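A quick smoke test for the static files (assuming the default write theme, whose stylesheet should be served under /css/write.css):

curl -sI http://localhost/css/write.css | head -n 1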

Lastly, you should be able to bring up all the relevant containers with a single command:

docker-compose up
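Once you're happy that everything boots cleanly, you'll probably want it running detached in the background:

docker-compose up -d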

If anything went wrong, leave a comment on the fediverse! (or search the official forums)


Wacky words by

Tegaki