Coding for this blog
tldr: simple bash scripts for automating things for this blog
So, I have been paying for another personal website for years and always thought of writing a blog. What happened was that after setting up Jupyter Notebook rendering for the blog and everything, I lost interest amid all those details.
All I want when I write is a single-page, distraction-free environment for me to think and pen down without getting bogged down by the details of website interactions all the time.
Granted, for some, all these interactions with the product are part of the journey, like signing personal checks or crumpling a dozen paper drafts before finishing an article. But not for me.
And WriteFreely is precisely that: a simple platform focused on getting my thoughts out without all the distractions.
So, it seems simple; why more coding then?
I agree. The writing part here is relaxed, but after writing, the steps of saving, editing, and changing link names on the website all bore me.
I prefer keeping the files of all articles on my local device, with a simple one-line command to create or edit posts on the website. The added benefit is that it takes just one more script to push the same files to a new website, even my own, if the current one closes shop, since WriteFreely is open source anyway. More generally, having the files on my system lets me track changes with git, post-process the content automatically any way I like before uploading, and feed it into a local LLM to answer and tinker with things occupying my mind.
Of course, we could go one step further and monitor the files in this folder with inotify to push updates automatically, but let's not do that, given the server load from unnecessary saves.
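If I ever change my mind, a minimal sketch of that watcher with inotify-tools might look like this; the ./posts directory and the filename-equals-slug convention here are my assumptions, not something the scripts below require:
# Sketch only, not in use: re-upload a post whenever its file is saved.
# Assumes posts live in ./posts and each filename (minus extension) is its slug.
inotifywait -m -e close_write --format '%f' ./posts | while read -r changed; do
    bash ./update.sh "./posts/$changed" "${changed%.*}"
done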
As I was saying in diary, I got access to Adept's Visual AI model, which can automate some of these steps directly on the website, but it felt like overkill, and they may ask me to pay again, so it's back to good old bash scripts.
So, three scripts are doing the lazy work. Credits to my coding buddy GPT-4, ofc:
Create a post
Update a post
Add images
Create a post
Well, here you go; it's pretty self-explanatory:
#!/bin/bash
# Publish a local file as a new post on the blog.
if [ $# -ne 2 ]; then
    echo "Usage: $0 <body_file_path> <slug>"
    exit 1
fi

body_file="$1"
slug="$2"

if [ ! -f "$body_file" ]; then
    echo "Error: The file to post '$body_file' does not exist."
    exit 1
fi

body=$(<"$body_file")

# Build the JSON payload the WriteFreely API expects
json_payload=$(jq -n \
    --arg body "$body" \
    --arg slug "$slug" \
    '{body: $body, title: "", slug: $slug, font: "norm", lang: "en", crosspost: []}')

# Change the link to your blog and fill in your account cookie below
response=$(curl "https://rant.li/api/collections/boson/posts" \
    -H 'authority: rant.li' \
    -H 'accept: */*' \
    -H 'accept-language: en-US,en;q=0.9,en-GB;q=0.8' \
    -H 'content-type: application/json' \
    -H 'origin: https://rant.li' \
    -H 'cookie: ...' \
    -H 'user-agent: Mozilla/5.0 (Linux; Android 11; Pixel 5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.91 Mobile Safari/537.36' \
    -X POST \
    --data-raw "$json_payload" \
    --compressed)

echo "$response" | jq '.code, .data.url, .data.id, .data.created, .data.updated'

postid=$(echo "$response" | jq -r '.data.id')
slug=$(echo "$response" | jq -r '.data.slug')

# Record the mapping for the update script:
# slug is just the URL name; postid is WriteFreely's internal id for the post
echo "$slug $postid" >> mapping
Update a post
#!/bin/bash
# Update an existing post: look up its id in the mapping file, then push the new body.
if [ $# -ne 2 ]; then
    echo "Usage: $0 <body_file_path> <slug>"
    exit 1
fi

body_file="$1"
slug="$2"

# The mapping file (written by the publish script) pairs each slug with its post id
postid=$(grep "^$slug " mapping | awk '{print $2}')
if [ -z "$postid" ]; then
    echo "Slug not found in mapping file: $slug"
    exit 1
fi

if [ ! -f "$body_file" ]; then
    echo "Error: The body file '$body_file' does not exist."
    exit 1
fi

body=$(<"$body_file")
json_payload=$(jq -n \
    --arg body "$body" \
    '{body: $body, title: "", font: "norm"}')

# Change the link to your blog and fill in your account cookie below
response=$(curl "https://rant.li/api/collections/boson/posts/$postid" \
    -H 'authority: rant.li' \
    -H 'accept: */*' \
    -H 'accept-language: en-US,en;q=0.9,en-GB;q=0.8' \
    -H 'content-type: application/json' \
    -H 'origin: https://rant.li' \
    -H 'cookie: ...' \
    -H 'user-agent: Mozilla/5.0 (Linux; Android 11; Pixel 5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.91 Mobile Safari/537.36' \
    -X POST \
    --data-raw "$json_payload" \
    --compressed)

echo "$response" | jq '.code, .data.slug, .data.id, .data.created, .data.updated'
Add images
Of course, you could use Imgur and drop Imgur links into the blog, but again, I don't want to depend on separate websites and then keep track of deletions and updates of exact image URLs, so it's just Vercel now.
I add all images to one folder and sync it with Vercel to get image links.
I want any progressive change to an image, as you can see in the diary updates, to land in the same image collage for that month without me fixing image links, and this setup is perfect for that.
Here is the script for creating a collage: it takes all the image files I want, in order, with the final output filename as the last argument. A git commit then lets Vercel sync the changes.
#!/bin/bash
# Resize all input images to a common height, then join them side by side.
collage_height=300
temp_dir=$(mktemp -d)
declare -a processed_images

# The last argument is the output file; everything before it is an input image
collage_output="${@: -1}"

for input_image in "${@:1:$#-1}"; do
    # Scale the width to keep the aspect ratio at the target height
    original_dimensions=$(identify -format "%w %h" "$input_image")
    read -r original_width original_height <<< "$original_dimensions"
    new_width=$(echo "$collage_height * $original_width / $original_height" | bc)

    output_image="$temp_dir/$(basename "$input_image")"
    convert "$input_image" -resize "${new_width}x$collage_height^" -gravity center -extent "${new_width}x$collage_height" "$output_image"
    processed_images+=("$output_image")
done

# Append the resized images horizontally into one collage
convert +append "${processed_images[@]}" "$collage_output"
rm -r "$temp_dir"
echo "Collage created: $collage_output"
Finally, I alias the commands to the shorter wfp, wfu, and wfi:
alias wfp="bash <path>/publish.sh"
alias wfu="bash <path>/update.sh"
alias wfi="bash <path>/create_collage.sh"
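These go in the shell rc file (e.g. ~/.bashrc), so pushing an edit is just, say (same hypothetical names as above):
wfu drafts/first-post.md first-post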
That's it! I can focus entirely on writing now, just as I wrote this post and updated what you see here with those scripts.