<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[WIIIN0DE]]></title><description><![CDATA[Web Developer]]></description><link>https://blog.wellosoft.net</link><generator>RSS for Node</generator><lastBuildDate>Tue, 21 Apr 2026 00:43:18 GMT</lastBuildDate><atom:link href="https://blog.wellosoft.net/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[ForwardDomain.net has been acquired]]></title><description><![CDATA[I’m excited to announce that forwarddomain.net has been acquired. The hosting service and its project will remain free and open source. It’s just that I’m no longer owning the project.
Forward Domain has been this little sister project of a larger pa...]]></description><link>https://blog.wellosoft.net/forwarddomainnet-has-been-acquired</link><guid isPermaLink="true">https://blog.wellosoft.net/forwarddomainnet-has-been-acquired</guid><dc:creator><![CDATA[Wildan M]]></dc:creator><pubDate>Mon, 02 Feb 2026 03:46:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770003495675/2c77b08c-1a3e-466b-af5b-0b79739ab7ca.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I’m excited to announce that <a target="_blank" href="https://forwarddomain.net">forwarddomain.net</a> has been acquired. The hosting service and its project will remain free and open source. It’s just that I no longer own the project.</p>
<p>Forward Domain has been the little sister project of <a target="_blank" href="https://domcloud.co">my cloud hosting</a> solution for over 5 years, and I’m proud that Forward Domain has had significantly lower downtime while being easy to maintain and scale. I look forward to my current and next projects, especially since I’ve moved on to more interesting work, such as being a build engineer on Redox OS.</p>
]]></content:encoded></item><item><title><![CDATA[How to Fix File Dialog in COSMIC]]></title><description><![CDATA[Pop!_OS just released it’s new and stable COSMIC DE, and better yet, it came with ARM image! I can’t help myself but straight away wiping my linux disk into a brand new Pop!_OS.
And everything went smooth! Except it came with broken open file dialog,...]]></description><link>https://blog.wellosoft.net/how-to-fix-file-dialog-in-cosmic</link><guid isPermaLink="true">https://blog.wellosoft.net/how-to-fix-file-dialog-in-cosmic</guid><category><![CDATA[PopOS]]></category><dc:creator><![CDATA[Wildan M]]></dc:creator><pubDate>Sat, 13 Dec 2025 05:53:58 GMT</pubDate><content:encoded><![CDATA[<p>Pop!_OS just released its new and stable <a target="_blank" href="https://system76.com/cosmic">COSMIC DE</a>, and better yet, it came with an <a target="_blank" href="https://blog.system76.com/post/pop-os-letter-from-our-founder#:~:text=New%20Pop%21%5FOS%2024%2E04%20LTS%20for%20ARM%20computers">ARM image</a>! I couldn’t help but immediately wipe my Linux disk for a brand new Pop!_OS.</p>
<p>And everything went smoothly! Except the open file dialog was broken: I couldn’t open any files in any app. There are some issues about it that have since been closed, but I think those are x86_64-specific?</p>
<p>Instead of trying to fix it, I decided to replace the open file dialog; it turns out that’s really doable!</p>
<h2 id="heading-install-your-second-open-file-dialog">Install your second open file dialog</h2>
<p>Open a terminal and install one:</p>
<pre><code class="lang-bash">sudo apt install xdg-desktop-portal-kde
</code></pre>
<p>Here I’m using KDE, but I think you can pick any of the <code>xdg-desktop-portal-*</code> backends.</p>
<h2 id="heading-switch-your-open-file-dialog">Switch your open file dialog</h2>
<p>Here’s the magic: create and open this config file:</p>
<pre><code class="lang-bash">mkdir -p ~/.config/xdg-desktop-portal
vim ~/.config/xdg-desktop-portal/portals.conf
</code></pre>
<p>Edit it like:</p>
<pre><code class="lang-bash">[preferred]
org.freedesktop.impl.portal.FileChooser=kde
</code></pre>
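<p>For what it’s worth, other portal interfaces can be overridden the same way, and you can keep a default backend too (a hedged example; the <code>default</code> key and the <code>cosmic</code> backend name are assumptions based on my setup):</p>
<pre><code class="lang-bash">[preferred]
# keep COSMIC as the default backend for everything else (assumed backend name)
default=cosmic
# but take file dialogs from KDE
org.freedesktop.impl.portal.FileChooser=kde
</code></pre>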
<p>Save it, then restart the XDG portal:</p>
<pre><code class="lang-bash">systemctl restart --user xdg-desktop-portal
</code></pre>
<p>You should have a working open file dialog by now!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765605223448/651ac571-aefe-4e8c-99fc-67505a7a0b78.png" alt class="image--center mx-auto" /></p>
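<p>For reference, the whole fix can be scripted in one go (a sketch: it assumes the KDE backend and a systemd user session; swap <code>kde</code> for whichever portal backend you installed):</p>
<pre><code class="lang-bash">backend=kde  # assumption: xdg-desktop-portal-kde is installed
mkdir -p ~/.config/xdg-desktop-portal
printf '[preferred]\norg.freedesktop.impl.portal.FileChooser=%s\n' "$backend" \
  &gt; ~/.config/xdg-desktop-portal/portals.conf
# restart the user portal service so the preference takes effect
systemctl restart --user xdg-desktop-portal || true
</code></pre>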
]]></content:encoded></item><item><title><![CDATA[Update LLVM from 18 to 21 In Ubuntu 24]]></title><description><![CDATA[Here’s I do it, first install the llvm 21:
wget https://apt.llvm.org/llvm.sh
chmod +x llvm.sh
sudo ./llvm.sh 21

Then install all LLVM tools
sudo apt install llvm-21* clang-21* lld-21* lldb-21*

Then install the alternatives (see the script)
wget htt...]]></description><link>https://blog.wellosoft.net/update-llvm-from-18-to-21-in-ubuntu-24</link><guid isPermaLink="true">https://blog.wellosoft.net/update-llvm-from-18-to-21-in-ubuntu-24</guid><category><![CDATA[Ubuntu 24.04]]></category><dc:creator><![CDATA[Wildan M]]></dc:creator><pubDate>Wed, 03 Sep 2025 16:05:02 GMT</pubDate><content:encoded><![CDATA[<p>Here’s how I do it. First, install LLVM 21:</p>
<pre><code class="lang-bash">wget https://apt.llvm.org/llvm.sh
chmod +x llvm.sh
sudo ./llvm.sh 21
</code></pre>
<p>Then install all the LLVM tools:</p>
<pre><code class="lang-bash">sudo apt install llvm-21* clang-21* lld-21* lldb-21*
</code></pre>
<p>Then register the update-alternatives entries (see <a target="_blank" href="https://github.com/ShangjinTang/dotfiles/blob/05ef87daae29475244c276db5d406b58c52be445/linux/ubuntu/22.04/bin/update-alternatives-clang">the script</a>):</p>
<pre><code class="lang-bash">wget https://raw.githubusercontent.com/ShangjinTang/dotfiles/05ef87daae29475244c276db5d406b58c52be445/linux/ubuntu/22.04/bin/update-alternatives-clang
sudo bash ./update-alternatives-clang
</code></pre>
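<p>Since apt can leave several LLVM versions installed side by side, here’s a quick sketch for spotting the newest one with a version sort (the list of names is simulated; on a real system you could feed it <code>ls /usr/bin/clang-*</code> instead):</p>
<pre><code class="lang-bash"># simulated list of versioned clang binaries (assumption: clang-NN naming)
versions='clang-18
clang-19
clang-21'
# sort -V understands version numbers, so 21 sorts after 18 and 19
newest=$(printf '%s\n' "$versions" | sort -V | tail -n1)
echo "$newest"  # prints clang-21
</code></pre>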
]]></content:encoded></item><item><title><![CDATA[Podman Issue with tar in Pop-OS! 22]]></title><description><![CDATA[I got a problem with tar inside podman
The problem
podman run -v $HOME:/mnt:Z -it --rm debian:trixie bash
root@9d22f8d85816:/# cd /mnt/Documents/
root@9d22f8d85816:/mnt/Documents# apt update
root@9d22f8d85816:/mnt/Documents# apt install wget
root@9d2...]]></description><link>https://blog.wellosoft.net/podman-issue-with-tar-in-pop-os-22</link><guid isPermaLink="true">https://blog.wellosoft.net/podman-issue-with-tar-in-pop-os-22</guid><category><![CDATA[containers]]></category><category><![CDATA[PopOS]]></category><dc:creator><![CDATA[Wildan M]]></dc:creator><pubDate>Mon, 25 Aug 2025 17:20:21 GMT</pubDate><content:encoded><![CDATA[<p>I ran into a problem with tar inside Podman.</p>
<h3 id="heading-the-problem">The problem</h3>
<pre><code class="lang-bash">podman run -v <span class="hljs-variable">$HOME</span>:/mnt:Z -it --rm debian:trixie bash
root@9d22f8d85816:/<span class="hljs-comment"># cd /mnt/Documents/</span>
root@9d22f8d85816:/mnt/Documents<span class="hljs-comment"># apt update</span>
root@9d22f8d85816:/mnt/Documents<span class="hljs-comment"># apt install wget</span>
root@9d22f8d85816:/mnt/Documents<span class="hljs-comment"># wget https://download.netsurf-browser.org/netsurf/releases/source-full/netsurf-all-3.11.tar.gz</span>
root@9d22f8d85816:/mnt/Documents<span class="hljs-comment"># tar -xf netsurf-all-3.11.tar.gz</span>
tar: netsurf-all-3.11/netsurf/frontends/beos/res/credits.html: Cannot change mode to rwxrwxrwx: Operation not permitted
tar: netsurf-all-3.11/netsurf/frontends/beos/res/licence.html: Cannot change mode to rwxrwxrwx: Operation not permitted
tar: netsurf-all-3.11/netsurf/frontends/beos/res/welcome.html: Cannot change mode to rwxrwxrwx: Operation not permitted
tar: netsurf-all-3.11/netsurf/frontends/framebuffer/res/Messages: Cannot change mode to rwxrwxrwx: Operation not permitted
...
</code></pre>
<p>These errors come from symlinks; Podman is unable to change their file mode permissions.</p>
<h3 id="heading-the-solution">The solution</h3>
<p>It was pointed out by this comment:</p>
<p><a target="_blank" href="https://github.com/microsoft/vscode-remote-release/issues/11042#issuecomment-3044713731">https://github.com/microsoft/vscode-remote-release/issues/11042#issuecomment-3044713731</a></p>
<p>Let's check the <code>crun</code> version on the host system:</p>
<pre><code class="lang-sh">pop-os@pop-os:~$ sudo -i
root@pop-os:~<span class="hljs-comment"># crun --version</span>
crun version 0.17
</code></pre>
<p>Yep, it's too old. We need to update it manually from <a target="_blank" href="https://github.com/containers/crun/releases/tag/1.23.1">https://github.com/containers/crun/releases/tag/1.23.1</a>:</p>
<pre><code class="lang-sh">root@pop-os:~<span class="hljs-comment"># wget https://github.com/containers/crun/releases/download/1.23.1/crun-1.23.1-linux-amd64</span>
root@pop-os:~<span class="hljs-comment"># chmod +x crun-1.23.1-linux-amd64</span>
root@pop-os:~<span class="hljs-comment"># ./crun-1.23.1-linux-amd64 --version</span>
crun version 1.23.1
commit: d20b23dba05e822b93b82f2f34fd5dada433e0c2
rundir: /run/crun
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
root@pop-os:~<span class="hljs-comment"># mv /usr/bin/crun /usr/bin/crun.old</span>
root@pop-os:~<span class="hljs-comment"># mv ./crun-1.23.1-linux-amd64 /usr/bin/crun</span>
root@pop-os:~<span class="hljs-comment"># crun --version</span>
crun version 1.23.1
commit: d20b23dba05e822b93b82f2f34fd5dada433e0c2
rundir: /run/crun
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
root@pop-os:~<span class="hljs-comment"># exit</span>
<span class="hljs-built_in">logout</span>
</code></pre>
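<p>To script the “is my crun new enough?” check, a version sort works (a sketch; I’m using 1.23.1 as the threshold simply because that’s the release that fixed it for me):</p>
<pre><code class="lang-bash">required="1.23.1"
current="0.17"  # on a real host: current=$(crun --version | awk 'NR==1{print $3}')
# current is new enough iff the smaller of the two versions is $required
if [ "$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
  echo "crun is new enough"
else
  echo "crun is too old, update it"
fi
</code></pre>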
<p>Now let's try it:</p>
<pre><code class="lang-bash">pop-os@pop-os:~$ podman run -v <span class="hljs-variable">$HOME</span>:/mnt:Z -it --rm debian:trixie bash
root@0ef97237a0c0:/<span class="hljs-comment"># cd /mnt/Documents/</span>
root@0ef97237a0c0:/mnt/Documents<span class="hljs-comment"># tar -xf netsurf-all-3.11.tar.gz</span>
root@0ef97237a0c0:/mnt/Documents<span class="hljs-comment"># ls netsurf-all-3.11/</span>
ChangeLog.md  README.md    libcss  libhubbub  libnsfb    libnslog  libnsutils      libpencil    librufl       libutf8proc     netsurf
Makefile      buildsystem  libdom  libnsbmp   libnsgif    libnspsl  libparserutils  librosprite  libsvgtiny  libwapcaplet  nsgenbind
root@0ef97237a0c0:/mnt/Documents<span class="hljs-comment">#</span>
</code></pre>
<p>It works!</p>
]]></content:encoded></item><item><title><![CDATA[Writing a brand-new OS is almost impossible by now]]></title><description><![CDATA[A few months ago, I was wondering how an OS works, and instead of trying a new one from scratch, I decided to jump in on the existing one, which is Redox OS, to be exact. Redox OS is an OS that embraces microkernels and Rust as its base language, whi...]]></description><link>https://blog.wellosoft.net/writing-a-brand-new-os-is-almost-impossible-by-now</link><guid isPermaLink="true">https://blog.wellosoft.net/writing-a-brand-new-os-is-almost-impossible-by-now</guid><category><![CDATA[low level programming]]></category><category><![CDATA[Rust]]></category><category><![CDATA[operating system]]></category><dc:creator><![CDATA[Wildan M]]></dc:creator><pubDate>Sun, 10 Aug 2025 15:19:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1754836480458/0670d1b1-48ef-4555-8564-1915660af801.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A few months ago, I was wondering how an OS works, and instead of trying a new one from scratch, I decided to jump in on the existing one, which is <a target="_blank" href="https://redox-os.org/">Redox OS</a>, to be exact. Redox OS is an OS that embraces microkernels and <em>Rust</em> as its base language, which are two things that are majorly different from Linux and interesting in their own right.</p>
<p>The kernel is around ~30k LoC; most bootstrapping is handled by the bootloader and drivers. The kernel itself boots in under 1 second most of the time. It’s amazing what the team has accomplished: last year it was dynamic linking, and this month they just finished the Unix socket implementation.</p>
<p>Redox OS is <a target="_blank" href="https://gitlab.redox-os.org/redox-os/website/-/merge_requests/436">10 years old</a>, so surely it should be mature by now?</p>
<p>I think the OS kernel itself is quite mature, but the lack of supported programs hinders me a lot. As of this writing, the only compilers running inside Redox are GCC and Python (via RustPython), and most programs right now are cross-compiled. This saddens me a bit because most of the time I don’t write programs in C, but in more modern languages such as Go, Node.js, PHP, Python, and, of course, Rust.</p>
<p>Why haven’t these compilers been ported yet? That’s why I tried to jump in.</p>
<p>And after a few attempts, I found the problem is harder than I thought:</p>
<h2 id="heading-how-porting-software-works">How Porting Software Works</h2>
<p>Porting is an attempt to make software work under a new OS. To do that, developers use a special method known as <strong>cross-compiling</strong>. Cross-compiling for Redox OS works through its own forks of the <a target="_blank" href="http://gitlab.redox-os.org/redox-os/gcc">GCC</a> and <a target="_blank" href="https://gitlab.redox-os.org/redox-os/rust">Rust</a> compilers, so both can compile software into binaries that work with the Redox kernel.</p>
<p>Why would they have to fork them? Why not just pretend we’re compiling for <em>Linux</em>, since both use ELF binaries?</p>
<p>Theoretically, it’s possible to keep using the upstream GCC or Rust compilers and not change anything, since both ultimately emit machine code, which is architecture-specific rather than OS-specific. But the forks are needed to introduce compiler gates such as <code>__redox__</code> in C and <code>#[cfg(target_os = "redox")]</code> in Rust. If you look at their <a target="_blank" href="https://gitlab.redox-os.org/redox-os/gcc/-/commits/redox-13.2.0">GCC fork changes</a>, the changes are not that large.</p>
<p>The reason for these compiler gates is the difference between Linux and Redox <strong>syscalls</strong>. A syscall is a special instruction available in assembly for sending messages to the kernel, which may return a result; it is traditionally implemented through <strong>interrupts</strong>. The kernel <a target="_blank" href="https://gitlab.redox-os.org/redox-os/kernel/-/blob/master/src/arch/x86_shared/idt.rs?ref_type=heads">prepares interrupts</a> at bootstrap, before it launches any programs.</p>
<p>Syscalls are so important that without them, software can’t even print anything to the terminal. To keep software code from issuing syscalls everywhere, the C language has a standard library called <strong>libc</strong>. Implementing <strong>libc</strong> is the job of each OS, because it defines the <em>contract</em> between C programs and the kernel. To avoid conflicts within <code>libc</code> itself, the C language <a target="_blank" href="https://stackoverflow.com/questions/9376837/difference-between-c-standard-library-and-c-posix-library">has two standards</a>: the <strong>C standard library</strong> and the <strong>C POSIX library</strong>.</p>
<p>The C standard library defines basic things such as printing to stdout and returning an exit code, while the POSIX standard is a superset of it that covers things such as threading, networking, etc. While the C standard is a must, POSIX is not. The POSIX standard exists in <code>libc</code> on Linux and macOS but not on Windows, which is why porting C software to Windows is <strong>a pain</strong> for most people.</p>
<p>In Redox OS, <code>libc</code> is implemented in <a target="_blank" href="https://gitlab.redox-os.org/redox-os/relibc">relibc</a>, which is written in Rust with the help of <a target="_blank" href="https://github.com/mozilla/cbindgen">cbindgen</a> for the C headers. It also contains additional services such as <code>ld.so</code>, the <a target="_blank" href="https://man7.org/linux/man-pages/man8/ld.so.8.html">same concept</a> as the Linux loader for dynamically linked binaries, and some <a target="_blank" href="https://www.inferara.com/en/blog/c-runtime/">C runtime libraries</a>.</p>
<p>Fortunately, Redox is <em>almost</em> POSIX-compliant. It’s <em>almost</em> because it won’t cover everything: parts of the standard are either deprecated or not applicable. And even if Redox <em>could</em> be 100% POSIX compliant, they would still have to patch software with <code>__redox__</code>, because people also use <code>__linux__</code> for things that are not in POSIX, such as parsing <code>/proc</code>, advanced networking features, messing with <code>cgroups</code>, and many more.</p>
<p>For this reason alone, there are about <a target="_blank" href="https://github.com/search?q=repo%3Aredox-os%2Fcookbook%20.patch&amp;type=code">70 patches in the Redox OS Cookbook</a> as of this writing. Most of them are patches for C programs and libraries, because it’s easier to patch them locally than to send the changes upstream.</p>
<p>For software that isn’t written in C, patching works <strong>wildly differently</strong>.</p>
<h2 id="heading-how-redox-os-ports-rust-software">How Redox OS ports Rust Software</h2>
<p>When you look at the Redox OS Cookbook patches, patches for Rust programs are almost non-existent. This is because they have either been <strong>forked</strong> or <strong>merged upstream</strong>. The exact reason is that Rust programs rely on libraries for OS-specific things, so creating a patch file for a Rust program makes little sense: the library sources are downloaded by Cargo.</p>
<p>Cargo has <code>[patch.crates-io]</code> to <a target="_blank" href="https://doc.rust-lang.org/cargo/reference/overriding-dependencies.html">patch dependencies</a>, but it’s hard to create such a patch because it must point to a Git URL or a local path, and the version must match exactly. So Redox resorted to <strong>forking the libraries</strong> that matter to the programs required to make the OS usable.</p>
<p>For example, the <a target="_blank" href="https://github.com/rust-windowing/winit">winit library</a> is important in Rust for handling low-level windowing in a GUI. The Redox OS GUI compositor is <a target="_blank" href="https://gitlab.redox-os.org/redox-os/orbital"><strong>Orbital</strong></a>, and Orbital uses a forked <a target="_blank" href="https://gitlab.redox-os.org/redox-os/winit">winit</a> as of this writing. Initially, they submitted patches to winit to support Redox by adding calls to <a target="_blank" href="https://gitlab.redox-os.org/redox-os/syscall">redox-syscall</a> under <code>#[cfg(target_os = "redox")]</code>, but as of this writing the forked winit uses <a target="_blank" href="https://gitlab.redox-os.org/redox-os/libredox">libredox</a>, and that hasn’t been merged upstream (both libraries exist for different historical reasons).</p>
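<p>For illustration, pointing Cargo at such a fork via <code>[patch.crates-io]</code> looks like this in <code>Cargo.toml</code> (a hedged sketch; in practice you would also pin a revision, which I’ve left out):</p>
<pre><code class="lang-plaintext">[patch.crates-io]
# use the Redox winit fork instead of the crates.io release;
# the patched crate's version must match what the program asks for
winit = { git = "https://gitlab.redox-os.org/redox-os/winit" }
</code></pre>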
<p>What’s <a target="_blank" href="https://gitlab.redox-os.org/redox-os/libredox">libredox</a>? It’s a complement to the <a target="_blank" href="https://github.com/rust-lang/libc">libc</a> crate in Rust, which is an FFI binding to the C standard and POSIX standard libraries. When a program calls Rust’s <code>libc</code>, the call is <a target="_blank" href="https://github.com/rust-lang/rust/blob/c8ca44c98eade864824a3c0a15fbdc1edb7f9dd4/compiler/rustc_target/src/spec/base/redox.rs#L7">dynamically linked</a> against <code>relibc</code> at compile time, so Redox can change intricate <code>libc</code> logic without pushing changes to the <code>libc</code> crate when needed. The <code>libc</code> functions may not cover everything Redox can offer, so <code>libredox</code> exists to complement them.</p>
<p>Even though upstream Rust <a target="_blank" href="https://doc.rust-lang.org/rustc/platform-support/redox.html">can compile most programs for Redox</a>, you still need <a target="_blank" href="https://static.redox-os.org/toolchain/">the forked toolchain</a> for the forked GCC compiler, which is required whenever a Rust library has to link against external libraries such as the <a target="_blank" href="https://docs.rs/openssl/latest/openssl/">openssl crate</a>.</p>
<p>Personally, right now I have a problem porting <a target="_blank" href="https://github.com/tree-sitter/tree-sitter">tree-sitter</a>. It’s a Rust program/library, but it fails to compile because the <a target="_blank" href="https://github.com/al8n/fs4-rs">fs4</a> crate doesn’t support Redox at this time. Looking at the source code, I’d first have to submit patches to <a target="_blank" href="https://github.com/bytecodealliance/rustix">rustix</a> to tell them, “Hey, Redox supports <code>statvfs</code> now!” before submitting patches to <code>fs4</code>.</p>
<p>That’s the journey of actually porting <strong>one</strong> piece of software. If you read <a target="_blank" href="https://redox-os.org/news/">the monthly news</a>, only a handful of ports land each month. It’s not without reason: porting software is a hard challenge when it comes to a brand-new OS.</p>
<h2 id="heading-the-pursuit-of-a-complete-software-compiler">The Pursuit of a Complete Software Compiler</h2>
<p>Porting software is not just a compiler problem; it also means making sure that the software runs properly. There have been a lot of attempts to port important software to Redox, like <a target="_blank" href="https://gitlab.redox-os.org/redox-os/cookbook/-/blob/master/recipes/wip/db/sqlite3/recipe.toml?ref_type=heads">sqlite3</a>, which sometimes reports <code>disk I/O errors</code>, or <a target="_blank" href="https://gitlab.redox-os.org/redox-os/cookbook/-/blob/master/recipes/wip/vm/qemu/recipe.toml">qemu</a>, which was reported to not run properly.</p>
<p>Personally, I have spent time testing the Rust compiler in Redox. At the time of this writing, running <code>rustc</code> emits <a target="_blank" href="https://gitlab.redox-os.org/redox-os/redox/-/issues/1704">a Page Fault</a>. The page fault is suspected to be caused by <code>extern crate</code> not being handled properly, but there can be other reasons.</p>
<p>There’s also a push on the Go compiler, which is <a target="_blank" href="https://gitlab.redox-os.org/redox-os/cookbook/-/merge_requests/572">compiling</a>, but it also emits a page fault, a breakpoint trap, or a panic in relibc. The relibc panic is suspected to come from Go emitting ELF binaries that Redox currently <a target="_blank" href="https://gitlab.redox-os.org/redox-os/relibc/-/merge_requests/684">can’t handle</a>, while the breakpoint trap may be something else that I don’t understand yet.</p>
<p>These two cases show the kind of problem that can happen when a compiler is <a target="_blank" href="https://en.wikipedia.org/wiki/Self-hosting_%28compilers%29">self-hosting</a>. The iterative loop of development in a self-hosting compiler means the compiler code uses the most advanced features the compiler itself supports, also known as <a target="_blank" href="https://en.wikipedia.org/wiki/Eating_your_own_dog_food">dogfooding</a>. But look at it this way: if the compiler itself works on Redox in the future, it most likely means the compiler outputs correct binaries all the time, which is an important thing to have.</p>
<p>I was thinking, “okay, maybe try to avoid compilers that emit binaries”, something like Node.js or Deno. But Node.js uses V8, which, when I tried to <a target="_blank" href="https://gitlab.redox-os.org/redox-os/cookbook/-/merge_requests/584">port it</a>, complained about <a target="_blank" href="https://github.com/abseil/abseil-cpp">abseil-cpp</a> a lot. And guess what? Abseil is <a target="_blank" href="https://abseil.io/">Google’s C++ standard library</a>, it contains a lot of syscalls, and I have a feeling that I should not touch it unless I’m a <a target="_blank" href="https://timesofindia.indiatimes.com/technology/social/googlers-softies-and-other-nicknames-that-top-tech-company-employees-are-known-by/articleshow/113793075.cms">Googler</a>!</p>
<p>When I read <a target="_blank" href="https://gitlab.redox-os.org/redox-os/cookbook">the cookbook README</a>, I noticed “the porting process can take months”, which I now fully believe and understand. And these are just the compiler issues, let alone the libraries and software that depend on them.</p>
<p>Even though the Rust compiler on Redox isn’t working yet, it can cross-compile software for Redox, and it has been able to since day 1, so the Rust community accepts <code>#[cfg(target_os = "redox")]</code> as Redox progresses and matures. Unfortunately, the same is not happening in other communities, like Go, Node.js, or Python, since those compilers don’t support Redox yet.</p>
<p>Even once they do work, it can be complicated to convince compiler maintainers to support another OS, and adoption across their library ecosystems can take years to mature. It’s almost like, if you want the OS to get a lot of adoption, you have to convince a lot of maintainers across the globe that it’s worth it, and maybe actually send patches to speed up the process.</p>
<h2 id="heading-wrapping-up">Wrapping Up</h2>
<p>For these reasons, I’m starting to think that maybe “a brand-new OS is almost impossible by now”, and I’m grateful that Redox started this pursuit 10 years ago, when Rust had just released its 1.0 version and I was in high school, barely knowing any low-level programming at all.</p>
<p>To me, it feels like pursuing a consumer-friendly, fully working OS from scratch is a lifetime goal. And if you want your own OS to get adopted, you have to sell its unique features, not just to your users, but also to the developers who maintain the software and libraries you’ll need to port. It’s a chicken-and-egg problem.</p>
<p>I’m one of those people who read <a target="_blank" href="https://wiki.osdev.org/">wiki.osdev.org</a>, got amazed, and almost went crazy with assembly and QEMU. But I know that, with the advent of AI and other cool stuff, starting an OS from scratch can be a waste of time, and low-level engineering is <em>not</em> where the hype is. Even the <a target="_blank" href="https://wiki.osdev.org/Beginner_Mistakes">OSDev wiki</a> has warned about it, and its <a target="_blank" href="https://wiki.osdev.org/Abandoned_Projects">list of Abandoned Projects</a> is kind of huge.</p>
<p>I’m glad that I bumped into Redox OS, and I recommend this approach if you plan to learn low-level programming. There’s nothing wrong with trying from scratch as long as you intend it for learning, but if you want to start from an existing one, there are a lot of other OS niches out there that may suit you better; maybe take a look at <a target="_blank" href="https://github.com/flosse/rust-os-comparison">this</a> and <a target="_blank" href="https://github.com/jubalh/awesome-os">that</a>.</p>
]]></content:encoded></item><item><title><![CDATA[How to Change root partition from EXT4 to XFS without changing boot disk]]></title><description><![CDATA[I have a quite a problem where my new VPS provider didn’t allow me to change boot disk so I have to be clever. The problem is my VPS provider doesn’t give me disk format so I’m stuck with EXT4 but I want it to be XFS since it has better interoperabil...]]></description><link>https://blog.wellosoft.net/how-to-change-root-partition-from-ext4-to-xfs-without-changing-boot-disk</link><guid isPermaLink="true">https://blog.wellosoft.net/how-to-change-root-partition-from-ext4-to-xfs-without-changing-boot-disk</guid><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Wildan M]]></dc:creator><pubDate>Wed, 21 May 2025 13:38:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1747835185695/b417f04e-b553-4026-8a40-5bec0e2c22d8.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I had quite a problem: my new VPS provider didn’t allow me to change the boot disk, so I had to be clever. The provider also doesn’t let me choose the disk format, so I was stuck with EXT4, but I wanted XFS since it works better with quotas, so I had to do something crazy to change it.</p>
<p>To do this we still need a secondary disk. The disk I chose is just a plain XFS disk, no boot entry or anything. Also, most VPS providers give you VNC access, which makes the task easier, since we won’t have to force the boot drive to default to another disk (trust me, it’s complicated).</p>
<p>A last word before I begin, as usual: read and follow along at your own risk, because <strong>any</strong> <strong>data loss or monetary damage is your own fault</strong>; I will not be responsible. If your data is important, back it up before we begin.</p>
<h2 id="heading-copy-the-root-partition">Copy the root partition</h2>
<p>First, I have a second disk, mounted on <code>/mnt</code>:</p>
<pre><code class="lang-bash">&gt; mount /dev/sdb1 /mnt
&gt; lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda      8:0    0   50G  0 disk
├─sda1   8:1    0  488M  0 part /boot
└─sda2   8:2    0 49.5G  0 part /
sdb      8:16   0   80G  0 disk
└─sdb1   8:17   0   80G  0 part /mnt
sr0     11:0    1 1024M  0 rom
</code></pre>
<p>I copied my entire root disk over there:</p>
<pre><code class="lang-bash"> rsync -avxHAX --progress / /mnt/
</code></pre>
<p>Note the UUIDs here:</p>
<pre><code class="lang-bash">&gt; blkid
/dev/sda2: UUID=<span class="hljs-string">"e0b6a590-a595-4f11-b771-82d13a3e07e9"</span> TYPE=<span class="hljs-string">"ext4"</span> PARTUUID=<span class="hljs-string">"b0036ea3-02"</span>
/dev/sda1: UUID=<span class="hljs-string">"99057368-5f18-4aa3-a65e-91fdef205027"</span> TYPE=<span class="hljs-string">"ext4"</span> PARTUUID=<span class="hljs-string">"b0036ea3-01"</span>
/dev/sdb1: UUID=<span class="hljs-string">"f9c78304-defa-468e-9caf-7a2567b3ad09"</span> TYPE=<span class="hljs-string">"xfs"</span> PARTUUID=<span class="hljs-string">"db8255b7-5140-4b9e-94d7-afc7481f57a8"</span>
</code></pre>
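<p>When scripting this, the UUID can be pulled out programmatically instead of copy-pasted (a sketch over a captured blkid line; on a real system <code>blkid -s UUID -o value /dev/sdb1</code> prints it directly):</p>
<pre><code class="lang-bash"># a captured blkid line for the new XFS partition (from the output above)
line='/dev/sdb1: UUID="f9c78304-defa-468e-9caf-7a2567b3ad09" TYPE="xfs" PARTUUID="db8255b7-5140-4b9e-94d7-afc7481f57a8"'
# grab the first UUID= field (the leading space keeps PARTUUID from matching)
uuid=$(printf '%s\n' "$line" | grep -o ' UUID="[^"]*"' | tr -d ' "' | cut -d= -f2)
echo "$uuid"  # prints f9c78304-defa-468e-9caf-7a2567b3ad09
</code></pre>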
<p>Then I need to change <code>/mnt/etc/fstab</code>, also updating the filesystem type:</p>
<pre><code class="lang-diff"><span class="hljs-deletion">-UUID=e0b6a590-a595-4f11-b771-82d13a3e07e9       /       ext4    relatime,grpquota,quota,usrquota,rw     0       1</span>
<span class="hljs-addition">+UUID=f9c78304-defa-468e-9caf-7a2567b3ad09       /       xfs     relatime,grpquota,quota,usrquota,rw     0       1</span>
</code></pre>
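<p>The same edit can be done with <code>sed</code> (demonstrated on a scratch copy; on the real system the file would be <code>/mnt/etc/fstab</code>, and a real fstab may separate fields with tabs, so adjust the pattern and double-check before overwriting):</p>
<pre><code class="lang-bash">old_uuid="e0b6a590-a595-4f11-b771-82d13a3e07e9"
new_uuid="f9c78304-defa-468e-9caf-7a2567b3ad09"
# build a scratch copy of the root entry, then swap the UUID and filesystem type
printf 'UUID=%s / ext4 relatime,grpquota,quota,usrquota,rw 0 1\n' "$old_uuid" &gt; /tmp/fstab.demo
sed -i -e "s/$old_uuid/$new_uuid/" -e 's/ ext4 / xfs /' /tmp/fstab.demo
cat /tmp/fstab.demo
</code></pre>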
<h2 id="heading-configure-boot-partition">Configure boot partition</h2>
<p>I need to check whether I’m booting under UEFI or BIOS:</p>
<pre><code class="lang-bash">&gt; [ -d /sys/firmware/efi ] &amp;&amp; <span class="hljs-built_in">echo</span> <span class="hljs-string">"UEFI Boot Detected"</span> || <span class="hljs-built_in">echo</span> <span class="hljs-string">"Legacy BIOS Boot Detected"</span>
Legacy BIOS Boot Detected
</code></pre>
<p>OK, I’m using legacy BIOS, so I’ll use the method below. Please be aware that if your VPS uses UEFI, you should <em>stop reading</em> and ask ChatGPT for the UEFI equivalents of the <code>grub2-mkconfig</code> commands.</p>
<p>The next thing I need to do is regenerate the GRUB config so it can detect boot entries on the other disk, and forcefully add the <code>xfs</code> and <code>ext4</code> drivers into the initramfs using dracut (because the current initramfs only loads ext4).</p>
<pre><code class="lang-bash">grub2-mkconfig -o /boot/grub2/grub.cfg
dracut --force --add-drivers <span class="hljs-string">"xfs ext4"</span> --regenerate-all
</code></pre>
<p>You can check the new initramfs contents with <code>lsinitrd /boot/initramfs-$(uname -r).img</code>.</p>
<p>Then open VNC, reboot, and switch to the other disk. If there are multiple options, try them one by one until you find the one that boots successfully (because the other options may use an initramfs from an old kernel that wasn’t regenerated by our last dracut run).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747827649772/2cea4071-a2e0-4eb4-970d-288231b17592.png" alt class="image--center mx-auto" /></p>
<p>Now connect over SSH and confirm that we have switched the root partition.</p>
<pre><code class="lang-bash">&gt; lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda      8:0    0   50G  0 disk
├─sda1   8:1    0  488M  0 part /boot
└─sda2   8:2    0 49.5G  0 part
sdb      8:16   0   80G  0 disk
└─sdb1   8:17   0   80G  0 part /
sr0     11:0    1 1024M  0 rom
</code></pre>
<p>Now we wipe the old root partition, mount it, and clone the data back:</p>
<pre><code class="lang-bash">mkfs.xfs /dev/sda2 -f
mount /dev/sda2 /mnt
rsync -avxHAX --progress / /mnt/
</code></pre>
<p>Chroot into the disk and get the UUIDs with <code>blkid</code>:</p>
<pre><code class="lang-bash"><span class="hljs-keyword">for</span> dir <span class="hljs-keyword">in</span> boot proc sys dev run; <span class="hljs-keyword">do</span> mount --<span class="hljs-built_in">bind</span> /<span class="hljs-variable">$dir</span> /mnt/<span class="hljs-variable">$dir</span>; <span class="hljs-keyword">done</span>
chroot /mnt
blkid
</code></pre>
<p>Continue by editing <code>/etc/fstab</code> (still inside the chroot!) with the new UUID taken from the <code>blkid</code> output:</p>
<pre><code class="lang-diff"><span class="hljs-deletion">-UUID=f9c78304-defa-468e-9caf-7a2567b3ad09       /       xfs     relatime,grpquota,quota,usrquota,rw     0       1</span>
<span class="hljs-addition">+UUID=054536e0-bf74-4af6-ac74-51d61dc409d1       /       xfs     relatime,grpquota,quota,usrquota,rw     0       1</span>
</code></pre>
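<p>If you prefer to script that edit instead of doing it by hand, here’s a minimal sketch. The helper name is mine, and the UUIDs are the ones from this walkthrough; substitute your own from <code>blkid</code>:</p>
<pre><code class="lang-bash"># swap_root_uuid FILE OLD_UUID NEW_UUID - replace one UUID in an fstab-style file
swap_root_uuid() {
  sed -i "s/UUID=$2/UUID=$3/" "$1"
}

# inside the chroot, this walkthrough's edit would be:
#   swap_root_uuid /etc/fstab f9c78304-defa-468e-9caf-7a2567b3ad09 054536e0-bf74-4af6-ac74-51d61dc409d1
</code></pre>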
<p>Now, still in the chroot, update GRUB and (if you’re on a RHEL-family distro where <code>/boot/loader/entries</code> exists) update the loader entries:</p>
<pre><code class="lang-bash">grub2-mkconfig -o /boot/grub2/grub.cfg
grubby --update-kernel=ALL --remove-args=<span class="hljs-string">"root"</span>
grubby --update-kernel=ALL --args=<span class="hljs-string">"root=UUID=054536e0-bf74-4af6-ac74-51d61dc409d1"</span>
<span class="hljs-comment"># while I'm here I need to turn on quotas too</span>
grubby --update-kernel=ALL --args=<span class="hljs-string">"rootflags=usrquota,grpquota"</span>
</code></pre>
<p>Exit the chroot and reboot, and now you should find…</p>
<pre><code class="lang-bash">&gt; findmnt /
TARGET SOURCE    FSTYPE OPTIONS
/      /dev/sda2 xfs    rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,usrquota,grpquota
</code></pre>
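<p>Since we turned quotas on via <code>rootflags</code>, it’s also worth confirming that XFS actually has them enabled. A quick check (assuming <code>xfs_quota</code> from xfsprogs is installed):</p>
<pre><code class="lang-bash"># expert mode, print quota accounting/enforcement state for the root filesystem
xfs_quota -x -c 'state' /
</code></pre>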
]]></content:encoded></item><item><title><![CDATA[How To Upgrade PostgreSQL installed from YUM]]></title><description><![CDATA[I have a bit different case with other tutorials as I use Postgres YUM repo to get extra extensions installed. My case here is to upgrade from version 16 to 17.
First, the usual installation:
PG=17
dnf -y install postgresql$PG-{server,contrib}
# Inst...]]></description><link>https://blog.wellosoft.net/how-to-upgrade-postgresql-installed-from-yum</link><guid isPermaLink="true">https://blog.wellosoft.net/how-to-upgrade-postgresql-installed-from-yum</guid><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Wildan M]]></dc:creator><pubDate>Mon, 07 Apr 2025 00:37:24 GMT</pubDate><content:encoded><![CDATA[<p>My case is a bit different from other tutorials, as I use the <a target="_blank" href="https://yum.postgresql.org/">Postgres YUM repo</a> to get extra extensions installed. My case here is to upgrade from version 16 to 17.</p>
<p>First, the usual installation:</p>
<pre><code class="lang-bash">PG=17
dnf -y install postgresql<span class="hljs-variable">$PG</span>-{server,contrib}
<span class="hljs-comment"># Install extra extensions</span>
dnf -y install {postgis35,pgrouting,pgvector,pg_uuidv7,timescaledb}_<span class="hljs-variable">$PG</span> postgresql<span class="hljs-variable">$PG</span>-devel
<span class="hljs-keyword">for</span> ext <span class="hljs-keyword">in</span> <span class="hljs-string">"postgis"</span> <span class="hljs-string">"postgis_raster"</span> <span class="hljs-string">"postgis_sfcgal"</span> <span class="hljs-string">"postgis_tiger_geocoders"</span> <span class="hljs-string">"postgis_topology"</span> <span class="hljs-string">"earthdistance"</span> <span class="hljs-string">"address_standardizer"</span> <span class="hljs-string">"address_standardizer_data_us"</span> <span class="hljs-string">"pgrouting"</span> <span class="hljs-string">"pg_uuidv7"</span> <span class="hljs-string">"vector"</span>; <span class="hljs-keyword">do</span>
  <span class="hljs-comment"># this lets non admin enable the extensions</span>
  <span class="hljs-built_in">echo</span> <span class="hljs-string">"trusted = true"</span> &gt;&gt; <span class="hljs-string">"/usr/pgsql-<span class="hljs-variable">$PG</span>/share/extension/<span class="hljs-variable">$ext</span>.control"</span>
<span class="hljs-keyword">done</span>
</code></pre>
<p>While this repo doesn’t have <code>pg_lsclusters</code>, it does ship a bash script to aid in upgrading. We’re going to use <code>postgresql-17-setup</code>. This script handles initdb and upgrading from one previous major version (in this case, 16). Note that if you want to jump two major versions (say, from 15 to 17), you’d better upgrade to version 16 first.</p>
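<p>For that two-hop case, the idea would be to repeat the same dance once per version, using each version’s own setup script. A sketch only — I haven’t run this exact sequence, and it assumes the PGDG packages for both intermediate and target versions are installed:</p>
<pre><code class="lang-bash"># hop 1: 15 to 16
/usr/pgsql-16/bin/postgresql-16-setup initdb
/usr/pgsql-16/bin/postgresql-16-setup upgrade
# hop 2: 16 to 17
/usr/pgsql-17/bin/postgresql-17-setup initdb
/usr/pgsql-17/bin/postgresql-17-setup upgrade
</code></pre>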
<p>Now we need to init the new cluster and run the upgrade check:</p>
<pre><code class="lang-bash">$&gt; /usr/pgsql-17/bin/postgresql-17-setup initdb
Initializing database ... OK

$&gt; /usr/pgsql-17/bin/postgresql-17-setup check_upgrade
Performing upgrade check: OK

See /var/lib/pgsql/17/pgupgrade.log <span class="hljs-keyword">for</span> details.
</code></pre>
<p>Then perform the upgrade (I run this across my hosts with <code>pssh</code>):</p>
<pre><code class="lang-bash">pssh -t 0 -Iih ~/hosts &lt;&lt;<span class="hljs-string">'EOC'</span>
<span class="hljs-comment"># copy old config as backup</span>
/bin/cp -f /var/lib/pgsql/16/data/postgresql.conf /var/lib/pgsql/16/data/postgresql.old.conf
/bin/cp -f /var/lib/pgsql/16/data/pg_hba.conf /var/lib/pgsql/16/data/pg_hba.old.conf
<span class="hljs-comment"># disable daemon and perform upgrade</span>
systemctl <span class="hljs-built_in">disable</span> postgresql-16 --now
/usr/pgsql-17/bin/postgresql-17-setup upgrade
<span class="hljs-comment"># in my setup, I symlink postgresql service, let's replace it</span>
ln -fs /usr/lib/systemd/system/postgresql-17.service /usr/lib/systemd/system/postgresql.service
systemctl daemon-reload
<span class="hljs-comment"># restore config</span>
/bin/cp -f /var/lib/pgsql/16/data/postgresql.old.conf /var/lib/pgsql/17/data/postgresql.conf
/bin/cp -f /var/lib/pgsql/16/data/pg_hba.old.conf /var/lib/pgsql/17/data/pg_hba.conf

systemctl <span class="hljs-built_in">enable</span> postgresql-17 --now
EOC
</code></pre>
<p>After the upgrade, you should check that the new server is running correctly.</p>
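<p>A few checks I’d run on the upgraded server — a sketch only; the service name and binary path assume the same PGDG layout as above, and <code>pg_upgrade</code> itself suggests regenerating optimizer statistics on the new cluster:</p>
<pre><code class="lang-bash"># is the new service up?
systemctl status postgresql-17 --no-pager
# does it report the expected version?
sudo -u postgres psql -c 'SELECT version();'
# refresh planner statistics on the new cluster
sudo -u postgres /usr/pgsql-17/bin/vacuumdb --all --analyze-in-stages
</code></pre>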
]]></content:encoded></item><item><title><![CDATA[Streaming CSV inside a ZIP File with Go Channel]]></title><description><![CDATA[So I have a pretty tough use case where I need to load 2GB of CSV files into a database and somehow manages it to be done in a container where it limit RAM usage into less than 200 MB.
First things first, how do you even load the CSV fi...]]></description><link>https://blog.wellosoft.net/streaming-csv-inside-a-zip-file-with-go-channel</link><guid isPermaLink="true">https://blog.wellosoft.net/streaming-csv-inside-a-zip-file-with-go-channel</guid><category><![CDATA[Go Language]]></category><category><![CDATA[multithreading]]></category><dc:creator><![CDATA[Wildan M]]></dc:creator><pubDate>Sun, 29 Dec 2024 09:10:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1735463490875/748613bb-e26b-43e2-aa1d-ae9e90c6cfaa.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>So I have a pretty tough use case: I need to load 2GB of CSV files into a database, and somehow it has to be done in a container that limits RAM usage to less than 200 MB.</p>
<p>First things first, how do you even load the CSV file? A 2GB CSV file won’t fit into 200 MB of RAM. I wish there were an easy “PHP” way where handling file uploads is the language’s job. But hell no: since we’re running our backend in a container, we use GCP for file uploads. Normally, when our backend needs that file via the GCP API, we download it entirely and put it in memory.</p>
<pre><code class="lang-go"><span class="hljs-keyword">import</span> (
    <span class="hljs-string">"context"</span>
    <span class="hljs-string">"io"</span>

    <span class="hljs-string">"cloud.google.com/go/storage"</span>
)

<span class="hljs-keyword">type</span> GcpStorage <span class="hljs-keyword">struct</span> {
    client *storage.Client
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(s *GcpStorage)</span> <span class="hljs-title">ReadObject</span><span class="hljs-params">(bucket <span class="hljs-keyword">string</span>, object <span class="hljs-keyword">string</span>)</span> <span class="hljs-params">([]<span class="hljs-keyword">byte</span>, error)</span></span> {
    rc, err := s.client.Bucket(bucket).Object(object).NewReader(context.Background())
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, err
    }
    <span class="hljs-keyword">defer</span> rc.Close()
    <span class="hljs-comment">// we read all of them (note: will crash with 2GB of CSV!)</span>
    <span class="hljs-keyword">return</span> io.ReadAll(rc)
}
</code></pre>
<p>We could simply replace the <code>io.ReadAll</code> with a loop that reads the CSV line by line, to avoid reading the whole file and crashing the backend. But do you know what else I hate? The fact that users have to <strong>literally upload 2GB worth of text file</strong>. Why not compress it beforehand? It significantly saves network bandwidth and S3 storage. A real test shows that when compressed, the file shrunk 20x!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735454682696/9d4cfd50-8dad-49fc-812b-cc0a90412d73.png" alt class="image--center mx-auto" /></p>
<p>When I started to code it, I thought stream process should look like this:</p>
<ol>
<li><p>Open S3 stream from bucket path</p>
</li>
<li><p>Open zip stream from S3 stream</p>
</li>
<li><p>Open CSV stream from a CSV file inside a Zip stream</p>
</li>
</ol>
<p>I came into a problem at step 2:</p>
<ol>
<li><p>Zip doesn’t support streaming bytes. It must be able to seek at specific byte position. Source: <a target="_blank" href="https://stackoverflow.com/a/16947430">https://stackoverflow.com/a/16947430</a></p>
</li>
<li><p>The GCS API (that we’re using) may support seeking (i.e. random byte position reads) via <a target="_blank" href="https://cloud.google.com/go/docs/reference/cloud.google.com/go/storage/1.14.0#cloud_google_com_go_storage_ObjectHandle_NewRangeReader">NewRangeReader</a>, an S3-style read capable of starting at a specific byte offset. But Go’s zip package can’t use this to its advantage, so you’re stuck with the classic <code>NewReader</code> — you just can’t outsmart the Go interface.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735456417748/b063d37d-fd08-49d4-a4cd-f95676b4d0bf.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<p>But in the grand scheme of things, trying to seek bytes (i.e. random-position reads) is a terrible idea for over-the-network reading (too many round trips!), so it’s better to download the whole zip file into memory. The new streaming approach is this:</p>
<ol>
<li><p>Open S3 stream from bucket path</p>
</li>
<li><p>Read all file bytes from S3 stream</p>
</li>
<li><p>Open zip stream from file bytes</p>
</li>
<li><p>Open CSV stream from a CSV file inside a Zip stream</p>
</li>
</ol>
<p>At step 2 we’re allocating ~100 MB worth of zipped file, which is fine because it’s still below our RAM limit. Here’s what the code looks like:</p>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(s *GcpStorage)</span> <span class="hljs-title">Read</span><span class="hljs-params">(bucket <span class="hljs-keyword">string</span>, object <span class="hljs-keyword">string</span>)</span> <span class="hljs-params">(io.ReadCloser, error)</span></span> {
    rc, err := s.client.Bucket(bucket).Object(object).NewReader(context.Background())
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, err
    }
    <span class="hljs-keyword">return</span> io.ReadCloser(rc), <span class="hljs-literal">nil</span>
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(s *GcpStorage)</span> <span class="hljs-title">ReadFileCsvInZipStream</span><span class="hljs-params">(reader io.ReadCloser)</span> <span class="hljs-params">(*zip.File, error)</span></span> {
    <span class="hljs-keyword">defer</span> reader.Close()
    <span class="hljs-comment">// this reads the whole zip file! (~100 MB allocation)</span>
    fileBytes, err := io.ReadAll(reader)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, err
    }

    <span class="hljs-comment">// these readers don't need Close() since the data lives on memory</span>
    byteReader := bytes.NewReader(fileBytes)
    zipReader, err := zip.NewReader(byteReader, <span class="hljs-keyword">int64</span>(<span class="hljs-built_in">len</span>(fileBytes)))
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, err
    }
    <span class="hljs-keyword">for</span> _, f := <span class="hljs-keyword">range</span> zipReader.File {
        <span class="hljs-keyword">if</span> filepath.Ext(f.Name) == <span class="hljs-string">".csv"</span> {
            <span class="hljs-keyword">return</span> f, <span class="hljs-literal">nil</span>
        }
    }

    <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, errors.New(<span class="hljs-string">"CSV File not found"</span>)
}
</code></pre>
<p>The <code>*zip.File</code> contains <code>func (f *zip.File) Open() (io.ReadCloser, error)</code>, which returns a file stream that we can feed into a function that processes a streamed CSV file.</p>
<p>For the CSV streaming process, we want to “batch” the work: every time we’ve read 100 lines from the CSV, we execute a single INSERT statement containing those 100 rows of data into our DB.</p>
<p>It may be tempting to do the batching via a single loop over <code>strings.Split(s, "\n")</code> and do the insertion whenever <code>i % 100 == 0</code>. However, we won’t reinvent the wheel by writing a CSV parser ourselves. We’ll use <a target="_blank" href="https://github.com/gocarina/gocsv">https://github.com/gocarina/gocsv</a>, which has a specific function to parse a streamed CSV file:</p>
<pre><code class="lang-go"><span class="hljs-comment">// UnmarshalToChan parses the CSV from the reader and send each value in the chan c.</span>
<span class="hljs-comment">// The channel must have a concrete type.</span>
<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">UnmarshalToChan</span><span class="hljs-params">(in io.Reader, c <span class="hljs-keyword">interface</span>{})</span> <span class="hljs-title">error</span></span> {
  <span class="hljs-comment">// ....</span>
}
</code></pre>
<p>Whoa, wait! <strong>a channel?!</strong> Why would someone use that complicated mess???</p>
<p>You’re not alone. I’ve been coding Go for three years and I’ve never used something that complicated professionally. But I kept going, and later it made so much sense to use channels, because <strong>unlike JavaScript</strong>, you want to <strong>pass channels</strong> rather than functions.</p>
<p>Let’s say I want to tell our CSV parser to read into this struct:</p>
<pre><code class="lang-go"><span class="hljs-comment">// A simulation data. Name, Metadata, Properties are from CSV. Anything else is automated.</span>
<span class="hljs-keyword">type</span> SimulationData <span class="hljs-keyword">struct</span> {
    ID         uuid.UUID <span class="hljs-string">`gorm:"type:uuid;column:id;primaryKey" json:"id"`</span>
    Name       <span class="hljs-keyword">string</span>    <span class="hljs-string">`gorm:"type:text;column:name" json:"name" csv:"Name"`</span>
    Metadata   <span class="hljs-keyword">string</span>    <span class="hljs-string">`gorm:"type:text;column:metadata" json:"metadata" csv:"Metadata"`</span>
    Properties <span class="hljs-keyword">string</span>    <span class="hljs-string">`gorm:"type:text;column:properties" json:"properties" csv:"Properties"`</span>
    NthIndex   <span class="hljs-keyword">int</span>       <span class="hljs-string">`gorm:"type:int;column:nth_index" json:"nth_index"`</span>
}
</code></pre>
<p>We read the CSV this way, where <code>reader</code> comes from the <code>Open()</code> of a <code>*zip.File</code>, and <code>payloadChan</code> is the channel through which we will process insertions into the database.</p>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">ReadCsvStreamed</span><span class="hljs-params">(reader io.ReadCloser, payloadChan <span class="hljs-keyword">chan</span> []SimulationData)</span> <span class="hljs-title">error</span></span> {
    <span class="hljs-keyword">defer</span> reader.Close()
    <span class="hljs-keyword">var</span> csvRowChan = <span class="hljs-built_in">make</span>(<span class="hljs-keyword">chan</span> SimulationData)

    <span class="hljs-comment">// this function runs on separate thread</span>
    <span class="hljs-keyword">go</span> <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span></span> {
        <span class="hljs-comment">// we close it here since we send the channel data from here</span>
        <span class="hljs-keyword">defer</span> <span class="hljs-built_in">close</span>(payloadChan)
        rows := <span class="hljs-built_in">make</span>([]SimulationData, <span class="hljs-number">0</span>)
        index := <span class="hljs-number">0</span>
        <span class="hljs-keyword">for</span> row := <span class="hljs-keyword">range</span> csvRowChan {
            input := row
            input.ID = uuid.New()
            input.NthIndex = index
            index += <span class="hljs-number">1</span>
            rows = <span class="hljs-built_in">append</span>(rows, input)
            <span class="hljs-comment">// when current batch is over 100</span>
            <span class="hljs-keyword">if</span> <span class="hljs-built_in">len</span>(rows) &gt;= <span class="hljs-number">100</span> {
                payloadChan &lt;- rows
                rows = <span class="hljs-built_in">make</span>([]SimulationData, <span class="hljs-number">0</span>)
            }
        }
        <span class="hljs-comment">// send remaining data if any</span>
        <span class="hljs-keyword">if</span> <span class="hljs-built_in">len</span>(rows) &gt; <span class="hljs-number">0</span> {
            payloadChan &lt;- rows
        }
    }()

    <span class="hljs-comment">// csvRowChan is closed inside this function</span>
    <span class="hljs-keyword">if</span> err := gocsv.UnmarshalToChan(reader, csvRowChan); err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> err
    }
    <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>
}
</code></pre>
<p>All the Lego bricks we need are defined. Now, let’s stack them together:</p>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">Handler</span><span class="hljs-params">(storage *GcpStorage, db *gorm.DB, wg *sync.WaitGroup, filename <span class="hljs-keyword">string</span>)</span> <span class="hljs-params">(err error)</span></span> {
    zipReader, err := storage.Read(<span class="hljs-string">"mybucket"</span>, filename)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span>
    }
    <span class="hljs-comment">// zipReader is closed here</span>
    zipHandle, err := storage.ReadFileCsvInZipStream(zipReader)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span>
    }
    fileReader, err := zipHandle.Open()
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span>
    }

    pchan := <span class="hljs-built_in">make</span>(<span class="hljs-keyword">chan</span> []SimulationData)
    wg.Add(<span class="hljs-number">1</span>)
    <span class="hljs-keyword">go</span> <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span></span> {
        <span class="hljs-keyword">for</span> p := <span class="hljs-keyword">range</span> pchan {
            <span class="hljs-keyword">if</span> err := db.Create(p).Error; err != <span class="hljs-literal">nil</span> {
                fmt.Printf(<span class="hljs-string">"%+v"</span>, err)
            }
        }
        wg.Done()
    }()

    <span class="hljs-comment">// pChan and fileReader is closed here</span>
    err = ReadCsvStreamed(fileReader, pchan)
    <span class="hljs-keyword">return</span>
}
</code></pre>
<p>If you look closely, there are three threads in this process:</p>
<ol>
<li><p>The main thread, which finishes when <code>Handler()</code> finishes reading the CSV</p>
</li>
<li><p>The thread inside <code>ReadCsvStreamed()</code>, which receives rows from the CSV and collects them into batches for another thread</p>
</li>
<li><p>The <code>go func()</code> thread inside the main thread, which inserts the batched rows into the database with gorm’s <code>db.Create</code>.</p>
</li>
</ol>
<p>You may wonder what the <code>wg sync.WaitGroup</code> is for. It’s there to make sure your app waits for all asynchronous tasks to complete before shutting down.</p>
<p>For completeness, here’s the simple main function:</p>
<pre><code class="lang-go">
<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    storage := GcpStorage{}
    db := gorm.DB{}
    wg := sync.WaitGroup{}
    <span class="hljs-comment">// init storage and db from envar</span>
    <span class="hljs-comment">// ....</span>

    <span class="hljs-comment">// normally you place the handler inside HTTP function</span>
    <span class="hljs-comment">// but for brevity we call this directly</span>
    <span class="hljs-keyword">if</span> err := Handler(&amp;storage, &amp;db, &amp;wg, <span class="hljs-string">"myfile.csv.zip"</span>); err != <span class="hljs-literal">nil</span> {
        <span class="hljs-built_in">panic</span>(err)
    }
    fmt.Println(<span class="hljs-string">"All CSV data is parsed"</span>)
    wg.Wait()
    fmt.Println(<span class="hljs-string">"All CSV data is saved"</span>)
}
</code></pre>
<p>Now you know how to handle goroutines and channels, what they’re used for, and why they’re needed! Imagine if we had to do this <em>in the JavaScript way</em> (i.e. passing functions instead of channels): it would be <strong>terrible</strong> (you’d likely deal with hardcoded type / <code>interface{}</code> conversions a lot) and our code wouldn’t gain anything from multithreading!</p>
<p>Full code: <a target="_blank" href="https://gist.github.com/willnode/75a96840ff33ada3f2aa3db3d28cde07">https://gist.github.com/willnode/75a96840ff33ada3f2aa3db3d28cde07</a></p>
]]></content:encoded></item><item><title><![CDATA[Svelte 5 Upgrade: ` Self-closing HTML tags for non-void elements are ambiguous`]]></title><description><![CDATA[There’s a very annoying warning that doesn’t get mentioned in Svelte 5 migration step:
[vite-plugin-svelte] src/user/host/templates/DnsEdit.svelte:169:4 Self-closing HTML tags for non-void elements are ambiguous — use `<i ...></i>` rather than `<i .....]]></description><link>https://blog.wellosoft.net/svelte-5-upgrade-self-closing-html-tags-for-non-void-elements-are-ambiguous</link><guid isPermaLink="true">https://blog.wellosoft.net/svelte-5-upgrade-self-closing-html-tags-for-non-void-elements-are-ambiguous</guid><dc:creator><![CDATA[Wildan M]]></dc:creator><pubDate>Sun, 29 Dec 2024 05:31:45 GMT</pubDate><content:encoded><![CDATA[<p>There’s a very annoying warning that doesn’t get mentioned in <a target="_blank" href="https://svelte.dev/docs/svelte/v5-migration-guide">Svelte 5 migration step</a>:</p>
<pre><code class="lang-plaintext">[vite-plugin-svelte] src/user/host/templates/DnsEdit.svelte:169:4 Self-closing HTML tags for non-void elements are ambiguous — use `&lt;i ...&gt;&lt;/i&gt;` rather than `&lt;i ... /&gt;`
https://svelte.dev/e/element_invalid_self_closing_tag
</code></pre>
<p>It creates a gazillion warnings, since I type <code>&lt;i class="" /&gt;</code> a lot for Font Awesome icons.</p>
<p>I followed the trail, and it seems the Svelte creator simply had a change of heart, judging from this issue he created: <a target="_blank" href="https://github.com/sveltejs/svelte/issues/11052">https://github.com/sveltejs/svelte/issues/11052</a></p>
<p>I was about to get mad since this means a lot of manual work until I came to the end of the thread:</p>
<blockquote>
<p><em>To update your components en masse, you can use the following command:</em></p>
<pre><code class="lang-plaintext">npx svelte-migrate self-closing-tags
</code></pre>
<p><em>This will prevent Svelte 5 warning you to replace</em> <code>&lt;div /&gt;</code> <em>with</em> <code>&lt;div&gt;&lt;/div&gt;</code> <em>etc.</em></p>
</blockquote>
<p>It does the job surprisingly well.</p>
<p>There’s also <code>npx svelte-migrate svelte-5</code> mentioned in <a target="_blank" href="https://www.npmjs.com/package/svelte-migrate">svelte-migrate</a> that does migration for runes. It’s another useful and time-saving tool and again, weirdly not mentioned in the migration guide.</p>
]]></content:encoded></item><item><title><![CDATA[How to Extend LVM Boot Partition on Oracle Cloud]]></title><description><![CDATA[So you’re running out of disk space and about to increase the volume boot size, great! Now you met with this dialog.

The documentation link mentions here: https://docs.oracle.com/en-us/iaas/Content/Block/Tasks/rescanningdisk.htm
How lovely, but I’m ...]]></description><link>https://blog.wellosoft.net/how-to-extend-lvm-boot-partition-on-oracle-cloud</link><guid isPermaLink="true">https://blog.wellosoft.net/how-to-extend-lvm-boot-partition-on-oracle-cloud</guid><category><![CDATA[Oracle]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Wildan M]]></dc:creator><pubDate>Wed, 09 Oct 2024 12:31:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1728477377105/b4786cd2-2029-4858-a453-a166b1fa5869.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>So you’re running out of disk space and about to increase the volume boot size, great! Now you met with this dialog.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728475710248/92c7a49b-b99a-48b2-bc15-6a78df8ce982.png" alt class="image--center mx-auto" /></p>
<p>The documentation link mentions here: <a target="_blank" href="https://docs.oracle.com/en-us/iaas/Content/Block/Tasks/rescanningdisk.htm">https://docs.oracle.com/en-us/iaas/Content/Block/Tasks/rescanningdisk.htm</a></p>
<p>How lovely, but I’m on Rocky Linux, not Oracle Linux. So these commands don’t work, and I had to scour the whole internet to make it work!</p>
<p>Let’s break it down:</p>
<h2 id="heading-rescan-the-disk">Rescan The Disk</h2>
<p>My partition layout looks like this:</p>
<pre><code class="lang-plaintext">[root@sgp ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda              8:0    0   100G  0 disk
├─sda1           8:1    0    99M  0 part /boot/efi
├─sda2           8:2    0  1000M  0 part /boot
├─sda3           8:3    0     4M  0 part
├─sda4           8:4    0     1M  0 part
└─sda5           8:5    0  98.9G  0 part
  └─rocky-root 253:0    0  98.9G  0 lvm  /
</code></pre>
<p>As you can see there are a lot of partitions, but it’s really just a single 100 GB boot disk whose root filesystem lives inside a logical volume (that’s the LVM part), which is what <code>/</code> mounts to.</p>
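<p>To see the LVM side of that picture, the standard lvm2 tools can list the physical volume, volume group, and logical volume (the names in the comments match this walkthrough’s output):</p>
<pre><code class="lang-bash">pvs   # physical volumes (here: /dev/sda5)
vgs   # volume groups (here: rocky)
lvs   # logical volumes (here: root)
</code></pre>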
<p>Before running anything else, test the disk. We use <code>dd</code> for that, telling it to read one block (<code>count=1</code>) from <code>/dev/sda</code> to nowhere (<code>/dev/null</code>). We got this:</p>
<pre><code class="lang-plaintext">[root@sgp ~]# dd iflag=direct if=/dev/sda of=/dev/null count=1
1+0 records in
1+0 records out
512 bytes copied, 0.00101196 s, 506 kB/s
</code></pre>
<p>OK, the disk seems correctly accessible; let’s continue with the actual disk rescan.</p>
<pre><code class="lang-plaintext">[root@sgp ~]# echo "1" | sudo tee /sys/class/block/sda/device/rescan
1
</code></pre>
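<p>If the rescan worked, the kernel should already report the new disk size (the partition and filesystem will still show the old size at this point):</p>
<pre><code class="lang-bash"># show just the disk and its partitions with their sizes
lsblk /dev/sda -o NAME,SIZE,TYPE
</code></pre>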
<h2 id="heading-extending-the-disk">Extending The Disk</h2>
<p>The disk size has been bumped; now we extend the partition.</p>
<p>If this were not the root <code>/</code> partition, I’d simply run <code>xfs_growfs /home</code> and call it a day. Unfortunately, it’s the root partition that needs to grow right now.</p>
<p>Fortunately, Oracle Cloud says to just use <code>oci-growfs</code>, so that’s what I’ll use. First, we install it.</p>
<pre><code class="lang-plaintext">[root@sgp ~]# yum install oci-utils -y
</code></pre>
<p>Now we extend it:</p>
<pre><code class="lang-plaintext">[root@sgp ~]# /usr/libexec/oci-growfs
Volume Group: rocky
Volume Path: /dev/rocky/root
Mountpoint Data
---------------
          mountpoint: /
              source: /dev/mapper/rocky-root
     filesystem type: xfs
         source size: 98.9G
                type: lvm
                size: 98.9G
    physical devices: ['/dev/sda5']
    physical volumes: ['/dev/sda']
    partition number: ['5']
   volume group name: rocky
   volume group path: /dev/rocky/root

Partition dry run expansion "/dev/sda5" succeeded.
CHANGE: partition=5 start=2265088 old: size=207450079 end=209715166 new: size=333279199 end=335544286

Expanding partition /dev/sda5: Confirm?   [y/N]
</code></pre>
<p>The log continues until</p>
<pre><code class="lang-plaintext">
Extending /dev/sda5 succeeded.
Device /dev/sda5 extended successfully.
Logical volume /dev/rocky/root extended successfully.
</code></pre>
<p>Hell yeah, we did it!</p>
<pre><code class="lang-plaintext">[root@sgp ~]# df -h
Filesystem              Size  Used Avail Use% Mounted on
devtmpfs                4.0M     0  4.0M   0% /dev
tmpfs                   5.7G  3.7M  5.7G   1% /dev/shm
tmpfs                   2.3G  240M  2.1G  11% /run
/dev/mapper/rocky-root  159G   98G   62G  62% /
/dev/sda2               936M  594M  343M  64% /boot
/dev/sda1                99M  7.3M   92M   8% /boot/efi
</code></pre>
]]></content:encoded></item><item><title><![CDATA[How to upload any website to the Internet, quickly]]></title><description><![CDATA[So you have created a website on your computer? Great! Now, we come to the following problem: how to make it accessible to the public?
There are many great options for web hosting (we call them PaaS), and they also offer a free account to get you sta...]]></description><link>https://blog.wellosoft.net/how-to-upload-any-website-to-the-internet-quickly</link><guid isPermaLink="true">https://blog.wellosoft.net/how-to-upload-any-website-to-the-internet-quickly</guid><category><![CDATA[hosting]]></category><category><![CDATA[Web Development]]></category><category><![CDATA[PaaS]]></category><dc:creator><![CDATA[Wildan M]]></dc:creator><pubDate>Sun, 09 Jul 2023 01:37:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1688864345611/36ff3fb2-99a7-497b-826a-77f2de80e225.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>So you have created a website on your computer? Great! Now, we come to the following problem: how to make it accessible to the public?</p>
<p>There are many great options for web hosting (we call them PaaS), and many also offer a free account to get you started without spending any money.</p>
<p>I created a PaaS where it should take less than ten clicks (and less than 10 minutes) to put any website on the Internet. Let's get started:</p>
<h2 id="heading-first-step-create-an-account">First Step: Create an Account</h2>
<p>Go to <a target="_blank" href="https://domcloud.co/">domcloud.co</a> and create an account using Google/GitHub Sign In. With 3rd party sign-in, you don't have to confirm your email to get in.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1688796071509/00f1478e-1506-4294-9d52-2389b7c8e6fd.png" alt class="image--center mx-auto" /></p>
<p>After that, a welcome page will be shown. Click <strong>Create a Website</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1688797291551/46331f2d-e9ae-4c9b-9e88-5f6f55370a66.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-uploading-a-website">Uploading a website</h2>
<p>This page contains two modes: <strong>Start from a template</strong> or <strong>Upload or clone from the Internet</strong>. We choose the latter.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1688797721389/0df3417a-2377-4639-872e-7687833c1732.png" alt class="image--center mx-auto" /></p>
<p>The next step is to drop your project folder into the uploader, then choose the kind of framework you're using in that project, <a target="_blank" href="https://www.youtube.com/shorts/0WFk-qh2Cc0">like in the animation below</a>. It will generate a script describing how the project will be set up.</p>
<p><img src="https://github.com/domcloud/domcloud/assets/20214420/1427ed96-0668-41ec-acca-50e5cb3c44a5" alt class="image--center mx-auto" /></p>
<p>Then fill in a unique name and the website region, along with a custom domain name (if you own one), and click <strong>Add a website</strong>. (Read more about <a target="_blank" href="https://domcloud.co/docs/features/dns">using a custom domain</a>)</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1688853801710/a524aadc-f117-4763-81d5-c5dce2beb73b.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-done">Done!</h2>
<p>The next screen shows the progress in near real-time. It should only take a few minutes.</p>
<p><img src="https://domcloud.co/assets/ss/new-progress-b.png" alt class="image--center mx-auto" /></p>
<p>After it is finished, your website should be accessible over the Internet!</p>
<h2 id="heading-continue-editing">Continue Editing</h2>
<p>Continue editing the website online via tools like an online file browser and Visual Studio Code remote development using SSH (my favorite). <a target="_blank" href="https://domcloud.co/docs/intro/getting-started#managing-website">Read more about it in the documentation</a>.</p>
<p>If your website setup runs into a problem, there is <a target="_blank" href="https://domcloud.co/blog/improving-ux-for-newbies#connection-check-api">a tool to check its connection</a>. It should be able to troubleshoot common HTTPS and DNS problems.</p>
<p>This tutorial doesn't cover database migration, but some framework scripts already handle that (check the script!). If, in the end, your website has a database problem, you might need to initialize the database yourself. <a target="_blank" href="https://domcloud.co/docs/features/database/">Check the documentation</a> for that.</p>
<p>That's it! I hope this brings new excitement into learning website development. Let me know what you think about it 🤓</p>
]]></content:encoded></item><item><title><![CDATA[How to Install Unity and Visual Studio on Mac without Admin Rights]]></title><description><![CDATA[I came across a situation where I was stuck with a work laptop that didn't allow me to install apps with system privileges... So I went with a workaround.
This assumes you still have sudo access needed to grant a Unity license and additional Visual S...]]></description><link>https://blog.wellosoft.net/how-to-install-unity-and-visual-studio-on-mac-without-admin-rights</link><guid isPermaLink="true">https://blog.wellosoft.net/how-to-install-unity-and-visual-studio-on-mac-without-admin-rights</guid><category><![CDATA[Unity3D]]></category><category><![CDATA[macOS]]></category><dc:creator><![CDATA[Wildan M]]></dc:creator><pubDate>Thu, 02 Mar 2023 12:40:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1677805419287/b5bf6f3b-d3db-4e49-b90f-409430becc01.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I came across a situation where I was stuck with a work laptop that didn't allow me to install apps with system privileges... So I went with a workaround.</p>
<p>This assumes you still have <code>sudo</code> access, which is needed to grant a Unity license and install additional Visual Studio packages.</p>
<h2 id="heading-installing-unity">Installing Unity</h2>
<p>Start with downloading <a target="_blank" href="https://unity.com/download#how-get-started">Unity Hub for Mac</a>. After opening the <code>.dmg</code> file, copy the "Unity Hub" App and paste it to <code>~/Applications</code> (Use <code>⌘⇧G</code> to access this folder). Don't drag it directly to Applications as it will ask you for a system password.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677756867682/4af3f849-b373-4d52-86b2-d6fe0e8e1e69.png" alt class="image--center mx-auto" /></p>
<p>Continue following the installation as usual, but skip the granting license part.</p>
<p>When installing Unity, change the installation target directory to <code>~/Applications/Unity</code> (create the <code>Unity</code> folder in <code>~/Applications</code> first).</p>
<p>Now, we'll activate the Unity license. Use manual activation via "Activate with license request", since the "automatic" way asks for a system password.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677757232786/3c1ec6c9-becc-4448-aa2e-770efff0dd01.png" alt class="image--center mx-auto" /></p>
<p>It will generate a <code>.alf</code> file; upload it to <a target="_blank" href="https://license.unity3d.com/manual">the Unity site</a> and download the <code>.ulf</code> (license) file. Then we activate it using Terminal.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># replace "2021.3.19f1" with your Unity version</span>
~/Applications/Unity/2021.3.19f1/Unity.app/Contents/MacOS/Unity -batchmode -manualLicenseFile ~/Downloads/Unity_v2017.x.ulf -logfile
</code></pre>
<p>It should be activated, and you can open Unity from now on.</p>
<h2 id="heading-installing-visual-studio">Installing Visual Studio</h2>
<p>Download Visual Studio for Mac. Follow the usual installation steps.</p>
<p>Without the system password, we'll get stuck at this point of the installation and have to cancel:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677758338826/dac0a709-eb51-408b-8a85-f174b2e17c09.png" alt class="image--center mx-auto" /></p>
<p>Fortunately, the download has already been cached. Open this folder in Finder using <code>⌘⇧G</code>:</p>
<pre><code class="lang-bash">~/Library/Caches/VisualStudioInstaller/downloads/
</code></pre>
<p>Find the <code>.dmg</code> file that contains Visual Studio, and copy the "Visual Studio" app to <code>~/Applications</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677758733574/e1d38407-e1a3-41da-8080-c184ab7309d5.png" alt class="image--center mx-auto" /></p>
<p>Next, open a project with Unity. We will integrate Unity with Visual Studio.</p>
<h2 id="heading-additional-setup-for-unity-visual-studio">Additional Setup for Unity + Visual Studio</h2>
<p>Configure your Unity to Open External Script Editor with Visual Studio for Mac: (browse and locate the editor in <code>~/Applications/Visual Studio.app</code>)</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677759039958/e7884d1d-d02d-467d-a868-a6f491e15374.png" alt class="image--center mx-auto" /></p>
<p>Next, open the C# project. It should open in Visual Studio.</p>
<p>On first open, you will be asked to install Mono, which is required for your project to compile:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677759411346/db65abb9-a104-4d20-8a3c-685247e2b220.png" alt class="image--center mx-auto" /></p>
<p>Clicking Restart will not work, as it will ask for the system password. We'll use Terminal to install it instead.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># go to temporary downloads. Change "17.0" with your version</span>
<span class="hljs-built_in">cd</span> ~/Library/Caches/VisualStudio/17.0/TempDownload/ 
<span class="hljs-comment"># install all .pkgs there</span>
sudo installer -pkg monoframework-mdk-6.12.0.188.macos10.xamarin.universal.pkg -target /
sudo installer -pkg microsoft-jdk-11.0.16.1-macos-aarch64.pkg -target /
sudo installer -pkg OpenJDK8U-jdk_x64_mac_hotspot_8u302b08.pkg  -target /
</code></pre>
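<p>If your <code>TempDownload</code> folder holds more packages than the three above, a small loop saves retyping. A dry-run sketch (it only prints the commands; the cache directory and package names below are made up for the demo):</p>

```shell
# Stand-in for ~/Library/Caches/VisualStudio/17.0/TempDownload;
# point CACHE at the real path to use this for real.
CACHE="$(mktemp -d)"
touch "$CACHE/monoframework-demo.pkg" "$CACHE/openjdk-demo.pkg"
# Dry run: print one installer command per cached .pkg.
# Drop the `echo` to actually run them.
for pkg in "$CACHE"/*.pkg; do
  echo sudo installer -pkg "$pkg" -target /
done
```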
<p>Things should work now. Enjoy!</p>
<p>References:</p>
<ul>
<li><p><a target="_blank" href="https://docs.unity3d.com/2023.1/Documentation/Manual/ManualActivationCmdMac.html">Submit a license request from a command line and browser</a></p>
</li>
<li><p><a target="_blank" href="https://stackoverflow.com/questions/61094646/visual-studio-installation-failed-in-mac-os-x">Visual Studio installation failed in MAC OS X</a></p>
</li>
<li><p><a target="_blank" href="https://apple.stackexchange.com/questions/72226/installing-pkg-with-terminal">Installing .pkg with Terminal</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Switching from Google Analytics to a cookie-less solution]]></title><description><![CDATA[Web-traffic Analytics is an important metric for any web service. For me, it can be useful to see whether a project is deemed worth continuing or indeed useful for people. 
The most obvious solution is to use Google Analytics. It's robust and free. F...]]></description><link>https://blog.wellosoft.net/switching-from-google-analytics-to-a-cookie-less-solution</link><guid isPermaLink="true">https://blog.wellosoft.net/switching-from-google-analytics-to-a-cookie-less-solution</guid><category><![CDATA[Open Source]]></category><category><![CDATA[analytics]]></category><dc:creator><![CDATA[Wildan M]]></dc:creator><pubDate>Sat, 16 Jul 2022 06:57:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1657954610001/1OCManKNj.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Web-traffic Analytics is an important metric for any web service. For me, it can be useful to see whether a project is deemed worth continuing or indeed useful for people. </p>
<p>The most obvious solution is to use Google Analytics. It's robust and free. For many years, people (and I) didn't have a problem with it. But recent developments in data protection and privacy enforcement like the GDPR made me think a lot: why would I have to put up that annoying cookie consent just to make Google Analytics work?</p>
<p>For a long time, I just pulled my websites from Google Analytics. But after a while, without it, I was clueless. I didn't know if my marketing strategy was making any difference. So I searched for a "cookie-less" solution so I wouldn't need to put up a cookie consent. It turns out there are a lot of services that offer a "cookie-less" solution, albeit not for free. And I found one that works best for me: <a target="_blank" href="https://plausible.io/">Plausible</a>.</p>
<blockquote>
<p>Heads up: Legal stuff is hard. IANAL, I'm just a simple person that does not even read Terms &amp; conditions when installing stuff. And... This is not paid writing, I'm just sharing something that works for me.</p>
</blockquote>
<p>I chose that platform primarily because it's <em>open source</em>. Well, they do have a cloud solution, but I chose to <a target="_blank" href="https://plausible.io/self-hosted-web-analytics">self-host</a> because I already have a cloud instance running for various other projects. Having these options is exactly what I need, and it's another reason why I love open source. But don't be mistaken, their cloud pricing is reasonable too. I may consider switching to their cloud in the future.</p>
<p>Be warned that a cookie-less solution may not be for everyone, especially for projects with a large team. For instance, you can't have advanced analytics like retention, heatmaps, or A/B testing without putting a cookie or tracker on your website. For small projects, though, it works great. <a target="_blank" href="https://plausible.io/data-policy">Their data policy</a> will tell you more about how they collect and store data.</p>
<p>That covers why I chose it. Now I'll give some screenshots for a quick idea of what you'll get using this platform.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1657953403699/g_qkiupq_.png" alt="image.png" /></p>
<p>The data can be viewed in ranges from "last 12 months" down to "realtime", where the graph is shown in minutes, and yes, it updates itself without manual refreshing. It's awesome that it can also tell me how many unique visitors I had daily.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1657953592411/MbdAiS0q7.png" alt="image.png" /></p>
<p>This is the part I like the most so far. It shows top sources, top pages visited, country maps, and device lists for whatever date range you're viewing. You can also choose different views from the top right of each panel. Seeing these metrics reminds me of what Google Analytics does with its basic tracking, but this is much better, as it doesn't rely on cookies.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1657954402663/G0lB9G_3c.png" alt="image.png" /></p>
<p>So that's it. I'm putting all my project analytics under this platform from now on. It was dead simple to do. I can't wait to experiment with marketing for my projects!</p>
]]></content:encoded></item><item><title><![CDATA[How to Solve Squash Failed on GitLab]]></title><description><![CDATA[So I'm trying to merge one of the major features with lots of changes and merge conflicts then suddenly came out with this error:
Squashing Failed: Squash the commits locally, resolve any conflicts, then push the branch. Try again.

It's happened to ...]]></description><link>https://blog.wellosoft.net/how-to-solve-squash-failed-on-gitlab</link><guid isPermaLink="true">https://blog.wellosoft.net/how-to-solve-squash-failed-on-gitlab</guid><category><![CDATA[GitLab]]></category><category><![CDATA[Git]]></category><dc:creator><![CDATA[Wildan M]]></dc:creator><pubDate>Tue, 05 Apr 2022 10:18:42 GMT</pubDate><content:encoded><![CDATA[<p>So I was trying to merge one of the major features, with lots of changes and merge conflicts, when this error suddenly came up:</p>
<pre><code>Squashing Failed: Squash the commits locally, resolve any conflicts, then push the branch. Try again.
</code></pre><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1649150015956/9danrs7DX.png" alt="Screen Shot 2022-04-05 at 16.13.16.png" /></p>
<p>This has happened to me several times. It took me some time before I gave up and moved on to duplicating my merge requests (and asking for re-approval from my workmates).</p>
<p>Not anymore. Here's how I solve it.</p>
<p>Assume your current work branch is <code>awesome-feat</code></p>
<ul>
<li>Move back to the base branch, and ensure it's up to date.</li>
</ul>
<pre><code><span class="hljs-attribute">git</span> checkout main
git pull
</code></pre><ul>
<li>Merge with squash from the work branch. This shouldn't commit anything, but your previous work will be staged on this branch.</li>
</ul>
<pre><code>git merge <span class="hljs-operator">-</span><span class="hljs-operator">-</span>squash awesome<span class="hljs-operator">-</span>feat
</code></pre><ul>
<li>Confirm the changes are in your files now (if you have conflicts, you may need to resolve them first). Then, delete your work branch locally.</li>
</ul>
<pre><code>git branch <span class="hljs-operator">-</span>D awesome<span class="hljs-operator">-</span>feat
</code></pre><ul>
<li>Create a new branch with the same name, and check it out.</li>
</ul>
<pre><code>git branch awesome<span class="hljs-operator">-</span>feat
git checkout awesome<span class="hljs-operator">-</span>feat
</code></pre><ul>
<li>Your current work should still be the same, uncommitted, on a new branch that's clean and up to date. Commit.</li>
</ul>
<pre><code>git add .
git commit <span class="hljs-operator">-</span>m <span class="hljs-string">"My new awesome feat"</span>
</code></pre><ul>
<li>Here's the tricky part: force push that to your existing MR branch.<br /> <strong>Don't confuse it with the main branch or any other</strong>.</li>
</ul>
<pre><code>git push <span class="hljs-operator">-</span><span class="hljs-operator">-</span>set<span class="hljs-operator">-</span>upstream origin awesome<span class="hljs-operator">-</span>feat <span class="hljs-operator">-</span><span class="hljs-operator">-</span>force
</code></pre><p>Your merge request should be clean without conflicts and ready to merge  🎉</p>
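<p>If you'd like to rehearse the whole flow safely before touching a real MR, here's a throwaway-repo sketch of the steps above (local only, so the final force push is left out; file names and messages are made up):</p>

```shell
set -e
cd "$(mktemp -d)"
git init -q -b main
git config user.email demo@example.com
git config user.name demo
echo base > file.txt && git add . && git commit -qm "base"
git checkout -qb awesome-feat                # messy work branch
echo one >> file.txt && git commit -qam "wip 1"
echo two >> file.txt && git commit -qam "wip 2"
git checkout -q main                         # the steps from the post:
git merge --squash awesome-feat              # stages changes, commits nothing
git branch -D awesome-feat
git checkout -qb awesome-feat                # fresh branch, same name
git commit -qm "My new awesome feat"
git log --oneline                            # base + one squashed commit
```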
]]></content:encoded></item><item><title><![CDATA[Stuck Trying to Migrate from CentOS Linux to Rocky Linux after EOL?]]></title><description><![CDATA[So CentOS Linux is already passed their End of Life and I forgot to migrate my system to Rocky Linux. What could be worse?
I run the migration script today and I encountered this pellicular error:
Error: Error downloading packages:  No URLs in mirror...]]></description><link>https://blog.wellosoft.net/stuck-trying-to-migrate-from-centos-linux-to-rocky-linux-after-eol</link><guid isPermaLink="true">https://blog.wellosoft.net/stuck-trying-to-migrate-from-centos-linux-to-rocky-linux-after-eol</guid><category><![CDATA[Linux]]></category><category><![CDATA[migration]]></category><category><![CDATA[centos]]></category><dc:creator><![CDATA[Wildan M]]></dc:creator><pubDate>Mon, 31 Jan 2022 13:10:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1643634542235/kKIseNLmy.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>So CentOS Linux <a target="_blank" href="https://blog.centos.org/2020/12/future-is-centos-stream/">has already passed its End of Life</a> and I forgot to migrate my system to <a target="_blank" href="https://rockylinux.org/">Rocky Linux</a>. What could be worse?</p>
<p>I ran <a target="_blank" href="https://github.com/rocky-linux/rocky-tools/tree/main/migrate2rocky">the migration script</a> today and encountered this peculiar error:</p>
<pre><code class="lang-txt">Error: Error downloading packages:  No URLs in mirrorlist
</code></pre>
<p>I thought it was just a random network error, but it also failed with <code>dnf update</code>. Things went dark from there.</p>
<p>After a few hours of trial and error debugging the network, I successfully fixed it by changing the CentOS YUM repo server to one of the mirror servers. What this means is that the canonical CentOS YUM mirror list server (<code>mirrorlist.centos.org</code>) is simply no longer usable, and we need to change it to one of the usable mirrors left in the world.</p>
<p>One of the mirrors that worked for me when I wrote this is <code>http://repo.uk.bigstepcloud.com/centos-vault/</code>. There are lots of other mirrors to choose from <a target="_blank" href="https://mirror-status.centos.org/">listed here</a>.</p>
<p>To change the CentOS YUM repo server used, you need to modify these files using <code>vim</code> or <code>nano</code> (your list may be longer or shorter; check which yum repos are enabled on your machine using <code>yum repolist --enabled</code>):</p>
<pre><code><span class="hljs-operator">/</span>etc<span class="hljs-operator">/</span>yum.repos.d/CentOS<span class="hljs-operator">-</span>Linux<span class="hljs-operator">-</span>AppStream.repo
<span class="hljs-operator">/</span>etc<span class="hljs-operator">/</span>yum.repos.d/CentOS<span class="hljs-operator">-</span>Linux<span class="hljs-operator">-</span>BaseOS.repo
<span class="hljs-operator">/</span>etc<span class="hljs-operator">/</span>yum.repos.d/CentOS<span class="hljs-operator">-</span>Linux<span class="hljs-operator">-</span>Extras.repo
</code></pre><p>You need to comment out the <code>mirrorlist</code> line and uncomment the <code>baseurl</code> line, changing its base URL to the mirror's. The changes look like this:</p>
<pre><code class="lang-patch">[baseos]
name=CentOS Linux $releasever - BaseOS
<span class="hljs-deletion">- mirrorlist=http://mirrorlist.centos.org/?release=$releasever&amp;arch=$basearch&amp;repo=BaseOS&amp;infra=$infra</span>
<span class="hljs-addition">+ #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&amp;arch=$basearch&amp;repo=BaseOS&amp;infra=$infra</span>
<span class="hljs-deletion">- #baseurl=http://mirrorlist.centos.org/$contentdir/$releasever/BaseOS/$basearch/os/</span>
<span class="hljs-addition">+ baseurl=http://repo.uk.bigstepcloud.com/centos-vault/$contentdir/$releasever/BaseOS/$basearch/os/</span>
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
</code></pre>
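<p>If you have several repo files to fix, the same edit can be scripted with <code>sed</code>. A sketch, demonstrated on a scratch copy of one repo file; on the real machine, run the <code>sed</code> command as root inside <code>/etc/yum.repos.d</code> (the <code>.bak</code> files are your backups):</p>

```shell
# Scratch copy of one repo file, matching the patch shown above;
# in production this file already exists in /etc/yum.repos.d.
cd "$(mktemp -d)"
cat > CentOS-Linux-BaseOS.repo <<'EOF'
[baseos]
name=CentOS Linux $releasever - BaseOS
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=BaseOS&infra=$infra
#baseurl=http://mirrorlist.centos.org/$contentdir/$releasever/BaseOS/$basearch/os/
gpgcheck=1
EOF
# Comment out mirrorlist=, uncomment baseurl= and point it at the mirror.
sed -i.bak \
  -e 's|^mirrorlist=|#mirrorlist=|' \
  -e 's|^#baseurl=http://mirrorlist.centos.org|baseurl=http://repo.uk.bigstepcloud.com/centos-vault|' \
  CentOS-Linux-*.repo
grep -E '^(#mirrorlist|baseurl)=' CentOS-Linux-BaseOS.repo
```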
<p>I ran the migration script again and everything worked well. Guess I need to share this small tip so I can save someone else's hours, haha. Hope it is useful!</p>
]]></content:encoded></item><item><title><![CDATA[Open Sourcing some of my Internal Projects]]></title><description><![CDATA[They're either dead or abandoned by my clients and I have no obligation to make it stay private. Hope you all can make something useful out of it.

jobfair-utm JobFair system translated for Indonesian content. Stack: Laravel.
siskampus Academic-wide ...]]></description><link>https://blog.wellosoft.net/open-sourcing-some-of-my-internal-projects</link><guid isPermaLink="true">https://blog.wellosoft.net/open-sourcing-some-of-my-internal-projects</guid><dc:creator><![CDATA[Wildan M]]></dc:creator><pubDate>Sun, 16 Jan 2022 04:17:00 GMT</pubDate><content:encoded><![CDATA[<p>They're either dead or abandoned by my clients, and I have no obligation to keep them private. I hope you all can make something useful out of them.</p>
<ul>
<li><a target="_blank" href="https://github.com/willnode/jobfair-utm">jobfair-utm</a> JobFair system translated for Indonesian content. Stack: Laravel.</li>
<li><a target="_blank" href="https://github.com/willnode/siskampus">siskampus</a> University-wide academic system. Unfortunately, it's abandoned and unfinished. Stack: CI</li>
<li><a target="_blank" href="https://github.com/willnode/eabsen">eabsen</a> Web-based attendance system. Stack: CI</li>
<li><a target="_blank" href="https://github.com/willnode/efaktur-ci">efaktur-ci</a> Electronic invoicing (e-Faktur) system. Stack: CI</li>
<li><a target="_blank" href="https://github.com/willnode/wedo">wedo</a> Food delivery system. Stack: CI</li>
<li><a target="_blank" href="https://github.com/willnode/ekasir-ci">ekasir-ci</a> <a target="_blank" href="https://github.com/willnode/ekasir-front">ekasir-front</a> Electronic cashier system. Stack: CI and CRA</li>
<li><a target="_blank" href="https://github.com/willnode/daftarkp">daftarkp</a> KP (university interns) signing up system. Stack: CI</li>
</ul>
<p>Please note that all names, logos, and icons remain copyrighted by my clients.</p>
<p>More will be updated here. Anyway, I have a lot of side projects. You can see them all on my <a target="_blank" href="https://willnode.github.io/">GitHub page</a>.</p>
]]></content:encoded></item><item><title><![CDATA[The most reliable way to avoid relative imports in Node.js]]></title><description><![CDATA[Writing relative imports in Node.js is something I tend to avoid especially when it's growing larger in functionality. However, For something this basic yet it's so hard to get right. There are just many ways of doing that on the internet.
There are ...]]></description><link>https://blog.wellosoft.net/the-most-reliable-way-to-avoid-relative-imports-in-nodejs</link><guid isPermaLink="true">https://blog.wellosoft.net/the-most-reliable-way-to-avoid-relative-imports-in-nodejs</guid><category><![CDATA[Node.js]]></category><category><![CDATA[Programming Tips]]></category><dc:creator><![CDATA[Wildan M]]></dc:creator><pubDate>Fri, 15 Oct 2021 08:06:44 GMT</pubDate><content:encoded><![CDATA[<p>Writing relative imports in Node.js is something I tend to avoid, especially as a project grows in functionality. For something this basic, it's surprisingly hard to get right; there are just so many ways of doing it on the internet.</p>
<p>There are many ways to avoid relative imports in Node.js. Here are some of them:</p>
<ol>
<li>Add <code>NODE_PATH=./</code> env ( <a target="_blank" href="https://nodejs.org/api/modules.html#modules_loading_from_the_global_folders">reference</a> )</li>
<li>Set <code>"baseUrl"</code> in <code>(js|ts)config.json</code> ( <a target="_blank" href="https://dev.to/ruppysuppy/how-pros-get-rid-of-relative-imports-in-js-ts-2i3f">reference</a> )</li>
<li>Use <code>require.main.require</code> ( <a target="_blank" href="https://stackoverflow.com/a/26163910/3908409">reference</a> )</li>
<li>Directly write into <code>node_modules</code> ( <a target="_blank" href="https://github.com/browserify/browserify-handbook#avoiding-">reference</a> )</li>
<li>Use NPM/Yarn workspaces  ( <a target="_blank" href="https://classic.yarnpkg.com/lang/en/docs/workspaces/">reference</a> )</li>
</ol>
<p>There are many downsides to each approach.</p>
<ol>
<li>Adding an environment variable requires prepending <code>cross-env NODE_PATH=./</code> to all <code>package.json</code> scripts and every time you run a script yourself. This behavior was also somewhat unreliable during my testing, and VSCode IntelliSense won't understand what you're trying to import.</li>
<li>The <code>baseUrl</code> option from <code>(js|ts)config.json</code> works out of the box, but only for VSCode IntelliSense. Node.js won't understand it, so I'd need to set up a Babel compiler; it's <a target="_blank" href="https://medium.com/weekly-webtips/say-good-bye-relative-imports-in-nodejs-projects-65513bcdae6c">explained here anyway</a>, but to me this is way too complicated.</li>
<li>Using <code>require.main.require</code> seems like a hack to me; it forces me to use it in all scripts rather than the usual <code>require</code>, which of course is something I don't like.</li>
<li>Directly writing to <code>node_modules</code> goes against its purpose; besides, would you really be willing to move your scripts into <code>node_modules</code>? I wouldn't. It would become a nightmare to maintain.</li>
<li>Using NPM/Yarn workspaces seems promising at first glance, but it forces me to think the way it was designed: for "monorepos". A monorepo is good if you have multiple projects that share code, but really, it's just too much when I work on one big Node app. Note this was a Yarn-only feature; NPM added support too, but my last experience using it was <a target="_blank" href="https://github.com/npm/cli/issues/3637#issuecomment-898975594">buggy</a>.</li>
</ol>
<p>I have found a less popular but way more reliable option: <code>symlink-dir</code>. Let me summarize its explanation on <a target="_blank" href="https://www.npmjs.com/package/symlink-dir">NPM</a>:</p>
<blockquote>
<p>Let's suppose you'd like to self-require your package. You can link it to its own node_modules:
<code>symlink-dir . node_modules/my-package</code></p>
</blockquote>
<p>What does it mean to "link"? It's basically creating a directory shortcut. You can <a target="_blank" href="https://www.freecodecamp.org/news/symlink-tutorial-in-linux-how-to-create-and-remove-a-symbolic-link/">read more about it here</a>. NPM/Yarn workspaces internally work this way too.</p>
<p>So to use <code>symlink-dir</code>, I just need to add these values in <code>package.json</code>:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"scripts"</span>: {
    <span class="hljs-attr">"postinstall"</span>: <span class="hljs-string">"symlink-dir src node_modules/src"</span>
  },
  <span class="hljs-attr">"dependencies"</span>: {
    <span class="hljs-attr">"symlink-dir"</span>: <span class="hljs-string">"latest"</span>
  }
}
</code></pre>
<p>This creates a symlink from <code>src</code> folder to <code>node_modules</code> in my project. After <code>npm i</code> I can use <code>require('src/module.js')</code> instead of  <code>require('../../../src/module.js')</code>. Works with ESM imports too!</p>
<p>You can also add more symlinks by just appending to the <code>postinstall</code> script, like <code>"symlink-dir src node_modules/src &amp;&amp; symlink-dir lib node_modules/src/libraries"</code>, and redoing <code>npm i</code>. Out of all the solutions above, this method works best for me. Hope you like it too!</p>
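<p>You can see the whole trick end to end in a scratch folder. A sketch using a plain <code>ln -s</code>, which is what <code>symlink-dir</code> boils down to on Unix (the file and module names here are made up):</p>

```shell
cd "$(mktemp -d)"                      # scratch project
mkdir -p src node_modules
echo "module.exports = 42;" > src/answer.js
ln -s "$PWD/src" node_modules/src      # what the postinstall script does
node -e "console.log(require('src/answer.js'))"   # prints 42
```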
]]></content:encoded></item><item><title><![CDATA[One code liner in Python 🐍]]></title><description><![CDATA[During college, my lecturer gave me a task to find the shortest code to create this string using python:
****
***
**
*
**
***
****
So how you would do that? Here's the code in Python we end up:
for i in range(4, 1, -1):
    print('*' * i)
for i in ra...]]></description><link>https://blog.wellosoft.net/one-code-liner-in-python</link><guid isPermaLink="true">https://blog.wellosoft.net/one-code-liner-in-python</guid><category><![CDATA[Python]]></category><category><![CDATA[optimization]]></category><category><![CDATA[Beginner Developers]]></category><category><![CDATA[python beginner]]></category><category><![CDATA[Python 3]]></category><dc:creator><![CDATA[Wildan M]]></dc:creator><pubDate>Wed, 08 Sep 2021 15:52:26 GMT</pubDate><content:encoded><![CDATA[<p>During college, my lecturer gave me a task to find the shortest code to create this string using python:</p>
<pre><code>****
***
**
*
**
***
****
</code></pre><p>So how would you do that? Here's the Python code we ended up with:</p>
<pre><code class="lang-python"><span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> range(<span class="hljs-number">4</span>, <span class="hljs-number">1</span>, <span class="hljs-number">-1</span>):
    print(<span class="hljs-string">'*'</span> * i)
<span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> range(<span class="hljs-number">1</span>, <span class="hljs-number">5</span>):
    print(<span class="hljs-string">'*'</span> * i)
</code></pre>
<p>For those who wonder, <code>range(a, b, c)</code> creates an iterator from <code>a</code> to <code>b</code>, stepping by <code>c</code>. The <code>a</code> is inclusive but <code>b</code> is not. If <code>c</code> is 1, you can omit it since it's the default anyway. Thus <code>range(4, 1, -1)</code> generates <code>[4,3,2]</code> while <code>range(1, 5)</code> generates <code>[1,2,3,4]</code>.</p>
<p>And then we multiply <code>'*'</code> by that <code>i</code>, because multiplying a string in Python causes it to be repeated <code>i</code> times, e.g. <code>'*' * 3</code> generates <code>'***'</code>.</p>
<p>This is already quite short, in fact 😅 That's what's really awesome about Python, isn't it?</p>
<p>But wait, there's more! We can reduce our python code into one single <code>for</code> loop:</p>
<pre><code class="lang-python"><span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> range(<span class="hljs-number">1</span>, <span class="hljs-number">8</span>):
  <span class="hljs-keyword">if</span> (i &lt; <span class="hljs-number">4</span>):
    print(<span class="hljs-string">'*'</span> * (<span class="hljs-number">5</span> - i))
  <span class="hljs-keyword">else</span>:
    print(<span class="hljs-string">'*'</span> * (i - <span class="hljs-number">3</span>))
</code></pre>
<p>What does this code do? It transforms <code>[1,2,3,4,5,6,7]</code> into <code>[4,3,2,1,2,3,4]</code>, and then we take these values to multiply with <code>'*'</code>. If it takes a while to grasp where these values come from, I'll try to explain here:</p>
<pre><code class="lang-python"><span class="hljs-comment"># i &lt; 4</span>
i = <span class="hljs-number">1</span> =&gt; <span class="hljs-number">5</span> - <span class="hljs-number">1</span> = <span class="hljs-number">4</span>
i = <span class="hljs-number">2</span> =&gt; <span class="hljs-number">5</span> - <span class="hljs-number">2</span> = <span class="hljs-number">3</span>
i = <span class="hljs-number">3</span> =&gt; <span class="hljs-number">5</span> - <span class="hljs-number">3</span> = <span class="hljs-number">2</span>
<span class="hljs-comment"># else (i &gt;= 4)</span>
i = <span class="hljs-number">4</span> =&gt; <span class="hljs-number">4</span> - <span class="hljs-number">3</span> = <span class="hljs-number">1</span>
i = <span class="hljs-number">5</span> =&gt; <span class="hljs-number">5</span> - <span class="hljs-number">3</span> = <span class="hljs-number">2</span>
i = <span class="hljs-number">6</span> =&gt; <span class="hljs-number">6</span> - <span class="hljs-number">3</span> = <span class="hljs-number">3</span>
i = <span class="hljs-number">7</span> =&gt; <span class="hljs-number">7</span> - <span class="hljs-number">3</span> = <span class="hljs-number">4</span>
</code></pre>
<p>Okay, but why do we do this? It didn't seem to make our code any shorter, right?</p>
<p>That's why we need to do some math here. Enter the absolute function:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1631113620710/O2-9x7sQP.png" alt="image.png" /></p>
<p>How does this absolute function help us? Well, with some fitting on the equation, we get this graph:</p>
<p><a target="_blank" href="https://www.google.com/search?q=abs%28x+-+4%29+%2B+1"><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1631114170643/C1KPRGkmZ.png" alt="image.png" /></a></p>
<p><em>(click the image to see interactive version)</em></p>
<p>This graph does exactly what we want: it transforms <code>[1,2,3,4,5,6,7]</code> into <code>[4,3,2,1,2,3,4]</code> with a single equation. Math is powerful 💪.</p>
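<p>We can sanity-check that equation with a plain loop before wiring it into the pyramid code:</p>

```python
# abs(i - 4) + 1 should map 1..7 to 4,3,2,1,2,3,4
values = []
for i in range(1, 8):
    values.append(abs(i - 4) + 1)
print(values)  # [4, 3, 2, 1, 2, 3, 4]
```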
<p>This is what the code looks like after we implement the equation:</p>
<pre><code class="lang-py"><span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> range(<span class="hljs-number">1</span>, <span class="hljs-number">8</span>):
  print(<span class="hljs-string">'*'</span> * (abs(i - <span class="hljs-number">4</span>) + <span class="hljs-number">1</span>))
</code></pre>
<p>Luckily Python has <code>abs()</code> <a target="_blank" href="https://docs.python.org/3/library/functions.html#abs">built-in</a>, so no imports are needed. This reduces our code from 5 lines to 2 💪.</p>
<p>But can this be reduced further to a single line? Yes 😱</p>
<p>But before that, we have to move the <code>print()</code> call outside the loop. This can be done with a temporary variable that holds all the lines built up in the loop:</p>
<pre><code class="lang-py">output = []
<span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> range(<span class="hljs-number">1</span>, <span class="hljs-number">8</span>):
  output.append(<span class="hljs-string">'*'</span> * (abs(i - <span class="hljs-number">4</span>) + <span class="hljs-number">1</span>))
print(<span class="hljs-string">'\n'</span>.join(output))
</code></pre>
<p>Now we collect each loop value in a variable named <code>output</code> and print it just once at the end. Note that we use <code>'\n'.join(output)</code>, which converts the list to a string with <code>\n</code> between values. <code>\n</code> simply means a new line after each string we added during the loop.</p>
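<p>In isolation, <code>join</code> works like this:</p>

```python
rows = ['*', '**', '***']
# join glues the list items together with '\n' between them
print('\n'.join(rows))
# *
# **
# ***
```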
<p>But wait, doesn't this make our code more verbose? Enter Python's killer feature: list comprehensions.</p>
<p>A list comprehension is simply a loop written in a single line; it effectively turns a loop into a Python expression. In layman's terms, it allows us to convert this code:</p>
<pre><code class="lang-py">variable = []
<span class="hljs-keyword">for</span> item <span class="hljs-keyword">in</span> list:
  variable.append(expression(item))
</code></pre>
<p>to:</p>
<pre><code class="lang-py">variable = [expression(item) <span class="hljs-keyword">for</span> item <span class="hljs-keyword">in</span> list]
</code></pre>
<p>Three lines into one! Isn't that awesome? 😎</p>
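<p>Here's that same transformation on a small toy example, squaring a list of numbers (the names here are mine, not from the pyramid code):</p>

```python
nums = [1, 2, 3, 4]

# the verbose way
squares = []
for n in nums:
    squares.append(n * n)

# the list comprehension way, in one line
squares_lc = [n * n for n in nums]

print(squares)     # [1, 4, 9, 16]
print(squares_lc)  # [1, 4, 9, 16]
```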
<p>This is our final code after the list comprehension:</p>
<pre><code class="lang-py">print(<span class="hljs-string">'\n'</span>.join([<span class="hljs-string">'*'</span> * (abs(i - <span class="hljs-number">4</span>) + <span class="hljs-number">1</span>) <span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> range(<span class="hljs-number">1</span>, <span class="hljs-number">8</span>)]))
</code></pre>
<p>That's really short, isn't it? Of course, this code is not as readable as what we began with 😅 but hey, we do this for fun 💪 and tricks like these can also be useful for code optimization (if you're into that).</p>
<p>Anyway, if you're familiar with code golfing, you can get this down to 55 characters in Python by removing unneeded whitespace:</p>
<pre><code class="lang-python">print(<span class="hljs-string">'\n'</span>.join([<span class="hljs-string">'*'</span>*(abs(i<span class="hljs-number">-4</span>)+<span class="hljs-number">1</span>)<span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> range(<span class="hljs-number">1</span>,<span class="hljs-number">8</span>)]))
</code></pre>
<p>Can anyone beat that? 😅 Hope this is useful 💪</p>
]]></content:encoded></item><item><title><![CDATA[Does domcloud.io still worthy? (let's talk monolithic vs serverless)]]></title><description><![CDATA[If you compare domcloud.io vs popular rivals like vercel.com or netlify.com, DOM Cloud definitely get squashed up, there's no way to compare a hobby project with some really serious startup companies. Vercel and Netlify have one thing in common: they...]]></description><link>https://blog.wellosoft.net/does-domcloudio-still-worthy-lets-talk-monolithic-vs-serverless</link><guid isPermaLink="true">https://blog.wellosoft.net/does-domcloudio-still-worthy-lets-talk-monolithic-vs-serverless</guid><category><![CDATA[serverless]]></category><category><![CDATA[Cloud Computing]]></category><dc:creator><![CDATA[Wildan M]]></dc:creator><pubDate>Mon, 06 Sep 2021 14:16:01 GMT</pubDate><content:encoded><![CDATA[<p>If you compare <a target="_blank" href="https://domcloud.io">domcloud.io</a> vs popular rivals like <a target="_blank" href="https://vercel.com">vercel.com</a> or <a target="_blank" href="https://netlify.com">netlify.com</a>, DOM Cloud definitely gets squashed; there's no way to compare a hobby project with some really serious startup companies. Vercel and Netlify have one thing in common: they're abstractions over serverless products from tech giants, whether AWS or GCP. </p>
<p>For the reader: I hope you already know what monolithic and serverless basically are; if not, <a target="_blank" href="https://medium.com/ni-tech-talk/monolith-vs-microservices-vs-serverless-and-what-to-choose-for-your-business-needs-49d58b9e91f1">read here</a>. And in case you don't know about DOM Cloud, it's basically my own hobby project for hosting websites, and it runs on a monolithic architecture just like any other hosting business does.</p>
<h2 id="pros-and-cons-of-serverless">Pros and cons of serverless</h2>
<p>We definitely can't compare uptime quality between monolithic and serverless websites. Serverless wins on uptime and load resilience over monolithic no matter how the latter is set up, but this is also a trade-off: serverless is complex, which is why both of those services exist to make it easy for us.</p>
<p>Still, there are limitations. Serverless websites are immutable: you can't edit files once deployed, because serverless apps may already be replicated over multiple servers. Also, you can't have a persistent connection (WebSocket), and you have to make sure your server code is truly stateless, behaving identically no matter how its local files and memory are set up or changed.</p>
<p>This is why it's common to use Node.js for serverless apps: with Node.js you are forced to use an external database somewhere, even for simple things like sessions. This is very different from, say, PHP, where you just call <code>session_start()</code> and forget how it really works, because PHP loves to write things to files, just like in ancient times. This is why in practice you won't see any serverless apps written in PHP.</p>
<p>There are many other services that help lift the limitations of the serverless architecture, like the managed databases or authentication that come from <a target="_blank" href="https://firebase.google.com/">Firebase</a> or <a target="_blank" href="https://supabase.io/">Supabase</a>. You can also use <a target="_blank" href="https://pusher.com/">Pusher</a> for handling WebSocket notifications. These services are easy to get used to (I think) and can easily scale up once your project does.</p>
<p>But, at what cost?</p>
<h2 id="pricing-concerns">Pricing concerns</h2>
<p>With these specialized services, it's easy to get any serverless website up and running. No doubt, it's probably the future. They are also generous with their free plans. But once you go to production and scale up, prices become unpredictable. Just as a starting point, Vercel + Supabase costs about $45 a month. Okay, maybe $45 a month is cheap, but this will only go up as usage gets high. And remember, they're built on tech giants like AWS. I've heard people can easily rack up bills of thousands of dollars in AWS usage. Why would it be any different if you're behind an abstraction service that also runs on AWS?</p>
<p>Maybe I'm overstating things, because a company must be successful if it has, like, thousands of dollars in AWS bills. I think that's true if it's a tech company. But many aren't. I work with many people, and trust me, they just want their website up and running. You may be thinking that because these services are free to start, you can just leave things as they are once you lose the freelance contract? That would be foolish, I think. And remember, companies (that are not tech companies) have other bills to pay.</p>
<p>Now, what about monolithic? It's as simple as spinning up a VM. You can start with a $10 VM instance and pack in any software that you want. This takes skill, of course, but tools like Docker should handle it easily. With a monolithic setup, you bear the risk of downtime if there's a misconfiguration or a regional outage. But hey, it's cheaper, and the good thing is, you give the company <strong>an option</strong> to stay in the cloud or go on-premise. This is a big deal that's often overlooked, because you can't do that with serverless.</p>
<p>I may be biased writing this, but these are my concerns about going serverless. And <a target="_blank" href="https://twitter.com/chriscoyier/status/1432784930396835849">I'm not alone</a>.</p>
<p>So is domcloud.io still worth it? Yes, as far as I can see it's still useful. Still, it's no match for already hugely successful services like Vercel. But time will tell as I keep improving it (it's a solo hobby project anyway).</p>
]]></content:encoded></item><item><title><![CDATA[5 basic things you need to know about managing a Linux server]]></title><description><![CDATA[As a web developer, you may come up with the situation of needing to set up a Virtual Machine to host your website online. While it's great that we have a variety of managed cloud services to lift this complexity for us, it's not a bad thing if we wa...]]></description><link>https://blog.wellosoft.net/5-basic-things-you-need-to-know-about-managing-a-linux-server</link><guid isPermaLink="true">https://blog.wellosoft.net/5-basic-things-you-need-to-know-about-managing-a-linux-server</guid><category><![CDATA[Linux]]></category><category><![CDATA[server]]></category><category><![CDATA[hosting]]></category><dc:creator><![CDATA[Wildan M]]></dc:creator><pubDate>Mon, 23 Aug 2021 15:48:41 GMT</pubDate><content:encoded><![CDATA[<p>As a web developer, you may run into a situation where you need to set up a Virtual Machine to host your website online. While it's great that we have a variety of managed cloud services to lift this complexity for us, it doesn't hurt to learn the basics of spinning up our own server. In fact, I've come across this situation myself, and this kind of knowledge is essential to me, especially when running a small project all by yourself or on a tight budget.</p>
<p>Without further ado, let's start:</p>
<h2 id="1-use-a-web-hosting-control-panel">1. Use a web hosting control panel</h2>
<p>This is true especially if you aren't familiar with Linux tools or come from a Windows background. There are a lot of setup and maintenance hurdles that come with managing your own server, and you definitely don't want to do everything from memory using only the terminal! To ease this, we can install a control panel.</p>
<p>While there are some popular options like cPanel or Plesk, they come with strings (prices) attached. So we'll use a free one, and <a target="_blank" href="https://www.virtualmin.com/">Virtualmin</a>, to me, is just perfect.</p>
<p>The <a target="_blank" href="https://www.virtualmin.com/documentation/installation/automated">installation step</a> is easy too. I won't cover every detail here, but all you need to do is run these commands (as root):</p>
<pre><code class="lang-sh">wget http://software.virtualmin.com/gpl/scripts/install.sh
chmod 0700 ./install.sh
sh ./install.sh --bundle LEMP
</code></pre>
<p>Do you see that? I put <code>--bundle LEMP</code> because I prefer to install Nginx over Apache. The choice is up to you, but overall Nginx is faster than Apache thanks to its simple configuration and event-driven architecture, and you need a web server anyway to handle the heavy lifting of managing your app processes. </p>
<p>I'll discuss these later, but one thing you need to know is that installing Virtualmin (or any of these panels) <strong>will destroy any existing setup on your server</strong>, so only install it on a freshly installed VM or server.</p>
<p>But then, what happens after the installation? Head to <code>https://&lt;Your IP address&gt;:10000</code> to open the Webmin portal (yes, Virtualmin is part of Webmin). </p>
<p>You may encounter some SSL warnings, but skip them, it's fine. Log in with the root account and the password <a target="_blank" href="https://www.virtualmin.com/documentation/installation/automated#toc-setting-a-root-password-KTINPhGW">you've set up</a>, then follow the initial setup and create a new domain. Voila, you're done! No need to wrestle with creating a non-root user in the terminal and trying to keep everything in sync overnight!</p>
<p>Sounds good? Let's continue.</p>
<h2 id="2-know-how-to-run-your-app-properly">2. Know how to run your app properly</h2>
<p>Chances are you already know how to start your own local server. You may do the same thing on your VM, but that would be a bad, bad idea. For an online server that needs to run 24 hours a day, 7 days a week, you need some sort of proxy that protects your app processes from fatal crashes or high floods of traffic, and this is what Nginx (or Apache) really does. These proxies listen on the actual HTTP(S) ports, 80 and 443, and forward incoming traffic to the local port or socket your app actually listens on. Web proxies are also really good at dividing traffic based on domains (domain A goes to this app, domain B goes to that app) and at reporting or logging crashes of your app. Web proxies are just that awesome, and you need to understand how to configure them properly, even though Virtualmin already does the heavy lifting for you.</p>
<p>After installing Nginx via Virtualmin, you might be surprised that it already comes with PHP support by default, the way usual web hosting does. Here's what the Nginx config for a specific domain looks like:</p>
<pre><code><span class="hljs-section">server</span> {
    <span class="hljs-attribute">server_name</span> example.com;
    <span class="hljs-attribute">listen</span> <span class="hljs-number">1.2.3.4</span>;
    <span class="hljs-attribute">listen</span> <span class="hljs-number">1.2.3.4:443</span> ssl;
    <span class="hljs-attribute">root</span> /home/username/public_html;
    <span class="hljs-attribute">index</span> index.html index.htm index.php;
    <span class="hljs-attribute">access_log</span> /var/log/virtualmin/example.com_access_log;
    <span class="hljs-attribute">error_log</span> /var/log/virtualmin/example.com_error_log;
    <span class="hljs-attribute">location</span> <span class="hljs-regexp">~ \.php(/|$)</span> {
        <span class="hljs-attribute">try_files</span> <span class="hljs-variable">$uri</span> =<span class="hljs-number">404</span>;
        <span class="hljs-attribute">fastcgi_pass</span> localhost:<span class="hljs-number">8001</span>;
    }    
    <span class="hljs-attribute">ssl_certificate</span> /home/username/ssl.combined;
    <span class="hljs-attribute">ssl_certificate_key</span> /home/username/ssl.key;
}
</code></pre><p>You can see that it has the domain you want to listen to (<code>example.com</code>), the server IP to listen on (<code>1.2.3.4</code>), the base directory (<code>/home/username/public_html</code>), and so on. What's essential to know about Nginx is that by default it only serves as a static web server. To make the most of its features you need to understand additional configs, like <code>fastcgi_pass</code> to enable dynamic processing of <code>.php</code> files.</p>
<p>PHP is great, but not for all of us. What about running Node.js or Python apps? You might be tempted to use <code>proxy_pass</code>, but that won't handle app startup and crashes for us. Luckily we have a better option: <a target="_blank" href="https://www.phusionpassenger.com/">Phusion Passenger</a>.</p>
<p>What is that? It's an app server that can manage and automatically run Node.js, Python, Ruby, well, all kinds of server apps you might need to run on your server, and it has awesome integration with Nginx. </p>
<p>After you follow <a target="_blank" href="https://www.phusionpassenger.com/docs/tutorials/installation/node/">the installation step</a>, all you need to do is add these options in <a target="_blank" href="http://nginx.org/en/docs/beginners_guide.html">NginX config</a>:</p>
<pre><code>    <span class="hljs-attribute">root</span> /home/username/public_html/public;
    <span class="hljs-attribute">passenger_enabled</span> <span class="hljs-literal">on</span>;
    <span class="hljs-comment"># for easier debugging!</span>
    <span class="hljs-attribute">passenger_friendly_error_pages</span> <span class="hljs-literal">on</span>;
</code></pre><p>It will magically find any <a target="_blank" href="https://www.phusionpassenger.com/library/config/nginx/reference/#passenger_startup_file">relevant startup file</a> (like <code>app.js</code> or <code>passenger_wsgi.py</code>) in the parent of the root directory (<code>public_html</code>) and boot the relevant process (of course, you need to already have Node.js or whatever language runtime you want to run, and to have already installed <code>node_modules</code> or whatever dependencies your app needs beforehand).</p>
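<p>As an example of such a startup file, here's a minimal <code>passenger_wsgi.py</code> sketch for a Python app (any WSGI-compatible callable named <code>application</code> should do; this exact handler is just an illustration, not something Passenger requires verbatim):</p>

```python
# passenger_wsgi.py - minimal WSGI entry point that Passenger can boot
def application(environ, start_response):
    # every WSGI app receives the request environ and a start_response callable
    body = b'Hello from Passenger!\n'
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]
```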
<p>Can you run your app now? Great. Let's continue.</p>
<h2 id="3-overcoming-dns-and-ssl-problems">3. Overcoming DNS (and SSL) problems</h2>
<p>Your app runs on a public IP address now. Great! But nobody (not even Google) wants to remember your IP address. You need to buy a domain. I won't make recommendations here, so go buy a domain from whatever registrar or hosting service you're already familiar with. What I do need to tell you is that after you buy a domain, you need to point it to your server's IP address. This is done under "DNS Management": insert your server's IP address as an A record (and an AAAA record if you care about IPv6), and that's it! </p>
<p>But wait. There's more...</p>
<p>DNS caches everything according to the TTL (Time To Live), and by default for most services it's set to about 4 hours (14400 seconds). You certainly don't want to wait 4 hours just to see your shiny new website, right?</p>
<p>While you can just bypass DNS using the <a target="_blank" href="https://man7.org/linux/man-pages/man5/hosts.5.html">hosts file</a>, it's much better to go to the <a target="_blank" href="https://dns.google/">Google DNS resolver</a> and see if Google itself picks up your new domain. It usually works the first time without flushing their cache (if it doesn't, something may be wrong with your registrar?!).</p>
<p>Once you've checked that, if the Google DNS resolver sees your IP address but your device still won't pick it up, you can set your device's DNS resolver to 8.8.8.8 and 8.8.4.4 (the <a target="_blank" href="https://developers.google.com/speed/public-dns">Google Public DNS</a>), and after that it will certainly work. Yay!</p>
<p>But wait. There's more (again!)...</p>
<p>You can access your site using http:// but not https://, right? HTTPS is very important nowadays, so you can't ignore it. To get it, you need a signed SSL certificate. I won't go into detail about why it works this way, but what I need to tell you is that there's a certificate authority called <a target="_blank" href="https://letsencrypt.org/">Let's Encrypt</a> that will issue a signed SSL certificate for your domain for free. How?</p>
<p>Assuming your Nginx configuration is already correct, with Virtualmin you can go to the <a target="_blank" href="https://www.youtube.com/watch?v=bUe9dJOnUV0">SSL certificate panel</a> and send a request to Let's Encrypt in a few clicks. Sounds simple? Yes, and not just that: you can also make Virtualmin renew your SSL certificate automatically. Awesome, isn't it?</p>
<p>Now your website should be online and can start gaining users. Congrats!</p>
<h2 id="4-understand-memory-management-and-when-to-panic">4. Understand memory management (and when to panic)</h2>
<p>It's a regular evening and your server has been running for a few weeks. You look at the Virtualmin memory gauges and start wondering. </p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1gy9gjpdsm44l1tgp0qf.png" alt="image" /></p>
<p>Usually, this will come in two situations:</p>
<h3 id="situation-a-your-server-is-wasting-resources">Situation A: Your server is wasting resources</h3>
<p>You notice that your server only uses around 40% of its memory, so you think you could cut the monthly bill in half by reducing its memory. Before you do, let me tell you: don't! Unless you're running into a budget problem.</p>
<p>Even though it seems your system is wasting the excess memory, it's actually not. Linux will always <a target="_blank" href="https://www.linuxatemyram.com/">use most of your RAM</a> because of the RAM cache. You might wonder, what's the deal with that?</p>
<p>Well, most operations on Linux rely heavily on files. The kernel is smart: when a process is done reading or writing a file, the data is kept temporarily in the RAM cache, so when another process reads it next time, it won't have to wait for the storage disk to actually perform the read, dramatically improving overall system performance.</p>
<p>I won't go into more detail on this, but that's the whole point. Cutting your free memory will severely decrease overall performance. Don't reduce it unless you have a budgeting problem.</p>
<h3 id="situation-b-your-server-is-heating">Situation B: Your server is running hot</h3>
<p>Some users report slower loading times, and error pages show up significantly more often. How do you confirm what's going wrong? First, check the load. If the load metric is in the tens, your server can't keep up with the traffic. Then check your memory consumption with <a target="_blank" href="https://man7.org/linux/man-pages/man1/free.1.html">free -m</a>. The result probably looks like this:</p>
<pre><code>              <span class="hljs-string">total</span>        <span class="hljs-string">used</span>        <span class="hljs-string">free</span>      <span class="hljs-string">shared</span>  <span class="hljs-string">buff/cache</span>   <span class="hljs-string">available</span>
<span class="hljs-attr">Mem:</span>           <span class="hljs-number">1817        </span><span class="hljs-number">1327          </span><span class="hljs-number">67</span>          <span class="hljs-number">72</span>         <span class="hljs-number">421</span>         <span class="hljs-number">268</span>
<span class="hljs-attr">Swap:</span>             <span class="hljs-number">0</span>           <span class="hljs-number">0</span>           <span class="hljs-number">0</span>
</code></pre><p>If available memory is low, it means your server is struggling to find more memory. You could increase the VM's memory right now, but turning off your VM during high traffic is a bad idea, so what you can do instead is <a target="_blank" href="https://www.digitalocean.com/community/tutorials/how-to-add-swap-space-on-ubuntu-16-04">create a swap file</a>.</p>
<p>A swap file is basically additional memory backed by the storage disk. You might think it's a good idea to add a large swap file so your memory problems suddenly go away, but no: too much swapping can render the system unstable too.</p>
<p>A better way to think about a swap file is as a band-aid. During temporary traffic surges, Linux can move non-essential memory pages to the swap file so the actual RAM can be better spent on essential programs and frequently used caches. For this to work, I recommend a swap file of about 1 or 2 GB and a <a target="_blank" href="https://linuxhint.com/understanding_vm_swappiness/">swappiness</a> of around 30. </p>
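<p>The linked tutorial walks through it, but the gist on most modern distros looks like this (run as root; the size, path, and swappiness value here are just the ones suggested above):</p>

```sh
# allocate a 2 GB swap file (use dd if fallocate is unavailable)
fallocate -l 2G /swapfile
chmod 600 /swapfile   # swap must not be readable by other users
mkswap /swapfile      # format it as swap space
swapon /swapfile      # enable it immediately
# make it survive reboots and lower the swap eagerness
echo '/swapfile none swap sw 0 0' >> /etc/fstab
sysctl vm.swappiness=30
```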
<p>But if you've already done all this and still have low memory availability, it's time to add more actual memory.</p>
<p>I've talked a lot about memory. While it's the most likely culprit during high load, sometimes there's another factor, like a rogue process running in the background, or maybe there just aren't enough CPU cores in your VM. A look at <a target="_blank" href="https://man7.org/linux/man-pages/man1/top.1.html">top</a> can help you a lot in diagnosing this.</p>
<h2 id="5-protect-your-server-with-basic-security-and-backup-knowledge">5. Protect your server with basic security and backup knowledge</h2>
<p>This is the final piece of advice I need to give you: security and backups are so easy to overlook, yet most people don't realize how important they are until it's too late.</p>
<p>Good basic security for any VM serving the internet is to only allow specific inbound ports like SSH, HTTP(S), and any related port you might use. Most cloud providers already have firewall configs, so it's good to use them if available. If not, you can use <a target="_blank" href="https://firewalld.org/">firewalld</a> or <a target="_blank" href="https://linux.die.net/man/8/iptables">iptables</a> to safeguard your traffic at the kernel level. If you're like me and use Webmin, you might want to change its port to something else, as leaving the default 10000 would probably make it vulnerable to the botnets swarming the internet.</p>
<p>If you have some memory and computation to spare, it might be a good idea to install software that prevents brute-force attacks, like <a target="_blank" href="https://www.fail2ban.org/">Fail2Ban</a>. Or maybe just disable password-based login in SSH and disable remote database access when you don't need it.</p>
<p>And lastly, back up frequently, as you're probably not aware of how badly things can get messed up in the future. The good news is most cloud providers have a way to make a "snapshot" of your VM and allow you to schedule it weekly. It will probably cost you an additional few pennies, but it's totally worth it.</p>
<h2 id="closing">Closing</h2>
<p>That's a lot of topics covered. You might wonder where I got all this knowledge. Well, I have some backstory that you might want to read, go <a target="_blank" href="https://dev.to/willnode/i-generally-don-t-satisfied-with-all-web-hosting-out-there-2b63">check it out</a> :)</p>
]]></content:encoded></item></channel></rss>