<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom"><title>Theo Jones</title><link href="/" rel="alternate"/><link href="/feed.xml" rel="self"/><id>/</id><updated>2026-02-25T00:00:00+00:00</updated><entry><title>Experience Using Opencode on the Latest Models</title><link href="/posts/experience-using-opencode-latest-models/" rel="alternate"/><published>2026-02-25T00:00:00+00:00</published><updated>2026-02-25T00:00:00+00:00</updated><author><name>Theo</name></author><id>tag:None,2026-02-25:/posts/experience-using-opencode-latest-models/</id><summary type="html">&lt;p&gt;Thoughts on using Opencode with modern LLMs like Kimi 2.5 for coding projects and the addictive nature of AI-assisted programming.&lt;/p&gt;</summary><content type="html">&lt;p&gt;I've been experimenting more with the latest LLM models for coding. And it's pretty impressive how far things have come, and how these tools are pretty impressive.&lt;/p&gt;
&lt;p&gt;I've mostly been using the Kimi 2.5 model with Opencode as the coding agent. I still find that mix pretty great. I think the whole vibe coding/AI assisted programming workflow that Opencode and similar encourage might not be the best for quality code, but it is pretty addictive seeing that type of rapid progress. Until you get to the very highest (expensive) tier of Claude/Anthropic and OpenAI models, Kimi performs basically on par or better than what the biggest companies offer.&lt;/p&gt;
&lt;p&gt;And these coding agents can take care of a lot of the boring drudgework of programming. They are good enough right now that I don't have to spend too much time manually intervening and fixing what the LLM did — these tools are getting pretty accurate.&lt;/p&gt;
&lt;p&gt;I've spent many hours working on a project, tweaking it back and forth, with the main thing stopping me from spending even more time is the fact that I have to pay for the credits to run inference for the models. Once you spend the time working through all of quirks these tools have gotten pretty smooth as far as workflow goes. It's genuinely &lt;em&gt;fun&lt;/em&gt; to do this on the recent LLM models that have come out.&lt;/p&gt;
&lt;p&gt;Cost right now is the only real problem, these things will burn through tokens by the millions. It's pretty clear that the $20/mo for coding agents tier from OpenAI etc (even though the limits are being tightened) are being subsidized pretty aggressively. When you compare the amount of time you get on a coding agent from such a plan vs what open source alternatives cost, OpenAI etc can't be making money from the coding agent offerings. On the other hand, it's probably cheaper to use a self hosted frontend (OpenWebUI etc with a hosted inference API) than it is to pay for a paid tier of ChatGPT.&lt;/p&gt;
&lt;p&gt;I've also noticed that Opencode and other open source agents/frontends are very sensitive to token output performance. Using a somewhat more expensive inference provider that provides fast performance will improve the experience quite a bit. Switching API providers basically fixed some of the issues I was having with the model freezing etc.&lt;/p&gt;
&lt;p&gt;The project I've been working on as part of my testing is this &lt;a href="https://git.selfhosted.onl/theo/marginleaf"&gt;https://git.selfhosted.onl/theo/marginleaf&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;It's a personal blogging CMS. It can do the typical blogging engine things, but instead of a frontend editing interface, I created an API, and I built some tools that allow me to fully manage it from Open WebUI, which opens some pretty neat possibilities. It feels like a somewhat interesting possibility to have some of these chat tools getting good enough that they can be the main interface with an application — instead of a more traditional web UI.&lt;/p&gt;
&lt;p&gt;In particular, the Open WebUI tools can be found here &lt;a href="https://git.selfhosted.onl/theo/marginleaf/src/branch/main/openwebui_tools"&gt;https://git.selfhosted.onl/theo/marginleaf/src/branch/main/openwebui_tools&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;But mostly I created it because it's fun to work on that type of thing.&lt;/p&gt;</content><category term="general"/><category term="opencode"/><category term="llm"/><category term="coding"/><category term="ai"/><category term="kimi"/></entry><entry><title>Kimi 2.5 and Self-Hosting Open WebUI</title><link href="/posts/kimi/" rel="alternate"/><published>2026-02-20T00:00:00+00:00</published><updated>2026-02-20T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2026-02-20:/posts/kimi/</id><summary type="html">&lt;p&gt;Open source alternative to ChatGPT and setting up a self-hosted Open WebUI frontend.&lt;/p&gt;</summary><content type="html">&lt;p&gt;Been poking around with the Kimi 2.5 LLM and also started self-hosting &lt;a href="https://github.com/open-webui/open-webui"&gt;Open WebUI&lt;/a&gt; on my server (a self-hosted ChatGPT-style web frontend for LLM APIs).&lt;/p&gt;
&lt;p&gt;Kimi probably isn't the best model on the market, but Kimi 2.5 is the first time I've used a truly open source model that feels to be vaguely in the same category of performance as ChatGPT, etc. And I don't really feel much of a penalty using it vs ChatGPT.&lt;/p&gt;
&lt;p&gt;Of course, running it directly is &lt;em&gt;way&lt;/em&gt; beyond what any device I have can do reasonably well.&lt;/p&gt;
&lt;p&gt;But there are already API providers around offering it with very favorable privacy and data retention policies, so I'm probably going to switch to using it over ChatGPT.&lt;/p&gt;
&lt;p&gt;I wouldn't recommend using the chat/API offered by the model's creator—I don't really trust that company.&lt;/p&gt;
&lt;p&gt;If I self-host the front end, all of the actually sensitive data like chat logs etc are stored on my server.&lt;/p&gt;
&lt;p&gt;Open WebUI is pretty cool. It works almost as well as ChatGPT does. I've run into some issues with the model occasionally freezing during processing, but I've occasionally seen that type of thing with other LLM providers.&lt;/p&gt;
&lt;p&gt;It has a search integration that works with the model so it can web search etc. It's pretty customizable.&lt;/p&gt;
&lt;p&gt;I quickly created a custom tool that the model can use which queries the &lt;a href="https://openalex.org"&gt;OpenAlex&lt;/a&gt; API to find open access academic articles. The code for that can be found here &lt;a href="https://git.selfhosted.onl/theo/openwebui-tools-skills/src/branch/main"&gt;https://git.selfhosted.onl/theo/openwebui-tools-skills/src/branch/main&lt;/a&gt;&lt;/p&gt;</content><category term="general"/><category term="llm"/><category term="kimi"/><category term="open-webui"/><category term="self-hosting"/><category term="ai"/><category term="open-source"/></entry><entry><title>Pinning Footer to Bottom of Page in Bootstrap Studio</title><link href="/posts/pinning-footer-to-bottom/" rel="alternate"/><published>2026-01-20T00:00:00+00:00</published><updated>2026-01-20T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2026-01-20:/posts/pinning-footer-to-bottom/</id><summary type="html">&lt;p&gt;Some of the themes that come with bootstrap studio don't have the footer pinned to the bottom of the page.&lt;/p&gt;</summary><content type="html">&lt;p&gt;Some of the themes that come with bootstrap studio don't have the footer pinned to the bottom of the page.&lt;/p&gt;
&lt;p&gt;The instructions in this link are helpful here: &lt;a href="https://forum.bootstrapstudio.io/t/footer-always-at-the-bottom-of-the-page/7517"&gt;https://forum.bootstrapstudio.io/t/footer-always-at-the-bottom-of-the-page/7517&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Basically, create custom sitewide CSS (ie. through a .css file under the styles folder of the design), with the following:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nt"&gt;body&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;display&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;flex&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;flex-direction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;column&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;height&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="kt"&gt;vh&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="nt"&gt;footer&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;margin-top&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;auto&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</content><category term="general"/><category term="bootstrap"/><category term="css"/><category term="tutorial"/></entry><entry><title>Stenomasks and Speech to Text</title><link href="/posts/stenomasks-and-speech-to-text/" rel="alternate"/><published>2025-01-20T00:00:00+00:00</published><updated>2025-01-20T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2025-01-20:/posts/stenomasks-and-speech-to-text/</id><summary type="html">&lt;p&gt;For a while I've had this StenoMask thing, which is a sound isolated box that can be talked into for speech recognition.&lt;/p&gt;</summary><content type="html">&lt;p&gt;For a while I've had this StenoMask thing, which is a sound isolated box that can be talked into for speech recognition. I think the notational thing it's commonly used for is court reporters speaking into it for notes that can be transcribed later. Of course, my use case with it is writing without a keyboard and similar.&lt;/p&gt;
&lt;p&gt;When I first started experimenting with it, I found that it was really hard to get any kind of acceptable accuracy with speech recognition software.&lt;/p&gt;
&lt;p&gt;I've been trying it again now. Speech recognition software has gotten to the point where I can talk to it normally and it basically just works when transcribing.&lt;/p&gt;
&lt;p&gt;Which makes the thing actually useful for me now.&lt;/p&gt;
&lt;p&gt;This is what I am using https://whispertyping.com/&lt;/p&gt;
&lt;p&gt;It would be interesting to try to give Dragon NaturallySpeaking, which is used by a lot of formal disability accommodation places, a try again. It's what psychologists and stuff have recommended for me for some of the relevant disabilities I have. I just haven't been able to get good accuracy out of past versions of it. Dragon is very expensive, as in hundreds of dollars, so doesn't feel worth it to give it another try.&lt;/p&gt;</content><category term="general"/><category term="accessibility"/><category term="speech-to-text"/><category term="tools"/></entry><entry><title>Philadelphia Chinatown (2023 Oct)</title><link href="/posts/philadelphia-chinatown/" rel="alternate"/><published>2023-10-20T00:00:00+00:00</published><updated>2023-10-20T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2023-10-20:/posts/philadelphia-chinatown/</id><summary type="html">&lt;p&gt;Photos from Philadelphia's Chinatown&lt;/p&gt;</summary><content type="html">&lt;p&gt;While taking these photos, I saw a lot of signage about a proposed stadium for the 76ers.&lt;/p&gt;
&lt;p&gt;Most of this was in opposition (I am not informed enough to give a direct opinion regarding the issue).&lt;/p&gt;
&lt;p&gt;&lt;img alt="Chinatown Photo" src="/images/chinatown/PhotoLibrary__2023__10__L1010864.jpg"&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt="Chinatown Photo" src="/images/chinatown/PhotoLibrary__2023__10__L1010870.jpg"&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt="Chinatown Photo" src="/images/chinatown/PhotoLibrary__2023__10__L1010876.jpg"&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt="Chinatown Photo" src="/images/chinatown/PhotoLibrary__2023__10__L1010859.jpg"&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt="Chinatown Photo" src="/images/chinatown/47887819.jpg"&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt="Chinatown Photo" src="/images/chinatown/PhotoLibrary__2023__10__L1010881.jpg"&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt="Chinatown Photo" src="/images/chinatown/PhotoLibrary__2023__10__L1010883.jpg"&gt;&lt;/p&gt;</content><category term="general"/><category term="photography"/><category term="street-photography"/><category term="philadelphia"/><category term="chinatown"/></entry><entry><title>2023 Disposable Camera Roll — Center City Philadelphia</title><link href="/posts/disposable-camera-center-city/" rel="alternate"/><published>2023-10-15T00:00:00+00:00</published><updated>2023-10-15T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2023-10-15:/posts/disposable-camera-center-city/</id><summary type="html">&lt;p&gt;Photos from various photowalks near Center City Philadelphia&lt;/p&gt;</summary><content type="html">&lt;p&gt;Photos from various photowalks near Center City Philadelphia.&lt;/p&gt;
&lt;p&gt;&lt;img alt="Center City Photo" src="/images/photography/disposable-oct-2023/PhotoLibrary__1970__01__000538770002.jpg"&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt="Center City Photo" src="/images/photography/disposable-oct-2023/PhotoLibrary__1970__01__000538770014.jpg"&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt="Center City Photo" src="/images/photography/disposable-oct-2023/PhotoLibrary__1970__01__000538770001.jpg"&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt="Center City Photo" src="/images/photography/disposable-oct-2023/PhotoLibrary__1970__01__000538770023.jpg"&gt;&lt;/p&gt;</content><category term="general"/><category term="photography"/><category term="film"/><category term="philadelphia"/><category term="street-photography"/></entry><entry><title>ChatGPT Makes Automation Symmetrical with Doing</title><link href="/posts/chatgpt-systems-admin-automation/" rel="alternate"/><published>2023-05-23T00:00:00+00:00</published><updated>2023-05-23T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2023-05-23:/posts/chatgpt-systems-admin-automation/</id><summary type="html">&lt;p&gt;One of the clearest implications of ChatGPT for systems administrators is that it makes automating a task almost symmetrical with doing a task.&lt;/p&gt;
&lt;p&gt;On the new file server I use for personal projects (dedicated server with a NVMe SSD boot drive and four hard drives as secondary file storage drives …&lt;/p&gt;</summary><content type="html">&lt;p&gt;One of the clearest implications of ChatGPT for systems administrators is that it makes automating a task almost symmetrical with doing a task.&lt;/p&gt;
&lt;p&gt;On the new file server I use for personal projects (dedicated server with a NVMe SSD boot drive and four hard drives as secondary file storage drives, I recently did a reinstall of Debian. I set this server up with the hard drives in a BTRFS raid 5. I installed Docker on it, and I set up the Apache web server to make the files on that server public. Cloudflare Tunnel was used to put that Apache server behind SSL.&lt;/p&gt;
&lt;p&gt;I took quick and rough notes on what commands were used and what may vary between servers and had ChatGPT create an automation script in Python.&lt;/p&gt;
&lt;p&gt;The notes can be found here
&lt;a href="https://gist.github.com/theopjones/a7f2b6ba17f3de23826f688f0a87d01d"&gt;https://gist.github.com/theopjones/a7f2b6ba17f3de23826f688f0a87d01d&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The prompt I used is&lt;/p&gt;
&lt;blockquote&gt;&lt;/blockquote&gt;
&lt;p&gt;Create a python script to automate the server setup task in the following notes/log of a manual setup. Assume that the python script is running as root. In the case of commands which require manual intervention, wait for the user to conduct the manual intervention, the command should be started as part of the script.&lt;/p&gt;
&lt;p&gt;The result of ChatGPT was the following, which is good enough to make this setup easily reproducible across servers or to document with code what the setup was so that it can be easily reproduced.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://gist.github.com/theopjones/6147770b550356e55d209e67549fb948"&gt;https://gist.github.com/theopjones/6147770b550356e55d209e67549fb948&lt;/a&gt;&lt;/p&gt;</content><category term="Notes"/></entry><entry><title>I’m looking for work</title><link href="/posts/im-looking-for-work/" rel="alternate"/><published>2023-05-23T00:00:00+00:00</published><updated>2023-05-23T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2023-05-23:/posts/im-looking-for-work/</id><summary type="html">&lt;p&gt;I was recently laid off from my previous company.&lt;/p&gt;
&lt;p&gt;I’m a seasoned IT and customer service professional with over five years of experience. My skills extend from software deployment and support to Linux administration and Python scripting for automation.&lt;/p&gt;
&lt;p&gt;I’ve acted as an administrator for major SaaS platforms …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I was recently laid off from my previous company.&lt;/p&gt;
&lt;p&gt;I’m a seasoned IT and customer service professional with over five years of experience. My skills extend from software deployment and support to Linux administration and Python scripting for automation.&lt;/p&gt;
&lt;p&gt;I’ve acted as an administrator for major SaaS platforms such as Google Workspace, Docusign, email marketing tools (PersistIQ, ActivePipe), CRMs (CopperCRM, Contactually, Follow Up Boss), and Okta, effectively resolving email infrastructure issues. Also, I’ve offered on-call and after-hours support for urgent user requests.&lt;/p&gt;
&lt;p&gt;My proficiency in open-source platforms includes managing LAMP + Nginx servers, working with cloud compute/VPS hosting platforms, and utilizing Linux for desktop and server projects. I have automated tasks using Python and other scripting languages, focusing on account creation, data migrations, and infrastructure management. Additionally, I’ve used low-code platforms like Zapier, and have some familiarity with the Dell Boomi Platform.&lt;/p&gt;
&lt;p&gt;One noteworthy accomplishment is automating most of the user onboarding process, allowing accelerated growth without increasing IT staff. I’ve also efficiently transitioned data from one CRM system to another, leveraging APIs to rebuild account environments.&lt;/p&gt;
&lt;p&gt;I am adept at defining requirements with software engineers and vendors for new product rollouts. I am well-versed in IT security, including implementing and documenting new security processes and mitigating threats.&lt;/p&gt;
&lt;p&gt;My experience with support ticketing and project management systems spans Service Cloud, atSpoke, Jira, and Asana.&lt;/p&gt;
&lt;p&gt;Furthermore, I hold degrees in Geography and Ecology and Evolutionary Biology from the University of Arizona, with a focus on geographic information systems. I’ve tutored STEM and geography subjects and have experience in GIS and scientific data analysis from internships.&lt;/p&gt;
&lt;p&gt;My desired salary for a new role is $85,000/yr, though I’m open to $60,000-$85,000 depending on the total compensation package, the nature of the employer, and the status of my other interviews. While I prefer a W2 role, I’m also open to contract-to-hire and independent contractor status, and am available for freelance work that doesn’t conflict with full-time employment.&lt;/p&gt;
&lt;p&gt;For more information, please reach out to me by email tjones2@fastmail.com or through my LinkedIn profile &lt;a href="https://www.linkedin.com/in/theodore-jones-7b89b7269/"&gt;https://www.linkedin.com/in/theodore-jones-7b89b7269/&lt;/a&gt;&lt;/p&gt;</content><category term="Blog"/></entry><entry><title>Setting up GoBlog on FreeBSD</title><link href="/posts/setting-up-goblog-on-freebsd/" rel="alternate"/><published>2023-03-27T00:00:00+00:00</published><updated>2023-03-27T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2023-03-27:/posts/setting-up-goblog-on-freebsd/</id><summary type="html">&lt;p&gt;&lt;a href="https://goblog.app/"&gt;GoBlog&lt;/a&gt; is a blogging engine that I have used on my personal blog, and various other personal projects. I’m going to do a walkthrough of how to set this up on a FreeBSD server.&lt;/p&gt;
&lt;p&gt;If you want a quick TLDR, here is a shell script that automatically spins up …&lt;/p&gt;</summary><content type="html">&lt;p&gt;&lt;a href="https://goblog.app/"&gt;GoBlog&lt;/a&gt; is a blogging engine that I have used on my personal blog, and various other personal projects. I’m going to do a walkthrough of how to set this up on a FreeBSD server.&lt;/p&gt;
&lt;p&gt;If you want a quick TLDR, here is a shell script that automatically spins up GoBlog. It doesn’t set up a jail or other container, but it can be used in one.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://gist.github.com/theopjones/e09c9713c10f4000d154de50c438d2ba"&gt;https://gist.github.com/theopjones/e09c9713c10f4000d154de50c438d2ba&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Its a blogging engine with fairly few users, and I wouldn’t recommend it for important business websites, or people who aren’t at least somewhat technically oriented and who know their way around UNIX-like operating systems.&lt;/p&gt;
&lt;p&gt;But for the technically inclined, it makes a good personal blog. It is very performant and supports a lot of interesting social features, including most of the IndieWeb standards. It can also (with some limitations), talk to Mastedon and other similar ActivityPub using services, and allow these social services to subscribe to your blog.&lt;/p&gt;
&lt;p&gt;I previously had my personal blog on a Debian home server, using Docker for containerization.&lt;/p&gt;
&lt;p&gt;I’ve discussed an overview of this setup here&lt;/p&gt;
&lt;p&gt;&lt;a href="https://theopjones.blog/notes/2022/09/2022-09-12-oxjfr"&gt;https://theopjones.blog/notes/2022/09/2022-09-12-oxjfr&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://theopjones.blog/posts/2022/09/2022-09-17-exlan"&gt;https://theopjones.blog/posts/2022/09/2022-09-17-exlan&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Unfortunately, my new apartment doesn’t have any internet with the fast upload speeds needed for this type of home server setup, so I’m moving my setup to a dedicated server.&lt;/p&gt;
&lt;p&gt;I’ve decided to go with FreeBSD for this setup because it has a lot of powerful features, and in my opinion is a often lot more streamlined and elegant than Linux in how it handles things.&lt;/p&gt;
&lt;p&gt;I’d recommend spinning up a jail to act as a container to separate this setup from the rest of your system, particularly if you want to run more than one service on your server/VPS.&lt;/p&gt;
&lt;p&gt;In the future, I’ll write up instructions and a shell script on how to build this in a jail and set up a reverse proxy with SSL for this (either Caddy or Nginx would make a good fit for reverse proxy).&lt;/p&gt;
&lt;p&gt;There are multiple helper tools to set this up. I like &lt;a href="https://bastillebsd.org/"&gt;BastilleBSD&lt;/a&gt; for this role.&lt;/p&gt;
&lt;p&gt;Likely because it is a small blogging engine without very many users, there isn’t a FreeBSD port or package for this, so, we will need to compile it from the Git repo.&lt;/p&gt;
&lt;p&gt;We will need the following FreeBSD packages to do this &lt;code&gt;go-devel git gcc sqlite3 bash&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;GoBlog can also use &lt;code&gt;tor&lt;/code&gt; for creating a .onion service for site visitors who want additional privacy when viewing your blog.&lt;/p&gt;
&lt;p&gt;I have created a Python script (discussed later) to help with generating a config file, if you want to use that, you will also need &lt;code&gt;python3 py39-yaml&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;See the following command for how to install all of these packages&lt;/p&gt;
&lt;p&gt;&lt;code&gt;pkg install go-devel git gcc sqlite3 bash tor python3 py39-yaml&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;To clone the Goblog source code from Git, run the following command&lt;/p&gt;
&lt;p&gt;&lt;code&gt;git clone https://github.com/jlelse/GoBlog.git&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Change directory into the newly downloaded source code repo.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;cd GoBlog&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Build the GoBlog source code&lt;/p&gt;
&lt;p&gt;&lt;code&gt;go-devel build -tags=sqlite_fts5 -ldflags '-w -s' -o GoBlog&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Copy GoBlog to &lt;code&gt;/usr/local/bin/&lt;/code&gt; (the appropriate folder given standard FreeBSD folder structure. Give the Goblog executable the right permissions to be ran by all users.&lt;/p&gt;
&lt;p&gt;install -m 755 GoBlog /usr/local/bin/GoBlog&lt;/p&gt;
&lt;p&gt;The data directory that our RC script (more details later in this post) will use as the working directory is &lt;code&gt;/var/GoBlog/&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Additional data used by Goblog is contained in the following folders in the Git repo &lt;code&gt;pkgs testdata templates leaflet hlsjs dbmigrations strings plugins&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Create a corresponding folder for each of these under &lt;code&gt;/var/GoBlog/&lt;/code&gt; and copy the contents.&lt;/p&gt;
&lt;p&gt;Create empty folders &lt;code&gt;/var/GoBlog/data&lt;/code&gt; and &lt;code&gt;/var/GoBlog/config. This is for user generated data which persists across versions. The&lt;/code&gt;data` folder will be populated on the first run of GoBlog.&lt;/p&gt;
&lt;p&gt;The config file will need to be manually generated. An example config file is contained in the GoBlog git repo as &lt;code&gt;example-config.yml&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;You can also use the following python script I have created to guide you through the process of creating the config file. It will prompt you for the information needed to set up the most common configurations.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://gist.github.com/theopjones/748c296b3c33881352bb7ac72772ae67"&gt;https://gist.github.com/theopjones/748c296b3c33881352bb7ac72772ae67&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Next up we will need to create an RC file for GoBlog. I have created one as follows&lt;/p&gt;
&lt;p&gt;&lt;a href="https://gist.github.com/theopjones/d62e480a71f5cbcead7e381ffd422fda"&gt;https://gist.github.com/theopjones/d62e480a71f5cbcead7e381ffd422fda&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;(Both of the above scripts are created and used by the whole installation shell script mentioned at the beginning of the post.&lt;/p&gt;
&lt;p&gt;Write it to &lt;code&gt;/usr/local/etc/rc.d/goblog&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Then make the rc script file executable&lt;/p&gt;
&lt;p&gt;&lt;code&gt;chmod +x "$rc_script_file"&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Enable GoBlog to load when the system does&lt;/p&gt;
&lt;p&gt;&lt;code&gt;echo 'goblog_enable="YES"' &amp;gt;&amp;gt; /etc/rc.conf&lt;/code&gt;&lt;/p&gt;</content><category term="Notes"/></entry><entry><title>This blog is back up</title><link href="/posts/this-blog-is-back-up/" rel="alternate"/><published>2023-03-23T00:00:00+00:00</published><updated>2023-03-23T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2023-03-23:/posts/this-blog-is-back-up/</id><summary type="html">&lt;p&gt;This blog is back up&lt;/p&gt;
&lt;p&gt;It was down for a while after I moved apartments.&lt;/p&gt;
&lt;p&gt;Had to move it to an external server, because 1) the computer I was using as a home server got shipping damaged during the move, and 2) my new home internet has much slower upload …&lt;/p&gt;</summary><content type="html">&lt;p&gt;This blog is back up&lt;/p&gt;
&lt;p&gt;It was down for a while after I moved apartments.&lt;/p&gt;
&lt;p&gt;Had to move it to an external server, because 1) the computer I was using as a home server got shipping damaged during the move, and 2) my new home internet has much slower upload speeds than the fiber connection in my last apartment&lt;/p&gt;</content><category term="Notes"/></entry><entry><title>Experimenting with AI Music Generation</title><link href="/posts/ai-music-generation/" rel="alternate"/><published>2023-02-02T00:00:00+00:00</published><updated>2023-02-02T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2023-02-02:/posts/ai-music-generation/</id><summary type="html">&lt;p&gt;I’ve been experimenting with AI music generation software lately and I have found it to be quite interesting. I’ve tried two programs, Mubert and Soundful, and I was pleasantly surprised with the results I got. Although the music generated wasn’t very creative, it was good at replicating …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I’ve been experimenting with AI music generation software lately and I have found it to be quite interesting. I’ve tried two programs, Mubert and Soundful, and I was pleasantly surprised with the results I got. Although the music generated wasn’t very creative, it was good at replicating common background music styles. There was no noticeable distortion or excessively syllabic sounds in the audio, which I have seen in other services in the past.&lt;/p&gt;
&lt;p&gt;The biggest issue with these AI music services is licensing. Most of the services I’ve tried claim copyright over the music generated using their tool and put many restrictions on how the music can be licensed back to the user. You can’t use it in certain projects, or distribute the music on its own. Mubert has particularly restrictive licensing terms. To obtain full rights to the audio, the cost can be exorbitant, with Mubert charging upwards of $400, while Soundful, which has more reasonable terms, charges around $50. It almost seems like these services are trying to price their product just below what human artists would charge. This business model doesn’t make much sense.&lt;/p&gt;
&lt;p&gt;When it comes to the ethics of AI art, I don’t think it will harm artists as much as some people fear. For example, these AI music services I talked about can only replicate the most basic and generic types of music. They are still far inferior to human artists, even for creating background music for YouTube videos. I plan to get into streaming and making YouTube videos, and for that, I will still probably go with conventional music. I believe AI art will simply become another tool for creating art, reshuffling the deck a bit and potentially putting some people out of business or into business, but it won’t be a sea change in the industry.&lt;/p&gt;</content><category term="Notes"/></entry><entry><title>Some thoughts on the ethics of AI art/generative AI</title><link href="/posts/some-thoughts-on-the-ethics-of-ai-artgenerative-ai/" rel="alternate"/><published>2023-02-02T00:00:00+00:00</published><updated>2023-02-02T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2023-02-02:/posts/some-thoughts-on-the-ethics-of-ai-artgenerative-ai/</id><summary type="html">&lt;p&gt;AI art is getting a lot of controversy for its implications for current artists. What will it do to employment prospects in the arts? What about the copyright implications? What about all of the art that is used to train these models?&lt;/p&gt;
&lt;p&gt;All of those questions are important things to …&lt;/p&gt;</summary><content type="html">&lt;p&gt;AI art is getting a lot of controversy for its implications for current artists. What will it do to employment prospects in the arts? What about the copyright implications? What about all of the art that is used to train these models?&lt;/p&gt;
&lt;p&gt;All of those questions are important things to think about.&lt;/p&gt;
&lt;p&gt;I for one think that some of the fears of human artists getting fully displaced by automation is a bit over stated.&lt;/p&gt;
&lt;p&gt;I think it won’t displace artists as much as people are worried about. I think it will be just another tool that’s used to create art.&lt;/p&gt;
&lt;p&gt;It may reshuffle the decks a bit and maybe put some people out of business, put some people into business, but it won’t be as much of a sea change in that regard as many think.&lt;/p&gt;
&lt;p&gt;However, what worries me the most about the increasing role of AI tools is their closed nature. As these increasingly sophisticated AI models do more and more, not just in the field of art but in every aspect of our lives, it’s crucial that these tools are open and accessible to everyone.&lt;/p&gt;
&lt;p&gt;Unfortunately, that is not the case with most of these tools. Currently, the AI models and their outputs and inputs are owned by just a few companies, leaving most users locked out.&lt;/p&gt;
&lt;p&gt;I have a strong concern that this will concentrate the art market, displacing the decentralized infrastructure and ecosystem of small business artists with a much more centralized art world, dominated by a few companies that provide tools that play an increasingly critical role in creating art in the modern world.&lt;/p&gt;
&lt;p&gt;The majority of the significant recent generative AI models are proprietary, from AI music generators to tools like GPT and MidJourney. These tools are not even available for use on your own computer, instead, you have to send your inputs to be processed on a cloud server owned and maintained by the authors of the AI model. Even a few models that are source available (and even marketed as open source), like Stable Diffusion, are not fully free and open source.&lt;/p&gt;
&lt;p&gt;One reason for these models not being free and open source is what some sources call “toxic candy models.”&lt;/p&gt;
&lt;p&gt;&lt;a href="https://salsa.debian.org/deeplearning-team/ml-policy/-/blob/master/ML-Policy.rst"&gt;As per this memo by a contributor to the Debian Linux distribution, writing in regards to determining which AI software should be included as FOSS&lt;/a&gt;, these are models where the algorithm’s weights and other parts are a complete black box, and you only receive the final output of the model generation process without information on how it was generated.&lt;/p&gt;
&lt;p&gt;This includes models based on data/input scraped from the internet. This results in situations where the art used to create the final model is usually proprietary, and the legality of even doing this scraping is in dispute. And of course the companies can’t distribute that art to anyone who wants to modify or fully understand the model. They can’t provide a full list of every bit of art they used.&lt;/p&gt;
&lt;p&gt;So you as a user, if you want to fundamentally modify what’s fed into those models, if you want to see what’s fed into those models and figure out where the model gets what it gets from, you fundamentally can’t under the current ecosystem.&lt;/p&gt;
&lt;p&gt;Access to the data is necessary for users to fully understand, modify, use, and build own their versions of the model based on this.&lt;/p&gt;
&lt;p&gt;I think that this issue is adjacent to one of the more plausible arguments that the models should be considered a derivative work of the input art – but I am not sure if I endorse such an argument.&lt;/p&gt;
&lt;p&gt;A concerning trend is for companies producing source-available AI models to release them under non-free and open-source licenses that do not meet standard guidelines for open-source licenses, such as the FSF definition, the OSI open source definition, or the Debian Free Software Definition.&lt;/p&gt;
&lt;p&gt;The most notable of these licenses is the Responsible Artificial Intelligence Source Code (RAIL) License, which imposes restrictions on how users can use the output generated by the tool.&lt;/p&gt;
&lt;p&gt;This is similar to proprietary companies that claim copyright interest in the output of their program&lt;/p&gt;
&lt;p&gt;This is a departure from the open-source community’s consensus that the software developer does not have ownership over what people use the software for – despite the fact that some of the companies involved still attempt to claim to be open source friendly.&lt;/p&gt;
&lt;p&gt;There is a movement in the software industry, particularly in the AI world, for developers to dictate what users can do with their software.&lt;/p&gt;
&lt;p&gt;This mindset and movement asserts that the developer of the software has, in terms of both moral obligation and right, and in terms of the legal ability to enforce this, has the duty and ability to basically dictate what users do with this software.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://facctconference.org/static/pdfs_2022/facct22-63.pdf"&gt;https://facctconference.org/static/pdfs_2022/facct22-63.pdf&lt;/a&gt;&lt;/p&gt;
&lt;blockquote&gt;&lt;/blockquote&gt;
&lt;p&gt;(From the Abstract) A number of organizations have expressed concerns about the inappropriate or irresponsible use of AI and have proposed ethical guidelines around the application of such systems. While such guidelines can help set norms and shape policy, they are not easily enforceable. In this paper, we advocate the use of licensing to enable legally enforceable behavioral use conditions on software and code and provide several case studies that demonstrate the feasibility of behavioral use licensing. ```&lt;/p&gt;
&lt;blockquote&gt;&lt;/blockquote&gt;
&lt;p&gt;From Pg 4 In this paper, we seek to encourage entities and individuals who create AI tools and applications, to leverage the existing IP license approach to restrict the downstream use of their tools and applications (i.e., their “IP”). Specifically, IP licensors should allow others to use their IP only if such licensees agree to use the IP in ways that are appropriate for the IP being licensed. While contractual arrangements are not the only means to encourage appropriate behaviour, it is a mechanism that exists today, is malleable to different circumstances and technologies, and acts as a strong signaling mechanism that the IP owner takes their ethical responsibilities seriously. ```&lt;/p&gt;
&lt;p&gt;This has the potential to spread beyond the AI world and impact the norms of the software industry as a whole. This mindset is in blatant contradiction of not just the norms of the open source community, but also the old norms of the software industry as a whole.&lt;/p&gt;
&lt;p&gt;The expansion of copyright for AI technology is a big concern. The RAIL license, used by Stable Diffusion among others, is an interesting and notable case.&lt;/p&gt;
&lt;p&gt;The developers behind this license believe it is necessary to prevent harmful and irresponsible uses of their products, and they believe that AI technology has a lot of potential for misuse. They argue for the need to come up with a legally enforceable mechanism to limit potentially irresponsible uses.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.licenses.ai/"&gt;https://www.licenses.ai/&lt;/a&gt;&lt;/p&gt;
&lt;blockquote&gt;&lt;/blockquote&gt;
&lt;p&gt;Responsible AI Licenses (RAIL) empower developers to restrict the use of their AI technology in order to prevent irresponsible and harmful applications. These licenses include behavioral-use clauses which grant permissions for specific use-cases and/or restrict certain use-cases. In case a license permits derivative works, RAIL Licenses also require that the use of any downstream derivatives (including use, modification, redistribution, repackaging) of the licensed artificial must abide by the behavioral-use restrictions.&lt;/p&gt;
&lt;p&gt;But I don’t particularly agree with the necessity of using copyright as a means for this.&lt;/p&gt;
&lt;p&gt;However, I do not agree with the use of copyright as a means to achieve this.&lt;/p&gt;
&lt;p&gt;AI art generation may do different things than traditional art methods, but it’s not as much of a game-changer as some people claim. AI is just a buzzword for things that seem computationally practical based on everyday experiences, but where practical algorithms are new or nonexistent. Today’s AI techniques will become tomorrow’s conventional art techniques, and software tools for modifying and creating art have existed for a long time, such as Photoshop and GIMP. These AI tools are just an extension of digital art.&lt;/p&gt;
&lt;p&gt;Artistic controversies, such as whether or not something is real art, have arisen before with new forms of art, such as photography. AI art is just another method of art that uses technology to probe and sample an extrinsic space outside of the artist’s mind, similar to how photography creates art by sampling from from the physical environment.&lt;/p&gt;
&lt;p&gt;In both cases, the artist’s creativity comes from knowing what to sample, and how to sample it, thereby the creation of novel art is possible.&lt;/p&gt;
&lt;p&gt;Both conventional art methods and the new AI art have a lot of the same ethical issues.&lt;/p&gt;
&lt;p&gt;For example, one that gets mentioned a lot is the ability of AI art to potentially create fake media. Images that look like they’re of a real person or of a real event, but aren’t actually representative of the world.&lt;/p&gt;
&lt;p&gt;However, traditional means for visual art also have lots of ways to be misleading, manipulated, edited, and staged in a way that doesn’t reflect the real world. People often overestimate how accurate visual arts, especially photographic arts, are at truly representing the world. The new technology driving new ways to manipulate and generate imagery may reset the social environment around visual arts to something that’s actually more healthy and representative of not just what AI art is, but what visual art has always been.&lt;/p&gt;
&lt;p&gt;The extreme (but unlikely) case might be when fakes become so common that the only way to trust an image is to know where it came from, its history. This would reduce visual imagery to how it was perceived before modern photography became widely available, in which you had to trust the testimony of the artist or the author for wherever you were getting the image from.&lt;/p&gt;
&lt;p&gt;Every medium of art or expression has the ability to mislead and be misused, and the mechanisms that society has to limit that misuse don’t need to change with this new technology. The needed legal mechanisms already exist, such as defamation law, to limit the use of faked images to lie about someone. Attempting to bring copyright into what’s been traditionally handled by defamation law is an attempt to rewrite the balance. Copyright has different, often more extreme penalties, than society has seen fit to impose for conventionally.&lt;/p&gt;
&lt;p&gt;And how society has deemed it proper to handle things like lying about people or deliberately misleading has been constructed by the process of democracy and centuries of societal experience, to optimize various societal trade-offs. To balance negative social effects of potentially dangerous content and or damage to people’s reputation, disseminating, versus the importance of freedom of expression.&lt;/p&gt;
&lt;p&gt;This type of rulemaking is fundamentally anti-democratic and technocratic, as it appoints those who write the license and push the rules as arbiters of how society should handle these risks. It also doesn’t take into account the ways in which humans can fail, sometimes more than machines can fail. For example, traditional human forensic methodologies can also be very inaccurate, yet still entered as evidence.&lt;/p&gt;
&lt;p&gt;The use of AI technology raises many important questions about its potential misuse and accountability.&lt;/p&gt;
&lt;p&gt;But it is not necessarily true that AI technology is worse than humans in many cases often discussed.&lt;/p&gt;
&lt;p&gt;For instance, consider the process of creating a sketch of a suspect. A witness description could be interpreted by a human sketch artist or an AI model, both of which are interpretations and not the ground truth. The AI system may even come up with an equal or better interpretation than the human.&lt;/p&gt;
&lt;p&gt;It is crucial to have a wide social debate about the trade-offs of AI and where its limits lie. When is AI better than humans, and when has society already gone too far in trusting human methods? AI has many of the same limitations as humans, but it may demonstrate those limits in a way that prompts society to reconsider its past decisions and to be more responsible with both human and automated decision making.&lt;/p&gt;
&lt;p&gt;There is also the issue of accountability, especially when it comes to the normal legal system. A top-down institutional approach to limiting technology has much less accountability to the public and lacks a wide range of perspectives, leading to less legitimate and often worse results.&lt;/p&gt;
&lt;p&gt;I believe this mindset could spread throughout the software industry, including to places where it would be very dangerous.&lt;/p&gt;
&lt;p&gt;If this idea of social responsibility of companies and developers to restrict their users becomes more widespread, it would rewrite the balance of power between software companies and consumers in favor of the companies.&lt;/p&gt;
&lt;p&gt;Imagine if this mindset is taken to conventional tools. Imagine the world in which Microsoft is treated as both in terms of legal power and in terms of generally perceived ethical responsibility as responsible for what a writer does with Microsoft Word. Or if Adobe is considered in the same way responsible for what an artist does with Photoshop or Illustrator and so on.&lt;/p&gt;
&lt;p&gt;It would be no longer a world where you can do what you want with a piece of software that runs on your computer. Someone else, someone with limited accountability to you, would have a lot more power over what you can do on your own computer.&lt;/p&gt;
&lt;p&gt;The companies who make the software you use would have more power over what you can do with their software, and this change could make the world a much worse place.&lt;/p&gt;
&lt;p&gt;A point raised in the previously linked discussion of responsible AI licenses is the idea of authorial integrity over software. The developer or the company who produced it holds the mindset and vision that should influence what users do with the software. It is contended that this artistic or authorial vision should also affect everyone downstream. However, using the software in a way that is not part of that vision is essentially violating the rights of the author or the developer or the company.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://facctconference.org/static/pdfs_2022/facct22-63.pdf"&gt;https://facctconference.org/static/pdfs_2022/facct22-63.pdf&lt;/a&gt;&lt;/p&gt;
&lt;blockquote&gt;&lt;/blockquote&gt;
&lt;p&gt;(Pg 2) The context in which a model is applied can be far removed from that which the developers had intended, a major point of concern from the perspective of human-centered machine learning [31] … applications that may be of concern, such as large-scale surveillance or the creation of “fake” media. In some cases, the developers or technology creators may legitimately want to control the use of their work due to concerns arising out of the data that it was trained on, the technology’s underlying assumptions about deploy-time characteristics, or the lack of sufficient adversarial testing and testing for bias. This is especially true of AI models that are difficult or expensive to recreate. For example, given that models such as GPT-3 [17] reportedly cost over $10 million (U.S.) to train, very few organizations are positioned to train (and potentially, need to retrain) a model of similar size&lt;/p&gt;
&lt;p&gt;The mindset that the developer or the company has control over the software is incorrect. There is a big difference between functional works and creative works, and software falls into the category of functional works. Software is essentially a description of a process and a set of instructions, a tool that is used to guide a method.&lt;/p&gt;
&lt;p&gt;It’s like a recipe or a textbook telling how you need to mix the paints to get a color. It’s not the painting that uses that color.&lt;/p&gt;
&lt;p&gt;Control over the software used to make art, is fundamentally exerting control over a method, over a technique that’s represented by that software.&lt;/p&gt;
&lt;p&gt;A work of art is a final product that can stand on its own, a work that’s enjoyed by itself. In that case, an artist can have an actual creative vision that’s put through into their art. And I think that doesn’t work when you get a tool like software.&lt;/p&gt;
&lt;p&gt;The paper raises the cost of creating the software as a reason for preserving the vision, but I believe that considering the cost of software development moves things in the opposite direction.&lt;/p&gt;
&lt;p&gt;In the art world, there is potential for substitutes, for other artists to come in and make a work of art that reflects their vision without necessarily needing to modify or use what another artist has done. The resources available to make art are often common enough or inexpensive enough that many visions of what art should be can coexist with each other. You can have many artists creating many works, and each of those works with their own vision.&lt;/p&gt;
&lt;p&gt;But when you get a software program that costs tens of millions of dollars, even hundreds of millions of dollars to produce, a normal person can’t step into that competition, they can’t step into that creative process around developing software.&lt;/p&gt;
&lt;p&gt;With software, the cost of production is so high that a normal person cannot compete in the creative process of developing software, giving the developer or copyright holder a lot of power over society.&lt;/p&gt;
&lt;p&gt;Once you include the case of interlinked supply chains, programs that are dependent on other programs, and the entire tech stack would have to be rebuilt from the ground up to have a different vision, which is infeasible even for the wealthiest person on the planet.&lt;/p&gt;
&lt;p&gt;This is why the freedom to use and modify software and expand upon it is important and critical. Asserting that copyright holders or companies or software developers have the right or obligation to restrict its use is very dangerous.&lt;/p&gt;</content><category term="Notes"/></entry><entry><title>Experimenting with Owncast</title><link href="/posts/experimenting-with-owncast/" rel="alternate"/><published>2022-12-11T00:00:00+00:00</published><updated>2022-12-11T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2022-12-11:/posts/experimenting-with-owncast/</id><summary type="html">&lt;p&gt;Experimenting with Owncast – an open source twitch-like streaming application.&lt;/p&gt;
&lt;p&gt;First stream will probably be sometime this evening.&lt;/p&gt;
&lt;p&gt;Because of the potentially high bandwidth usage, I’m setting it up on a VPS that has higher internet bandwidth than anything I use. This VPS will probably get used for other livestreaming …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Experimenting with Owncast – an open source twitch-like streaming application.&lt;/p&gt;
&lt;p&gt;First stream will probably be sometime this evening.&lt;/p&gt;
&lt;p&gt;Because of the potentially high bandwidth usage, I’m setting it up on a VPS that has higher internet bandwidth than anything I use. This VPS will probably get used for other livestreaming/messaging tasks, and various other things I don’t want to be 100% dependent on my home internet connection.&lt;/p&gt;</content><category term="Notes"/></entry><entry><title>The Best Camera for Beginners</title><link href="/posts/best-camera-for-beginners/" rel="alternate"/><published>2022-12-10T00:00:00+00:00</published><updated>2022-12-10T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2022-12-10:/posts/best-camera-for-beginners/</id><summary type="html">&lt;p&gt;I believe that the best camera for beginners today is typically a fixed-lens bridge camera or a point-and-shoot. In the past, I have suggested to people that the first camera they purchase should be an entry-level DSLR or other interchangeable lens camera. However, it appears that the majority of camera …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I believe that the best camera for beginners today is typically a fixed-lens bridge camera or a point-and-shoot. In the past, I have suggested to people that the first camera they purchase should be an entry-level DSLR or other interchangeable lens camera. However, it appears that the majority of camera makers are abandoning the sub-thousand-dollar interchangeable lens camera market in favor of the high end market.&lt;/p&gt;
&lt;p&gt;I recently used Canon’s entry-level DSLR, the EOS T7, and it was not a positive experience. I experimented with it to get a feel for how it captured stills, how it took video, and everything else, but primarily I was curious as to whether or not it would make a suitable streaming camera to keep in a fixed spot and hook up as essentially a webcam utilizing Canon’s webcam tool. And even compared to the experience of using some simple point-and-shoot cameras, it was a step down.&lt;/p&gt;
&lt;p&gt;In many ways, the experience was inferior to that of a cell phone camera, and it felt antiquated. It also appeared as if the manufacturer developed the camera as a low-effort, entry-level product.&lt;/p&gt;
&lt;p&gt;In addition, Canon is discontinuing its EOS M line of interchangeable-lens mirrorless cameras, which was formerly a very capable system. It is  the first major camera platform I got into, but it appears like Canon won’t really develop that many new cameras for that platform. I’m kind of bummed about that. They are moving on to the EOS R full frame system, which is more expensive, thus they have no plans to continue making lenses for that format.&lt;/p&gt;
&lt;p&gt;In contrast, the market for budget-friendly point-and-shoot cameras has greatly improved with the introduction of optical image stabilization and computational photography features. A point-and-shoot camera with a one-inch sensor gives an excellent experience for a variety of everyday situations. It can perform well enough in low light that you can use it for the most of your daily tasks. Moreover, bridge cameras are becoming increasingly competent. Bridge cameras are, of course, fundamentally a more expensive market if you choose something with low-light performance. However, I believe that even the cheapest bridge cameras and super zoom cameras may produce decent results in the typical situations where one would use them.&lt;/p&gt;
&lt;p&gt;In addition, there is a world of premium point-and-shoot or fixed-lens cameras that have also become quite good. So I sold my interchangeable lens system save for film cameras and switched to fixed lens cameras for most of my hobby work, because I believe it is a better deal these days.&lt;/p&gt;</content><category term="Notes"/></entry><entry><title>Using GPT-3 for Creative Writing</title><link href="/posts/gpt3-creative-script/" rel="alternate"/><published>2022-12-07T00:00:00+00:00</published><updated>2022-12-07T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2022-12-07:/posts/gpt3-creative-script/</id><summary type="html">&lt;p&gt;Created another script that uses a single run of the edit mode of GPT-3 with a high temperature (ie. giving GPT-3 a high degree of creativity). But it runs it three distinct times.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://gogs.theopjones.blog/theo/LittleScripts/src/master/transcribefoldermultiple.py"&gt;https://gogs.theopjones.blog/theo/LittleScripts/src/master/transcribefoldermultiple.py&lt;/a&gt;
The results are interesting&lt;/p&gt;
&lt;p&gt;Unedited Transcript&lt;/p&gt;
&lt;blockquote&gt;&lt;/blockquote&gt;
&lt;p&gt;I …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Created another script that uses a single run of the edit mode of GPT-3 with a high temperature (ie. giving GPT-3 a high degree of creativity). But it runs it three distinct times.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://gogs.theopjones.blog/theo/LittleScripts/src/master/transcribefoldermultiple.py"&gt;https://gogs.theopjones.blog/theo/LittleScripts/src/master/transcribefoldermultiple.py&lt;/a&gt;
The results are interesting&lt;/p&gt;
&lt;p&gt;Unedited Transcript&lt;/p&gt;
&lt;blockquote&gt;&lt;/blockquote&gt;
&lt;p&gt;I’ve been experimenting a bit with using GPT-3 to process speech-to-text transcripts, which in their raw form contain no line breaks, no paragraph breaks, kind of off-text because it’s a direct transcription of my speech, like not how I would normally write it. I’m feeding, and I have a little Python script written to feed these raw, unprocessed speech-to-text transcripts into GPT-3. Of course, GPT-3 can’t be ran locally, so it has to make an external API call. But how the script I wrote works is it makes one API call to have the text split up into individual paragraphs, and it makes another set of API calls for each paragraph to correct the grammar, style, spelling, and all of that. I did the two-part thing because based on my experimentation, GPT-3 doesn’t really like being given a huge wall of text, so splitting it up into paragraphs is one of the best techniques I found to get GPT-3 not to remove a lot of text without creating replacement text or add totally new text. From what I can tell, the little script I wrote is able to keep things pretty faithful to how I originally dictated while still punching up the grammar and resolving a lot of the editing I would have to do to make a speech-to-text transcript usable on my blog or something. So I think it’s helpful because it reduces a lot of really error-prone stuff that comes with using speech-to-text to write. I’ve uploaded a little Python script. I’ve used use slash created, and you can find it below.&lt;/p&gt;
&lt;p&gt;Run 1 of GPT-3&lt;/p&gt;
&lt;blockquote&gt;&lt;/blockquote&gt;
&lt;p&gt;I’ve been experimenting a bit with using GPT-3 to process speech-to-text transcripts, which in their raw form contain no line breaks, no paragraph breaks, kind of off-text because it’s a direct transcription of my speech, like not how I would normally write it. I’m feeding, and I have a little Python script written to feed these raw, unprocessed speech-to-text transcripts into GPT-3.&lt;/p&gt;
&lt;blockquote&gt;&lt;/blockquote&gt;
&lt;p&gt;Of course, GPT-3 can’t be run locally, so it has to make an external API call.&lt;/p&gt;
&lt;blockquote&gt;&lt;/blockquote&gt;
&lt;p&gt;But how the script I wrote works is it makes one API call to have the text split up into individual paragraphs, and it makes another set of API calls for each paragraph to correct the grammar, style, spelling, and all of that.&lt;/p&gt;
&lt;blockquote&gt;&lt;/blockquote&gt;
&lt;p&gt;I did the two-part thing because based on my experimentation GPT-3 doesn’t really like being given a huge wall of text, so splitting it up into paragraphs is one of the best techniques I found to get GPT-3 not to remove a lot of text without creating replacement text or add totally new text.&lt;/p&gt;
&lt;blockquote&gt;&lt;/blockquote&gt;
&lt;p&gt;From what I can tell, the little script I wrote is able to keep things pretty faithful to how I originally dictated while still punching up the grammar and resolving a lot of the editing I would have to do to make a speech-to-text transcript usable on my blog or something.&lt;/p&gt;
&lt;blockquote&gt;&lt;/blockquote&gt;
&lt;p&gt;So I think it’s helpful because it reduces a lot of really error-prone stuff that comes with using speech-to-text to write.&lt;/p&gt;
&lt;p&gt;I’ve uploaded a little Python script. You can find it below.&lt;/p&gt;
&lt;p&gt;Run 2 of GPT-3&lt;/p&gt;
&lt;blockquote&gt;&lt;/blockquote&gt;
&lt;p&gt;Currently, all of the transcripts on my YouTube channel are just raw, unedited speech-to-text outputs. They’re posted in their raw form so that they’ll be more accessible to people with all types of cognitive impairments who may not have the mental capacity to parse a transcript that’s been edited to improve grammar, spelling, sentence structure, style, and probably many more things I’m forgetting. Unedited speech-to-text transcripts are also more accurate since there’s no bias about word usage. Since speech-to-text is closer to merely transcribing all the words that I actually said, it’s more accurate than an edited transcript will be because I’m speaking with some bias and editing to clean up the transcript would increase that bias. However, the unedited speech-to-text transcripts increase the difficulty of parsing the text. For someone with severe speech-to-text-related cognitive impairments, they may continue to be beneficial, but for a lot of people with some type of speech-to-text-related cognitive impairment, an edited transcript would be helpful. That’s where GPT-3 comes in. GPT-3, because of its ability to edit, process, and rework text, can be used to make speech-to-text transcripts more accessible for people with a wider range of learning styles and ways of processing text. In short, GPT-3 can help turn unedited speech-to-text transcript from something that’s not very accessible to something that’s more accessible for a lot of people with different kinds of cognitive impairments. This seems like a common sense thing. Since most online videos have speech-to-text transcripts, most online speech-to-text transcript must be pretty difficult to parse, especially if someone has a mild learning disability of some sort.&lt;/p&gt;
&lt;blockquote&gt;&lt;/blockquote&gt;
&lt;p&gt;GPT-3 can be used to punch up unedited speech to text transcripts, improving their readability and overall quality. I’ve been experimenting with using GPT-3 to process speech-to-text transcripts, which in their raw form contain no line breaks, no paragraph breaks, and are generally kind of off-text because it’s a direct transcription of my speech, not how I would normally write it. I’ve written a little Python script to feed these raw, unprocessed speech-to-text transcripts into GPT-3. Of course, GPT-3 can’t be ran locally, so it has to make an external API call. But the script I wrote works by making one API call to have the text split up into individual paragraphs and making another set of API calls for each paragraph to correct the grammar, style, spelling, and all of that. I did the two-part thing because based on my experimentation, GPT-3 doesn’t really like being given a huge wall of text, so splitting it up into paragraphs is one of the best techniques I found to get GPT-3 not to remove a lot of text without creating replacement text or add totally new text. From what I can tell, the little script I wrote is able to keep things pretty faithful to how I originally dictated while still punching up the grammar and resolving a lot of the editing I would have to do to make a speech-to-text transcript usable on my blog or something. So I think it’s helpful because it reduces a lot of really error-prone stuff that comes with using speech-to-text to write. I’ve uploaded a little Python script. You can find it below.&lt;/p&gt;
&lt;p&gt;Run 3 of GPT-3&lt;/p&gt;
&lt;blockquote&gt;&lt;/blockquote&gt;
&lt;p&gt;I’ve been using GPT-3 to process speech-to-text transcripts, which are quite rough in their original form.&lt;/p&gt;
&lt;blockquote&gt;&lt;/blockquote&gt;
&lt;p&gt;They have no line breaks or paragraph breaks, and their formatting is off because GPT-3 has to read directly from audio.&lt;/p&gt;
&lt;blockquote&gt;&lt;/blockquote&gt;
&lt;p&gt;But I wrote a little Python script which takes these unprocessed transcripts, inputs them into GPT-3, and the script outputs the corrected versions.&lt;/p&gt;
&lt;blockquote&gt;&lt;/blockquote&gt;
&lt;p&gt;Since GPT-3 can’t be accessed locally, the script has to make external API calls.&lt;/p&gt;
&lt;blockquote&gt;&lt;/blockquote&gt;
&lt;p&gt;The script works by making one API call to have the text split into individual paragraphs, and then another call for each paragraph to correct the grammar, style, spelling and such.&lt;/p&gt;
&lt;blockquote&gt;&lt;/blockquote&gt;
&lt;p&gt;I did the two-part thing because, from my experimentation, GPT-3 doesn’t really like being given large walls of text, so splitting it into paragraphs is one of the best ways to get GPT-3 to not remove a lot of text without creating replacement text or adding totally new text.&lt;/p&gt;
&lt;blockquote&gt;&lt;/blockquote&gt;
&lt;p&gt;The script I made seems to be able to keep things pretty faithful to the original while still enhancing the grammar and resolving a lot of the editing I would have to do to make the transcript usable on my blog or elsewhere.&lt;/p&gt;
&lt;blockquote&gt;&lt;/blockquote&gt;
&lt;p&gt;So I think it’s helpful because it reduces a lot of the really error-prone stuff that comes with using speech-to-text to write.&lt;/p&gt;
&lt;blockquote&gt;&lt;/blockquote&gt;
&lt;p&gt;I’ve uploaded the Python script to use slash created - you can find it below.&lt;/p&gt;</content><category term="Notes"/></entry><entry><title>Processing Speech-to-Text with GPT-3</title><link href="/posts/gpt3-speech-to-text/" rel="alternate"/><published>2022-12-06T00:00:00+00:00</published><updated>2022-12-06T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2022-12-06:/posts/gpt3-speech-to-text/</id><summary type="html">&lt;p&gt;I’ve been experimenting with using GPT-3 to process speech-to-text transcripts. These transcripts, in their raw form, contain no line breaks or paragraph breaks, and are not how I would normally write because they are direct transcriptions of my speech. I have written a small Python script to feed these …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I’ve been experimenting with using GPT-3 to process speech-to-text transcripts. These transcripts, in their raw form, contain no line breaks or paragraph breaks, and are not how I would normally write because they are direct transcriptions of my speech. I have written a small Python script to feed these unprocessed transcripts into GPT-3. Of course, GPT-3 cannot be run locally and requires an external API call.&lt;/p&gt;
&lt;p&gt;But how the script I wrote works is that it first makes one API call to split the text into individual paragraphs, and then it makes another set of API calls for each paragraph to correct the grammar, style, and spelling. I opted for the two-part approach because, based on my experimentation, GPT-3 doesn’t really handle large blocks of text very well. So, splitting it up into paragraphs is one of the best techniques I’ve found to prevent GPT-3 from removing too much text without creating replacement text or adding totally new text.&lt;/p&gt;
&lt;p&gt;From what I can tell, the small script I wrote is able to keep things faithful to how I originally dictated, whilst still improving the grammar and resolving much of the editing I would have to do to make a speech-to-text transcript usable on my blog or something. Thus, I think it’s helpful as it reduces a lot of the error-prone aspects associated with using speech-to-text to write.&lt;/p&gt;
&lt;p&gt;The script can be found here &lt;a href="https://gogs.theopjones.blog/theo/LittleScripts/src/master/transcribefolder.py"&gt;https://gogs.theopjones.blog/theo/LittleScripts/src/master/transcribefolder.py&lt;/a&gt;
(this post is just the output of this workflow, with minimal additional editing)&lt;/p&gt;</content><category term="Notes"/></entry><entry><title>Thoughts on Mastodon</title><link href="/posts/response-to-mastodon-discussion/" rel="alternate"/><published>2022-11-23T00:00:00+00:00</published><updated>2022-11-23T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2022-11-23:/posts/response-to-mastodon-discussion/</id><summary type="html">&lt;p&gt;In response to &lt;a href="https://www.tumblr.com/northshorewave/701681253552898048/so-whats-the-deal-with-mastodon-anyway-is-it-the"&gt;the post quoted below&lt;/a&gt;&lt;/p&gt;
&lt;blockquote&gt;&lt;/blockquote&gt;
&lt;p&gt;“So what’s the deal with Mastodon anyway. Is it the prospective post-Twitter Musk-hater meeting place? Why would anyone choose to name their company after a prehistoric animal that humans hunted to extinction?”&lt;/p&gt;
&lt;p&gt;The short and quick answer to that is that Mastodon …&lt;/p&gt;</summary><content type="html">&lt;p&gt;In response to &lt;a href="https://www.tumblr.com/northshorewave/701681253552898048/so-whats-the-deal-with-mastodon-anyway-is-it-the"&gt;the post quoted below&lt;/a&gt;&lt;/p&gt;
&lt;blockquote&gt;&lt;/blockquote&gt;
&lt;p&gt;“So what’s the deal with Mastodon anyway. Is it the prospective post-Twitter Musk-hater meeting place? Why would anyone choose to name their company after a prehistoric animal that humans hunted to extinction?”&lt;/p&gt;
&lt;p&gt;The short and quick answer to that is that Mastodon is an open source program that provides Twitter like functionality. It’s something that you can use to set up a social media website of your own.&lt;/p&gt;
&lt;p&gt;It is possible for different instances of Mastodon to talk to each other but this highly depends on how the particular administrators have their instances configured and it is fairly common for instances to refuse to communicate with each other often for trivial reasons or just the administrator’s personal preference.&lt;/p&gt;
&lt;p&gt;So I would call Mastodon at best a semi-decentralized system because the general assumption of Mastodon is that most users will join an instance that’s ran by someone else and that most users won’t run their own instance. There is very limited portability of accounts between instances. Identity on Mastodon is completely tied to the individual instance.&lt;/p&gt;
&lt;p&gt;It is possible to run your own instance just for you, and get some other instances to talk to your instance, but most people use other instances. The software isn’t really built for one user instances, and generally assumes that an instance has a lot of users. Managing a Mastodon instance is relatively complicated compared to a lot of other server software.&lt;/p&gt;
&lt;p&gt;The protocol that allows Mastodon instances to talk to each other kind of sort of resembles RSS but it’s push based so an instance will notify other instances of new posts instead of pulling a list of posts from the instances. This results in a pretty different ecosystem because content tends to propagate from one instance to another and the usual configuration of a Mastodon instance is that it will mirror the content of the instances that it is connected to, and in some cases give users a feed of that content.&lt;/p&gt;
&lt;p&gt;Mastodon instances usually have much heavier handed moderation than other social media.&lt;/p&gt;
&lt;p&gt;Mastodon.social is probably the most popular Mastodon instance and when you hear people talk about Mastodon it or instances under similar management are what people are talking about.&lt;/p&gt;
&lt;p&gt;Truth Social and Gab are also Mastodon instances, but probably aren’t what people who talk about “Mastodon” mean. Most other Mastodon instances are very left-wing in their management.&lt;/p&gt;
&lt;p&gt;My take on Mastodon is fairly negative. I think it’s a system that somehow manages to reproduce kind of worst of Twitter but also has none of the benefits of true decentralization.&lt;/p&gt;
&lt;p&gt;Most of what Mastodon is good at can be fundamentally done by other ways. My opinion on this type of thing is that the protocols of the old open blogosphere fundamentally worked. The reasons why the old blogosphere kind of died out are unrelated to the things that Mastodon is optimizing for.&lt;/p&gt;</content><category term="Notes"/></entry><entry><title>Blogging Engine Research</title><link href="/posts/blogging-engine-research/" rel="alternate"/><published>2022-11-22T00:00:00+00:00</published><updated>2022-11-22T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2022-11-22:/posts/blogging-engine-research/</id><summary type="html">&lt;p&gt;I’ve been doing a bit of research to see what blogging engines exist that are in between WordPress (which is kind of a bloated mess) for running a small blog and Hugo and other static site generators which don’t have web based UIs and a few other features …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I’ve been doing a bit of research to see what blogging engines exist that are in between WordPress (which is kind of a bloated mess) for running a small blog and Hugo and other static site generators which don’t have web based UIs and a few other features.&lt;/p&gt;
&lt;p&gt;An interesting one that’s a minimalistic blogging engine but still not quite static site generator level minimalistic is Bludit. It looks like a very minimalistic blog that’s just that doesn’t have like a lot of extra features or bloat to it. It supports markdown.&lt;/p&gt;
&lt;p&gt;It’s not a static site generator but it has a flat file data structure to it so it’s easy to backup because it doesn’t have a MySQL database and there’s not kind of that extra bloat of a MySQL database running on your server. That’s either you have to tolerate more RAM usage or break containerization by using a kind of shared MySQL across all services on your server. So it looks like a pretty good option. I haven’t replaced Hugo with it on my blog yet but it’s a very interesting minimalistic blogging engine from what I can tell and how I’ve experimented with it a bit so far.&lt;/p&gt;</content><category term="Notes"/></entry><entry><title>Finding Cheap GPUs for Machine Learning</title><link href="/posts/cheapest-gpu-for-ml/" rel="alternate"/><published>2022-11-22T00:00:00+00:00</published><updated>2022-11-22T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2022-11-22:/posts/cheapest-gpu-for-ml/</id><summary type="html">&lt;p&gt;I’ve done some investigation recently to try to figure out what’s the cheapest GPUs around that would work for machine learning type tasks like running whisper or similar. I have a fairly beefy GPU in my computer, the A4000, which is an unusual configuration. It’s a workstation …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I’ve done some investigation recently to try to figure out what’s the cheapest GPUs around that would work for machine learning type tasks like running whisper or similar. I have a fairly beefy GPU in my computer, the A4000, which is an unusual configuration. It’s a workstation GPU, not a consumer GPU. And it’s a fairly high end GPU. I kind of got it because I mostly do productivity stuff on my computer, like photo, video, editing, some GPU intensive compute processes and things like that. But looking a bit into if there are lesser GPUs around just for recommendations to other people that would work. I think the obvious thing, and it’s the one situation I tested with old equipment I have around would be the RTX 2060. It’s kind of a consumer GPU, it goes used for about $200 from what I can tell on eBay and new for about $300 in a 12GB model. It’s the cheapest consumer GPU that has high VRAM.&lt;/p&gt;
&lt;p&gt;And for most machine learning tasks that I’m interested in, VRAM is the limiting factor to an extent that’s not true of gaming. On eBay I was able to find old workstation graphics cards that have a lot of RAM. One good example is the Nvidia M40, it has 12GB of RAM and I’m seeing it used for around $100. Like the absolute cheapest one that has enough RAM that I’m seeing is the K40, the Nvidia K40. And that also has 12GB of RAM. I would say the M40 would get pretty reasonable performance. The M40 has a pass mark score on GPU compute of 3775 operations per second. Kind of comparing that to the GPUs that I’ve kind of ran Whisper on, I guess it would do the large model in approximately one to one timing. One minute of audio input would take about a minute to process.&lt;/p&gt;
&lt;p&gt;The GPU that I have, the A4000, gets about four to one. Four minutes of audio input would take a minute to process. The cheapest GPU that I’ve found that has enough VRAM has the $45 K40, has a pass mark score of around 2000 ops per second and that would like I think get like two minutes of processing time for like each minute of audio or maybe slightly worse than that. But I think like, I think there are a lot of kind of cheap GPU options if you’re using the type of workflow that I use. And you just feed the speech tech software a pre-recorded recording and let it transcribe.&lt;/p&gt;</content><category term="Notes"/></entry><entry><title>Switching Away from Apple</title><link href="/posts/tumblr-feed-thoughts/" rel="alternate"/><published>2022-11-22T00:00:00+00:00</published><updated>2022-11-22T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2022-11-22:/posts/tumblr-feed-thoughts/</id><summary type="html">&lt;p&gt;I am talking while going through my feed on Tumblr. I am going to talk about interesting posts as I see them. And then I am going to feed this recording into transcription software.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://northshorewave.tumblr.com/post/697583950171865088/whats-the-issue-with-your-macbook-as-far-as-i"&gt;https://northshorewave.tumblr.com/post/697583950171865088/whats-the-issue-with-your-macbook-as-far-as-i&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The first interesting post that I see is North …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I am talking while going through my feed on Tumblr. I am going to talk about interesting posts as I see them. And then I am going to feed this recording into transcription software.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://northshorewave.tumblr.com/post/697583950171865088/whats-the-issue-with-your-macbook-as-far-as-i"&gt;https://northshorewave.tumblr.com/post/697583950171865088/whats-the-issue-with-your-macbook-as-far-as-i&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The first interesting post that I see is North Shore Wave talking about switching away from macOS. I recently actually switched away from macOS myself. A lot of the reasons why I have been switching away from Apple products is that Apple’s business practices have become a lot worse. So last year I decided to just start doing a switch over to alternatives. I bought a proper workstation desktop and put Linux on it. I sold my Macbook and I switched to basically using Linux and Windows as my two primary OSs.&lt;/p&gt;
&lt;p&gt;I think the biggest issues I ran into with the transition from macOS are just device incompatibilities. I have a few specialized devices like a sound recording DAC and a few things like that that are kind of paired to either the Macbook hardware like the Thunderbolt port that isn’t like really common on PCs or is also dependent on some of the macOS software and doesn’t work well on Linux and or Windows. Then I’ve also ran into the issue where at some point my workflow kind of depends on proprietary software a bit. I eventually will run into something where there’s not a good open source alternative or the open source alternative is really different and I have to kind of experiment around to find the proper alternative.&lt;/p&gt;
&lt;p&gt;For me that came up a lot when it comes to photography stuff because like my workflow got like built around Photoshop and Lightroom and the thing with Photoshop and Lightroom is there is not really a single program that does like everything that Lightroom does. And for Photoshop there are open source alternatives like GIMP but it’s kind of just not the same in terms of how good the UI is and how good just the user experience and functionality is.&lt;/p&gt;
&lt;p&gt;For Lightroom the issue is basically that like Lightroom does a lot of stuff. Lightroom kind of does digital asset management and backup. You put your photos into it and it backs it up to remote storage which kind of sort of has privacy implications since Adobe’s remote storage but a lot of my hobby photography stuff isn’t that important from a privacy perspective. So it automatically makes backups. It automatically is able to sync between all devices so if you’re working on a tablet you can sync to that or if I take a bunch of photos I’m on a computer that I don’t have Photoshop installed or I don’t want to have Photoshop installed I can go into the web app and just upload things from there.&lt;/p&gt;
&lt;p&gt;Fortunately due to the pervasiveness of Chrome OS Adobe is starting to have an actually really good web app so it’s possible to just use Lightroom as I normally did now. That a little while ago didn’t used to be the case and still with Photoshop the actual web application version of it is just garbage. It’s totally terrible.&lt;/p&gt;
&lt;p&gt;So like for switching away from Mac OS it’s a process that took a while and like I think I’m finally getting rid of the last Apple device that I have like that I use on a regular basis. I recently bought an Android phone and I moved my phone plan over to the Android phone. It’s a Google Pixel. It’s a small Google Pixel not the full one. I kind of still have my iPhone because I’m kind of just moving data between the two but pretty soon I’ll get rid of the iPhone and just only have Android. That will be the last big Apple device.&lt;/p&gt;
&lt;p&gt;I’ve kind of already switched away from the iPad. I’ve switched away from the Macbook and like yeah I don’t rely on Apple services as much. Like it kind of does feel a bit weird switching to Google since Google is also a big company that does a lot of things wrong. The thing with Google is where they get really bad is privacy and like that feels like kind of a lost cause. What Apple’s doing that’s kind of new in its badness compared to whatever other tech companies do is the extent to which Apple doesn’t let you treat your device as your device and like tries to block what you can do with it.&lt;/p&gt;
&lt;p&gt;Like when Apple pressured Tumblr into blocking certain content on their site and just didn’t do it. And because their App Store just refused to accept the Tumblr app. That’s novel. The fact that Apple uses the fact that your device is locked to them and like you can’t sideload apps. That’s been an Apple thing for a while but what’s new and pernicious is that Apple is using that to really control what users can do with their device and kind of just restricting. It’s a threat to software freedom that’s new. Like with a classic proprietary OS like Windows like you can put whatever software you want on it.&lt;/p&gt;
&lt;p&gt;Apple not only prevents you from putting your own software on it but is now using that power to kind of just dictate what you can do with your device and like what activities Apple finds acceptable. And that’s really bad. And it’s not saying I want to spread throughout the software world. I’m scrolling over to the next post now and seeing if I can find any posts that look interesting.&lt;/p&gt;</content><category term="Notes"/></entry><entry><title>Voice Typing Wrapper Around Whisper</title><link href="/posts/voice-typing-wrapper-around-whisper/" rel="alternate"/><published>2022-09-24T00:00:00+00:00</published><updated>2022-09-24T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2022-09-24:/posts/voice-typing-wrapper-around-whisper/</id><summary type="html">&lt;p&gt;I just wrote a voice typing wrapper around Whisper. It types what I say as keyboard input, and it creates a system tray icon to turn on and turn off the dictation.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://href.li/?https://github.com/theopjones/voice-typing"&gt;https://github.com/theopjones/voice-typing&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;(I just created it, it might have bugs, only tested on Linux)&lt;/p&gt;
&lt;p&gt;I …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I just wrote a voice typing wrapper around Whisper. It types what I say as keyboard input, and it creates a system tray icon to turn on and turn off the dictation.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://href.li/?https://github.com/theopjones/voice-typing"&gt;https://github.com/theopjones/voice-typing&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;(I just created it, it might have bugs, only tested on Linux)&lt;/p&gt;
&lt;p&gt;I’m not sure how much additional time I want to invest in this little project. Because I’m not an expert in this type of technology or AI in general. I’m not sure if I’d do a super good job at implementing it further.&lt;/p&gt;
&lt;p&gt;I think right now I have something that’s a very interesting proof of concept. But while testing it, I have encountered a few bugs and little glitches. And I definitely don’t get the same exact level of accuracy while voice typing with this tool that I’d get just pre-recording my voice and feeding it in all at once. But it is IMHO a more convenient way to write documents than recording a big audio file all at once.&lt;/p&gt;
&lt;p&gt;Internally what this does is it breaks up the audio into small little snippets and parses each one of those snippets automatically. This doesn’t do wonders for interacting with the underlying model because it’s not consistent with the assumptions being made in Whisper.&lt;/p&gt;
&lt;p&gt;The underlying Whisper model and its ability to parse grammar and everything kind of assumes that it is dealing with really long blocks of audio that it can go through all at once.&lt;/p&gt;
&lt;p&gt;When I dictate a lot to it at once it kind of jumbles up the grammar/punctuation. This is pretty easy to correct for, but it does look a bit weird, at least until I get done correcting it. I’m actually writing this right now with it.&lt;/p&gt;
&lt;p&gt;The method I am using to feed a constant stream of audio seems kind of like a Jerry-rig, instead of an actual solution to the underlying problem.&lt;/p&gt;
&lt;p&gt;In its current state, it comes pretty close to meeting my immediate need for a dictation program/voice typing program.  I mostly just want some reasonably accurate way to write short to medium sized documents.&lt;/p&gt;</content><category term="Blog"/></entry><entry><title>The Whisper Speech to Text Library Appears Really Powerful</title><link href="/posts/whisper-speech-to-text/" rel="alternate"/><published>2022-09-23T00:00:00+00:00</published><updated>2022-09-23T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2022-09-23:/posts/whisper-speech-to-text/</id><summary type="html">&lt;p&gt;There’s a new speech-to-text program/library that just got released by OpenAI as open source called &lt;a href="https://github.com/openai/whisper"&gt;Whisper&lt;/a&gt; and it’s impressed me quite a bit so far. It’s really powerful and it competes pretty well with the incumbent major speech-to-text tools in terms of accuracy.&lt;/p&gt;
&lt;p&gt;The caveat being …&lt;/p&gt;</summary><content type="html">&lt;p&gt;There’s a new speech-to-text program/library that just got released by OpenAI as open source called &lt;a href="https://github.com/openai/whisper"&gt;Whisper&lt;/a&gt; and it’s impressed me quite a bit so far. It’s really powerful and it competes pretty well with the incumbent major speech-to-text tools in terms of accuracy.&lt;/p&gt;
&lt;p&gt;The caveat being that its not a full featured tool. Currently all it does is is convert an audio file to text. It’s a command line tool so far. It doesn’t have anything more sophisticated like simulated keyboard input or training or any of the types of things you’d expect from a well-established desktop speech-to-text program like Dragon Naturally Speaking or something. It’s more intended as a research model than anything, but the results I’ve gotten out of it are spectacular. It is not quite a hundred percent perfect, but the error rate is impressively small. The accuracy is better than even mature speaker dependent systems like Dragon. It has a very strong model of grammar and gets things that are really difficult for most speech to text programs like capitalization, or prepositions or small words.&lt;/p&gt;
&lt;p&gt;It gets a lot of technical/specialized terms right, this is something most other speech to text systems I’ve used have a lot of difficulty with. It has the same accuracy to expect from a speaker dependent program that’s been trained a while on your voice even though it’s a speaker independent program that just works off of a generic model of speech.&lt;/p&gt;
&lt;p&gt;As part of my testing, I read a few of my older blog posts to it. The audio clips + the generated text can be seen &lt;a href="https://gogs.theopjones.blog/theo/sampleaudioclips"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;It seems to work well with a wide range of microphones. I tried my SM7B (a standard broadcast dynamic microphone), but I also tried more exotic microphones. One of these is a a stenomask. A stenomask is a microphone that goes right up against your mouth and you speak in a really really soft voice into it, so it gives you privacy and people nearby can’t hear what your are saying. These microphones are very frequently used with speech recognition, but because stenomasks muffle the sound of the speaker’s voice, a lot of speech recognition programs have trouble with stenomasks, and when you use one versus a regular microphone accuracy tends to go downhill. I tried the stenomask on whisper and the same pattern of declined accuracy occurred but the accuracy was still pretty impressive and quite usable.&lt;/p&gt;
&lt;p&gt;There are of course some limitations. I’d say it’s only kind of sort of open source. You can download the tool to convert audio into text and you can download a pre-built model for it but software to actually generate that model from audio hasn’t been released yet. Additionally, the model is based on a lot of not open source licensed data so you can’t just regenerate the exact same model from public data sources even if you had the model generation code. So, I would say it’s not fully open source although it’s still a lot more open than basically any common and wide widely used speech to text program&lt;/p&gt;
&lt;p&gt;Its also just one part of the puzzle of a fully featured speech to text system. To be a full competitor to other tools there would have to be a whole ecosystem of software using this model, and not just what we have now – a way to convert an audio recording to text. This includes integrations to other software, and integrations with the operating system. Of course, none of that exists for this particular speech to text model. But it appears that the broader open source community is working on ways to make use of this tool. There is, for instance, &lt;a href="https://github.com/mallorbc/whisper_mic"&gt;a repo on Github&lt;/a&gt; for a program that can take in live microphone input and run it through speech to text in real time.&lt;/p&gt;
&lt;p&gt;I have been long interested in speech-to-text systems because I have a handwriting disability that makes it hard for me to quickly type and write normally, hopefully this progress means that some of the big incumbent sellers of speech to text software will have competition from the open source community.&lt;/p&gt;</content><category term="Blog"/></entry><entry><title>Home Servers, Tunneling, etc</title><link href="/posts/home-servers-tunneling-etc/" rel="alternate"/><published>2022-09-17T00:00:00+00:00</published><updated>2022-09-17T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2022-09-17:/posts/home-servers-tunneling-etc/</id><summary type="html">&lt;p&gt;As a follow-up to my post earlier this week, I’ll discuss some other interesting things about setting up a home server.&lt;/p&gt;
&lt;p&gt;Unfortunately, the technology here is a little bit opaque, and I’m not really aware of any good documentation that exists on how to set up servers that …&lt;/p&gt;</summary><content type="html">&lt;p&gt;As a follow-up to my post earlier this week, I’ll discuss some other interesting things about setting up a home server.&lt;/p&gt;
&lt;p&gt;Unfortunately, the technology here is a little bit opaque, and I’m not really aware of any good documentation that exists on how to set up servers that is newbie friendly. Most of the writing here doesn’t really start from first principles, and a lot of what you’ll find is aimed at super knowledgeable people, or people like IT systems administrators.&lt;/p&gt;
&lt;p&gt;There’s a lot of stuff on Internet forums and on Reddit and on various peoples blogs. And when I figure this stuff out I do a lot of Googling and visiting Reddit threads, and visiting Stack Overflow threads.&lt;/p&gt;
&lt;p&gt;I’ve thought about writing a bit more about how the technology works and how to set up this type of server. But this is not something i’ve done yet&lt;/p&gt;
&lt;p&gt;Setting up HTTPS has become a lot easier than it used to be. Caddy, which I use for the reverse proxy in my server basically handles SSL without me having to do much. There’s also a helper for NGINX which deals with a lot of the setting up the reverse proxy and setting up SSL.&lt;/p&gt;
&lt;p&gt;The existence of Let’s Encrypt has basically eliminated the need to buy SSL certificates from designated certificate authorities, and it’s what the tools I mentioned above are built on top of.&lt;/p&gt;
&lt;p&gt;The security situation is kind of a mixed bag, there are some tools I ran into that have super insecure default configurations, fortunately the security of the most common software programs has improved a lot compared to where it used to be. Most of the big tools that you’ll run into like Web servers and so on are pretty much secure by default, you would have to actively change the configuration in undesirable ways to make it insecure.&lt;/p&gt;
&lt;p&gt;And I think container programs like Docker and so on also help a lot with security, basically every application I have running on my server has its own docker container. The Caddy reverse proxy works as the glue between these containers. &lt;/p&gt;
&lt;p&gt;Docker is a way of packaging software programs with needed libraries and dependencies, it functions in a very VM like way – there is a high level of isolation between the different containers by default. This isolates security issues, if one of the services running on the server gets owned it’s hard for the hacker to privilege escalate to the rest of the server, so it’s possible to just deal with the security issue by just nuking that one container and starting fresh.&lt;/p&gt;
&lt;p&gt;Additionally, since there are a lot of docker images that are packaged either by the developers of the software or by someone else upstream, it’s pretty easy to find a docker container where everything’s packaged into a pretty secure by default container.&lt;/p&gt;
&lt;p&gt;For backups, I use the Duplicati tool, set to make daily backups of the server. It’s possible to back up to a portable hard drive, or to another server with Duplicati on it that’s off-site. I haven’t taken any of these purist paths, and I have taken the more non-self hosted route of uploading my data to a cloud storage provider (in this case Wasabi).&lt;/p&gt;
&lt;p&gt;Duplicati is capable of encrypting the backups before they go to the cloud storage provider, or friend’s server, or whatever else you’re using for your remote backup.&lt;/p&gt;
&lt;p&gt;There are two ways to connect the server to the outside world.&lt;/p&gt;
&lt;p&gt;The traditional way, what I used, is to get a static IP address from your ISP. AT&amp;amp;T, who I use for my Internet, sells static IP addresses in a /29 block, that is six usable IP addresses, unfortunately, they won’t give you just one static IP address. Additionally, I still have access to one dynamic IP address from them.&lt;/p&gt;
&lt;p&gt;My router/gateway/modem gets assigned one of the static IP addresses, the home server gets assigned another, basically every other device on my network gets put behind the dynamic IP address.&lt;/p&gt;
&lt;p&gt;Even for standard home dynamic IP addresses, IP address geolocation, at least what you can do from publicly available data sources, isn’t super accurate, the main thing you’ll get with almost perfect accuracy is country and which ISP you’re using.&lt;/p&gt;
&lt;p&gt;If it’s a standard dynamic IP address, it will probably be able to take a really good guess at what city the user of that IP addresses is in, and a pretty inaccurate guess at what neighborhood they are in. Short of getting the ISPs logs of IP address allocations or customer records, you’ll never be able to map this IP address one to one with a physical geographic location.&lt;/p&gt;
&lt;p&gt;For the static IP addresses, as far as I can tell AT&amp;amp;T (and my guess would be most other major ISPs) allocate all of their static IP addresses from one big pool of IP addresses without giving much consideration to geographic area. I haven’t seen any of the IP address geolocation services accurately guess anything else other than what country I am in.&lt;/p&gt;
&lt;p&gt;I don’t consider the privacy implications of a static IP address that huge. Probably the main risk of using a static IP address is that it is known to the public and the network it’s on becomes susceptible to denial of service attacks.&lt;/p&gt;
&lt;p&gt;The barrier of entry to conduct a fairly crippling denial of service attack on a small server or network is pretty low. Taking down your typical home server on a fiber Internet connection is definitely something the typical unskilled script kiddie can do.&lt;/p&gt;
&lt;p&gt;The more newfangled way of connecting your server to the internet is to use a tunneling service.&lt;/p&gt;
&lt;p&gt;Ngrok and PageKite are two pretty good examples of these types of services. Your server opens a connection to the tunneling service, and the tunneling service assigns an IP address to your traffic (or subdomain that can be attached to a domain name as a CNAME record).&lt;/p&gt;
&lt;p&gt;All of these tunneling services hide the server’s IP address from the open Internet.They also have the added security benefit of adding another step between spinning a service up and exposing it to the open Internet, making it harder to accidentally expose a service that shouldn’t be attached to the public Internet.&lt;/p&gt;
&lt;p&gt;The one I’ve done the most experimentation with has been Cloudflare Tunnel. The biggest problem with this service is that it kind of adds another ISP-like intermediary between your server and the user. This is a step back in terms of avoiding over dependence on centralized services, but since the data itself lives on a server you control, it’s still an improvement over your standard content silos or proprietary services. I didn’t use this type of service in my original post for this exact reason.&lt;/p&gt;
&lt;p&gt;Cloudflare goes a bit further than many of the other tunneling services terms of the amount of integration with your site – and not only routes the data, but also takes over the SSL certificate and does a lot of filtering and analysis on the traffic. This does provide a lot of useful security features, but it does mean that Cloudflare does have access to all the traffic going in and out of your server, and it can view SSLed traffic in unencrypted form.&lt;/p&gt;
&lt;p&gt;Cloudflare tunnel is probably the option I’d recommend to people who don’t have a super in depth technical knowledge.&lt;/p&gt;
&lt;p&gt;Concerns about this one company having control over an increasing amount on the Internet aside, it’s a very powerful service with a very powerful free tier. Other than the core tunneling service, it handles a lot of things.&lt;/p&gt;
&lt;p&gt;It handles reverse proxying, i.e. it can do what I use the Caddy web server for, that is acting as the glue between the various services running on your server.&lt;/p&gt;
&lt;p&gt;It can put private services that aren’t supposed to be accessible to the whole world behind an authentication portal, this can act as a source of two factor authentication for self hosted web apps that don’t necessarily support two factor authentication natively. It can provide a wrapper around SSH allowing external access through a web app, but with additional authentication.&lt;/p&gt;
&lt;p&gt;It provides a lot of security features. It can provide DDOS protection.The DDOS protection basically eliminates the threat posed by script kiddy style DDOS attacks, and if the rest of your server is configured correctly it can mitigate even very powerful denial of service attacks. It can rate limit bots accessing your site, which reduces some security threats. Particularly with the paid tier, it can provide a web application firewall which provides some attempts to block known exploits from being used on your site.&lt;/p&gt;
&lt;p&gt;It also provides a content delivery network, meaning that Cloudflare’s servers store and send out frequently accessed static content instead of sending a request to your server every time. This basically eliminates the type of scalability issues that I spent most of my original post talking about.&lt;/p&gt;</content><category term="Blog"/></entry><entry><title>Self-Hosting a Text Heavy Website is a Solved Problem (Even on a Home Server)</title><link href="/posts/self-hosting-text-website/" rel="alternate"/><published>2022-09-12T00:00:00+00:00</published><updated>2022-09-12T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2022-09-12:/posts/self-hosting-text-website/</id><summary type="html">&lt;p&gt;A while ago I got a mini PC and turned it into a home Web server. This turns out to be a remarkably effective way to host a website. The blog you’re reading right now is hosted on on this mini PC. And this home server is connected to …&lt;/p&gt;</summary><content type="html">&lt;p&gt;A while ago I got a mini PC and turned it into a home Web server. This turns out to be a remarkably effective way to host a website. The blog you’re reading right now is hosted on on this mini PC. And this home server is connected to my standard home Internet. And it’s not just my website, I host a lot of other services in order to improve the privacy of my data.&lt;/p&gt;
&lt;p&gt;It’s a good alternative to the increasing centralization of the Internet.&lt;/p&gt;
&lt;p&gt;I decided to do some testing to figure out how much traffic my set up can handle. Thereby confirming if a small and cheap mini PC connected to a home Internet connection is enough to host someone’s whole personal Internet presence.&lt;/p&gt;
&lt;p&gt;The Server&lt;/p&gt;
&lt;p&gt;The server I am using is a fairly inexpensive Beelink mini PC. It has 8 GB of RAM and a 256 GB mSata SSD, the exact model I bought doesn’t seem to be for sale right now, but a roughly equivalent device from the same manufacturer seems to be going for about $170 on Amazon right now.&lt;/p&gt;
&lt;p&gt;I feel like this is a good example of the performance range to expect from the type of device that someone would try building a home server on. It’s a fairly attainable level of computing power to just set aside for this application, particularly once you consider the expense that comes with things like cloud services or web hosting, or the indirect costs that comes with putting your data where it can be harvested or sold to advertisers.&lt;/p&gt;
&lt;p&gt;The Software
My home server runs Debian Linux with the Caddy web server. Most of the other services running on that server run under Docker containers. Almost everything on the server is freely downloadable open source software.&lt;/p&gt;
&lt;p&gt;The Internet Connection
My Internet connection is a 1 Gbps (Symmetrical up/down) fiber connection. I have also bought a block of static IP addresses, however this isn’t strictly necessary for hosting a Web server, there are many tunneling services that will provide your server a good way to receive connections from the outside Internet. One such service I’ve experimented with in the past is Cloudflare Tunnel.&lt;/p&gt;
&lt;p&gt;Despite the past experiments, my set up is not behind any proxying services or CDNs. It’s a direct connection from the users to the server.&lt;/p&gt;
&lt;p&gt;The main reasons to get a static IP address block are if you want more flexibility, ability to host services that require ports other than standard HTTP or HTTPS, or if you want an alternative to centralized services that would otherwise have to be used.&lt;/p&gt;
&lt;p&gt;The Website&lt;/p&gt;
&lt;p&gt;Right now, the website you’re reading is built with the Hugo static site generator. This creates a fairly lightweight website, lighter weight than say a WordPress blog, although in the past I’ve successfully hosted a WordPress website on the same server.&lt;/p&gt;
&lt;p&gt;While I haven’t done the same level of stress testing that I’ve done with the Hugo site, I feel that WordPress is definitely usable for a personal website on this server.&lt;/p&gt;
&lt;p&gt;How I Tested the Maximum Load the Server Can Take&lt;/p&gt;
&lt;p&gt;I used two services to load test the server. The first is Loadforge, the second is Loadster. These are both paid commercial services to test how much traffic a website can take.&lt;/p&gt;
&lt;p&gt;I configured these services to test the usage pattern where first the user opens a post on the blog, and then clicks on the homepage of the blog and loads it.&lt;/p&gt;
&lt;p&gt;I picked this usage pattern because I think it describes vaguely what a user would do in the case of what I think is probably the highest level of usage that a normal person would encounter on their personal blog – a post going viral and suddenly getting a large influx of traffic driven by external websites. Something like the famed “Reddit hug”.&lt;/p&gt;
&lt;p&gt;Results&lt;/p&gt;
&lt;p&gt;As tested by both services, my home server has the ability to handle about 300 HTTPS requests per second. If the load goes much beyond that the rate of errors getting returned by the server increases dramatically, and response times to queries slow down drastically.&lt;/p&gt;
&lt;p&gt;The limiting factor is pretty clearly the servers ability to handle that many simultaneous requests. Internet bandwidth didn’t seem to matter much, during the testing the bandwidth usage didn’t exceed 50 megabits per second. Meaning that while I have a fairly high end Internet connection, there’s a lot of leeway and that most people who want to host their own blog on a home server could do pretty well if they use a slower Internet connection.&lt;/p&gt;
&lt;p&gt;Based on watching the performance of unrelated tasks on a different computer on the network, I also don’t feel that the performance of the router or the modem was a bottleneck either. However I haven’t really been able to determine what the bottleneck is. RAM usage and CPU usage didn’t really seem to hit the limits of the server.&lt;/p&gt;
&lt;p&gt;I don’t really have the resources to test the exact parameters and limits more, since the server load testing services I have found to be reliable are quite expensive to run. I don’t really have the budget to throw more resources towards this experimentation than I already have.&lt;/p&gt;
&lt;p&gt;But this is enough for basically any plausible use. It is enough to have a website that can withstand getting posted on the front page of Reddit (&lt;a href="https://www.tylermw.com/visualizing-a-reddit-hug-of-death-and-how-to-reddit-proof-your-website-for-pocket-change/"&gt;according to one source I found&lt;/a&gt;, the 99th percentile level of load that comes from being posted on the front page of Reddit is about 25 users a second. For that particular website, each user makes about 15 requests to the server. Meaning that the highest level of load that that particular website was put under was something around 360 requests per second.&lt;/p&gt;
&lt;p&gt;That is still a little bit over what my server was able to be benchmarked at during the load testing.&lt;/p&gt;
&lt;p&gt;However, based on my tweaking and experimentation, a well optimized blog can probably get substantially below that, as long as most users don’t dig deep into the archive. For example, a page load on my site only causes three requests to be made to the site. Additionally, tools like CDNs would substantially improve performance also.&lt;/p&gt;
&lt;p&gt;So, my conclusion is that, yes, a self hosted blog that is well optimized can be hosted on a standard home Internet connection using a cheap computer as a server.&lt;/p&gt;
&lt;p&gt;Hosting text heavy content in a decentralized way, is therefore basically a solved problem. The computing power and Internet connectivity available to the typical person means that anyone can self host a website without needing to rent server space, use a content silo, or pay to have someone else host it.&lt;/p&gt;
&lt;p&gt;However, once you include a lot of rich multimedia the bandwidth requirements start to skyrocket pretty quickly, and depending on how the website is structured, there could be a lot more requests to the HTTP server. I think recent advances in decentralized Internet technology might come to play with higher bandwidth content. Sharing large files effectively and in a distributed way seems to be to wheelhouse of technologies like IPFS, while the task that is easily handled by standard HTTP, that is, hosting lots of small text files is the Achilles’ heel of IPFS and similar. I feel that there is a good potential for a mixed solution combining both traditional technologies and some of these newer technologies.&lt;/p&gt;</content><category term="Notes"/></entry><entry><title>Old Roll of HP5 — Berkeley, CA 2022</title><link href="/posts/berkeley-hp5-2022/" rel="alternate"/><published>2022-06-01T00:00:00+00:00</published><updated>2022-06-01T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2022-06-01:/posts/berkeley-hp5-2022/</id><summary type="html">&lt;p&gt;From an old roll of HP5 taken sometime in 2022 around Berkeley, CA and the San Francisco Bay Area, developed in 2023.&lt;/p&gt;</summary><content type="html">&lt;p&gt;From an old roll of HP5 taken sometime in 2022 around Berkeley, CA and the San Francisco Bay Area, developed in 2023.&lt;/p&gt;
&lt;p&gt;&lt;img alt="Berkeley Photo" src="/images/photography/berkeley-hp5-2022/PhotoLibrary__1970__01__000537870030 (1).jpg"&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt="Berkeley Photo" src="/images/photography/berkeley-hp5-2022/PhotoLibrary__1970__01__000537870033.jpg"&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt="Berkeley Photo" src="/images/photography/berkeley-hp5-2022/000537870022 (1).jpg"&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt="Berkeley Photo" src="/images/photography/berkeley-hp5-2022/PhotoLibrary__1970__01__000537870034 (1).jpg"&gt;&lt;/p&gt;</content><category term="general"/><category term="photography"/><category term="film"/><category term="berkeley"/><category term="san-francisco"/><category term="black-and-white"/></entry><entry><title>Quadratic Voting Does Not Scale</title><link href="/posts/quadratic-voting-does-not-scale/" rel="alternate"/><published>2022-04-21T00:00:00+00:00</published><updated>2022-04-21T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2022-04-21:/posts/quadratic-voting-does-not-scale/</id><summary type="html">&lt;p&gt;Quadratic voting is a potential voting method that has gotten a fair amount of discussion in various places, one of the most notable presentations on this is in &lt;a href="https://books.google.com/books/about/Radical_Markets.html?id=3ciXDwAAQBAJ&amp;amp;source=kp_book_description"&gt;Radical Markets&lt;/a&gt;. While the game theoretic justification for this voting method is sound under optimal conditions, with low information/transactional costs, and …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Quadratic voting is a potential voting method that has gotten a fair amount of discussion in various places, one of the most notable presentations on this is in &lt;a href="https://books.google.com/books/about/Radical_Markets.html?id=3ciXDwAAQBAJ&amp;amp;source=kp_book_description"&gt;Radical Markets&lt;/a&gt;. While the game theoretic justification for this voting method is sound under optimal conditions, with low information/transactional costs, and perfectly rational actors, I believe that there are flaws in this idea that make it unusable in most real world circumstances where it is being proposed. It is a system that is perfect on paper, but unsuited to the real world.&lt;/p&gt;
&lt;p&gt;A flaw of many real world voting systems is that there is not a good way to allow voters to provide information about the relative importance of issues. This means that people who only have a weak preference on an issue will be in effect over represented in political outcomes on that issue. Quadratic voting is a proposal to fix this issue.&lt;/p&gt;
&lt;p&gt;In a QV ballot a voter has a number of points that they can allocate across issues. Allocating more points to an issue makes the vote on that issue weighted more. The value of each point declines as you add more points to an issue. Accordingly, there is an incentive to split your points across multiple issues.&lt;/p&gt;
&lt;p&gt;I won’t go into too many details about the justification and game theory here because its been covered by other sources quite a bit. I would assume that if you are reading this blog post, then you probably have some knowledge here. However, I have provided a bit of a summary at the end of this blog post — focusing on the quadratic funding variant because it is somewhat easier to build an intuition around.&lt;/p&gt;
&lt;p&gt;What do I see as the problems with this proposal?&lt;/p&gt;
&lt;p&gt;In summary, it runs into issues with very large elections, and breaks down when people don’t act as Homo economicus purely rational self-interested actors.&lt;/p&gt;
&lt;p&gt;For now, I will leave you with two examples of what I am talking about.
- How to optimally allocate voting points&lt;/p&gt;
&lt;p&gt;The optimal strategy in a QV system would be to allocate votes in proportion to the value of a vote, not the subjective importance of the outcome of that particular issue.&lt;/p&gt;
&lt;p&gt;By value of a vote, I mean the probability that the voter will be the pivotal voter that decides the outcome of the election * the importance of the outcome.&lt;/p&gt;
&lt;p&gt;This can result in the game theoretic optimal allocation of voting points being fairly counterintutaive in cases where both large and small elections are on the same ballot.&lt;/p&gt;
&lt;p&gt;Lets imagine that there are two elections on the ballot 1) the election for the U.S President, where you have a one in a million chance of becoming the pivotal voter, and 2) the county dogcatcher where you have a one in 10 thousand chance of becoming the pivotal voter. Lets say you care about the outcome of the presidential election 10,000 more than the dogcatcher.&lt;/p&gt;
&lt;p&gt;Because of the probability of being the pivotal voter in each case the value of a vote in the dogcatcher election is 100 times more. Therefore, the strategically optimal way to fill out your ballot is to allocate approximately 10 times as many points to the dogcatcher.&lt;/p&gt;
&lt;p&gt;In a real world election using this method, most people may not be that extreme in how strategically they vote — and there will probably be wide variance in how people allocate their votes taking into account the pivotal voter probability. This mix of strategic and non strategic voting will eat at the efficiency benefits of the system.
- High minimum threshold for issue importance&lt;/p&gt;
&lt;p&gt;Consider the &lt;a href="https://vitalik.ca/general/2019/12/07/quadratic.html"&gt;funding-matching version of quadratic voting&lt;/a&gt; (picking this variant because it is particularly easy to follow the math here).&lt;/p&gt;
&lt;p&gt;Lets imagine that there is a public good that 1 million people donate one cent to. In this case, if quadratic funding was used to allocate matching funds, $10 billion would be allocated — $10,000 per capita.&lt;/p&gt;
&lt;p&gt;This scales upward with population, if there were five million donors donating one cent each — 250 billion would be allocated to the project — $50,000 per-capita.&lt;/p&gt;
&lt;p&gt;And if someone for various reasons donates a larger sum, that will be magnified to an absurd degree — making any voting behavior other than being perfectly rational and self-interested a system breaker.&lt;/p&gt;
&lt;p&gt;As you can see, quadratic funding/voting can only really be used for very huge issues and in fairly small communities given the practical limits of how people think — defeating one of its main features, allowing more day to day political participation in every-day political decisions without the limitations of direct democracy under more standard voting systems.&lt;/p&gt;
&lt;p&gt;Additional notes / Overview of Game Theory Justification for QV&lt;/p&gt;
&lt;p&gt;Here is a brief intuition for how QV works / where the theoretical justification comes from.&lt;/p&gt;
&lt;p&gt;Imagine you are in a home owner’s association that is considering building a swimming pool. You and the other 100 members each get $100 of value out of the pool. This means that you would be willing to pay up to $100 to have the pool built.&lt;/p&gt;
&lt;p&gt;If contributions were completely voluntary, the marginal value of each dollar you contribute would be equally distributed across everyone in the association. Ie. if you contribute enough to completely build the pool, you would only get $100 of value out of that.&lt;/p&gt;
&lt;p&gt;In effect, for each dollar of value that you create with your personal contribution, you only get one cent of personal value.&lt;/p&gt;
&lt;p&gt;Therefore, if you were a perfectly rational and self interested actor you would only want to pay this if the pool’s total cost was $100 or less. This applies even if the total value for everyone of the pool was much more than that.&lt;/p&gt;
&lt;p&gt;But in a world where everyone is a perfectly rational actor, individual willingness to pay can be a very useful source of information on what people’s preferences are.&lt;/p&gt;
&lt;p&gt;Lets imagine that your HOA comes up with an idea — using fees to create a matching fund for projects submitted by members.&lt;/p&gt;
&lt;p&gt;Is there in effect a way to translate the individual willingnes to contribute money into a estimate of the total value of the project?&lt;/p&gt;
&lt;p&gt;Lets also say that everyone is behaving like a perfectly self-interested perfectly rational actor. Furthermore, lets say that theb good/service being funded is non-excludable — its not walled off to only the people who contribute.&lt;/p&gt;
&lt;p&gt;For that swimming pool example, what ratio of matching donation would it take to make it rationally self interested for yourself to donate enough such that your donation + the matching funds = *the social value of the project).&lt;/p&gt;
&lt;p&gt;For the example of a project where the benefits are split across 100 people, the correct matching is $99 dollars for each donated dollar.&lt;/p&gt;
&lt;p&gt;The optimal maximum funding for a project where each person donates a dollar is actually, 100 * 100 or $10,000&lt;/p&gt;
&lt;p&gt;You can by this logic derive a general formula here for the case where everyone’s preferences are identical.&lt;/p&gt;
&lt;p&gt;Optimal project funding is equal to the individual donation amount times the square of the population size.&lt;/p&gt;
&lt;p&gt;Which is the core idea and why quadratic voting is quadratic.&lt;/p&gt;
&lt;p&gt;For the matching funding game, you can derive a general rule for cases where indvidual preferences (and therefore donation ammounts) vary. The total funding = the square of the sum of the square roots of the donations.&lt;/p&gt;
&lt;p&gt;Similar logic can be used for voting on funding projects with abstract points instead of matched monetary contributions (the aforementioned formula can be used to calculate the relative importance projects, and the funding avaliable can be allocated proportanate to this).&lt;/p&gt;
&lt;p&gt;And finally, similar logic can be applied to voting on issues/canidates. A quadratic voting ballot would allow voters to allocate points to each issue — with the weight given to that issue being equal to the square root of the number of points allocated.&lt;/p&gt;</content><category term="Blog"/></entry><entry><title>Thoughts on Mastodon and Avoiding Content Silos</title><link href="/posts/thoughts-on-mastodon-and-avoiding-content-silos/" rel="alternate"/><published>2022-03-27T00:00:00+00:00</published><updated>2022-03-27T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2022-03-27:/posts/thoughts-on-mastodon-and-avoiding-content-silos/</id><summary type="html">&lt;p&gt;I recently set up a Mastodon instance (username @theo@theopjones.com)&lt;/p&gt;
&lt;p&gt;The 500 character text limit on Mastodon does seem a lot better than Twitter’s shorter character limit.&lt;/p&gt;
&lt;p&gt;500 characters amounts to about 100 words, which is in the good middle range between really short content, and the longer …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I recently set up a Mastodon instance (username @theo@theopjones.com)&lt;/p&gt;
&lt;p&gt;The 500 character text limit on Mastodon does seem a lot better than Twitter’s shorter character limit.&lt;/p&gt;
&lt;p&gt;500 characters amounts to about 100 words, which is in the good middle range between really short content, and the longer form content that is good on WordPress and similar full-featured blogging engines. This is a type of use case that Tumblr is really good at, as Tumblr is gradually becoming a dying site, this does mean that Mastodon may be a suitable replacement.&lt;/p&gt;
&lt;p&gt;In my ideal world, most people on the internet would use open source software running on commodity infrastructure. I want a world in which where you decide to host your content fundamentally doesn’t matter and there is competition in content hosting.&lt;/p&gt;
&lt;p&gt;I want a world where people can just drop their current hosting provider without too much difficulty.&lt;/p&gt;
&lt;p&gt;A world where identity isn’t tied to a particular host. The internet does have a way to do decentralized identity – DNS. But most people don’t have a domain name that is the home for their content.&lt;/p&gt;
&lt;p&gt;The big thing that worries me about Mastodon from a structural perspective, is the fact that Mastodon simultaneously
- is generally structured in a way that means the vast majority of users won’t run their own instance or at least hire someone else to run an individual instance for them- has a primary mode of content moderation built around instance administrators blocking other instances.&lt;/p&gt;
&lt;p&gt;This could easily replicate the situation with email where there are very much first-tier email hosts. With email, Google and Microsoft have by far the best email deliverability. It seems possible that there could be similarly dominant instances of Mastodon that eventually turn it into a de facto centralized service.&lt;/p&gt;
&lt;p&gt;It is possible to come up with ways to design a decentralized platform that doesn’t have this issue.&lt;/p&gt;
&lt;p&gt;Spam, harassment and other similar ubiquitous problems in the social media ecosystem are fundamentally due to the fact that it’s way too easy to get your content in front of someone who has not opted into interacting with you.&lt;/p&gt;
&lt;p&gt;Sending someone a message on a Internet service is usually effectively cost free. And it is similarly easy to get on someone’s activity feed.&lt;/p&gt;
&lt;p&gt;If I were designing things, content filtering would be user based – and split in the three categories.
- There are the people that the user has directly opted into seeing.- There are the people who are trusted by a user that the user trusts.- And there’s the rest of the world.&lt;/p&gt;
&lt;p&gt;The first two categories would be allowed to pass through fairly effortlessly.&lt;/p&gt;
&lt;p&gt;The rest of the world, would have to do something costly, like a digital stamp or proof of work or something else that acts as a limit on excessive posting.&lt;/p&gt;
&lt;p&gt;This is pretty far from Mastodon.&lt;/p&gt;
&lt;p&gt;Additionally, one of the more fuzzy and hard to put exactly in to words issues with Mastodon that I have noticed is the nature of the people running things. I’m definitely not sure I have high faith in the judgment of the typical person who runs a Mastodon instance.&lt;/p&gt;
&lt;p&gt;I’m just seeing a lot of people who revel in the idea of the tech industry throwing around its weight to reshape society.&lt;/p&gt;
&lt;p&gt;A lot of the mindset I am seeing just feels like the same part of the tech industry that got social media into this mess. It feels entirely possible that a lot of power is in the hands of people who could be a lot more fickle and arbitrary then the people who run Twitter or Facebook.&lt;/p&gt;
&lt;p&gt;And also a lot of people for whom their main objections to the way that Facebook and Twitter are ran is that these platforms are not exerting enough control over their users.&lt;/p&gt;
&lt;p&gt;In a ideal world the judgement of where the lines of acceptable behavior in public discourse are would also be decentralized and democratized — instead of decisions being handed down from above.&lt;/p&gt;
&lt;p&gt;From a bird’s eye design and ideas perspective, I like what what I see in the Indieweb project (as mentioned before) &lt;a href="https://href.li/?https://indieweb.org/"&gt;https://indieweb.org/&lt;/a&gt; The emphasis on personal domains instead of shared instances is good. There is also an emphasis on trying to build software that works on commonly available infrastructure like LAMP stack hosting. And also the idea behind the Vouch anti-spam protocol &lt;a href="https://href.li/?https://indieweb.org/Vouch"&gt;https://indieweb.org/Vouch&lt;/a&gt; But all of the actually existing software here has been a pile of half-working kludges in my testing. While Mastodon actually works.&lt;/p&gt;</content><category term="Blog"/></entry><entry><title>The Costs and Benefits of Urban Development are Distributed Very Unequally</title><link href="/posts/urban-development-costs/" rel="alternate"/><published>2022-02-05T00:00:00+00:00</published><updated>2022-02-05T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2022-02-05:/posts/urban-development-costs/</id><summary type="html">&lt;p&gt;Urban development in big cities is very controversial, and there are politically powerful movements that oppose almost all new construction in big cities.&lt;/p&gt;
&lt;p&gt;There is one explanation that in my opinion just doesn’t hold water. This explanation focuses on incumbent property owners who want to increase the value of …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Urban development in big cities is very controversial, and there are politically powerful movements that oppose almost all new construction in big cities.&lt;/p&gt;
&lt;p&gt;There is one explanation that in my opinion just doesn’t hold water. This explanation focuses on incumbent property owners who want to increase the value of their property. According to this story, incumbent property owners will be for restrictions on new development in order to constrict the supply of available property, therefore driving up the price and the value of their property.&lt;/p&gt;
&lt;p&gt;The reason why this explanation doesn’t hold water is that the urban cores with the most opposition to development like San Francisco and New York City also have some of the lowest rates of property ownership and compared to the United States average some of the lowest percentages of people who live in owner occupied property versus renting their house. These are markets completely dominated by renters. And renters would have very little personal economic interest in driving up home prices. Some of the markets with more permissive regulations towards development also have some of the highest rates of owner-occupied housing in country. (footnote &lt;a href="#dfref-footnote-1"&gt;1&lt;/a&gt;)&lt;/p&gt;
&lt;p&gt;Additionally, opposition to development in these cities seems primarily driven by groups that claim to represent tenants, not groups that represent property owners. (footnote &lt;a href="#dfref-footnote-2"&gt;2&lt;/a&gt;)&lt;/p&gt;
&lt;p&gt;My explanation for where opposition to urban development comes from is based around the fact that the costs and benefits of development are distributed very unequally. Incumbent renters in these markets have very few personal gains from new development except under very long time frames and very large scales.&lt;/p&gt;
&lt;p&gt;Externalities of Urban Development&lt;/p&gt;
&lt;p&gt;Populist movements that oppose new construction make the claim that new development will cause gentrification which will drive out incumbent renters and displace current residents at the expense of wealthier newcomers. They state that new development will be a net negative for the people who currently live in neighborhoods where new construction occurs.&lt;/p&gt;
&lt;p&gt;But is there actual evidence for this, or is this argument just pure economic illiteracy?&lt;/p&gt;
&lt;p&gt;I think there is a fair amount of evidence for this.&lt;/p&gt;
&lt;p&gt;A 2019 article discusses the economic implications of development. (footnote&lt;a href="#dfref-footnote-3"&gt;3&lt;/a&gt;) Increased density in urban areas has quite a few benefits, in part because this means a lot more people will be in one place and the activities that benefit from having a lot of people in one place will become much more efficient. There are also social benefits to having a lot of people in one place. Average wages will go up and people will find more job opportunities along with having an easier time searching for a job, there will be more innovation, cities will become less car dependent and public transit will be more effective, denser urban areas produce less environmental impact than sparsely populated areas.&lt;/p&gt;
&lt;p&gt;Property values, however, will be increased. Paradoxically, land in urban areas may become more expensive if there are more uses for this land. In fact, a lot of the economic benefits will manifest in property values. This is great if you own property in one of these neighborhoods, it’s not so great if you rent.&lt;/p&gt;
&lt;p&gt;The article concludes, “the effect on rent exceeds the effect on wages. In a spatial equilibrium framework … there may be a collateral net-cost to renters and first-time buyers if residents are not perfectly mobile and housing supply is inelastic.”&lt;/p&gt;
&lt;p&gt;New development can be a windfall for property owners even beyond what economic theory would predict because of the sausage making of urban planning and the operation of municipal governments. In practice, each new development is a trench fight between those opposing development and developers. The permits to develop your land can be immensely valuable. Much more valuable than the land itself in many cases. Giving a property owner these permits gives said developer a huge windfall. The property owner gets to in effect charge a monopolist’s price on a new development because this developer is the only person to have the right permits.&lt;/p&gt;
&lt;p&gt;There is also a dynamic where many of the impacts of development that are positive occur very far away from the new development, while the negative impacts of this new development occur close to home. These positive impacts are very diffuse and subtle, while the negative impacts are immediate for those affected.&lt;/p&gt;
&lt;p&gt;A 2015 paper concluded that the benefits of new housing development are huge. (footnote &lt;a href="#dfref-footnote-4"&gt;4&lt;/a&gt; ) Wages would increase drastically on a national scale, and the economy would’ve grown 50% more than it had in the period between 1964 in 2009 if zoning regulations were more permissive. A handful of major metropolitan areas are the main sources of economically destructive restrictive housing policy.&lt;/p&gt;
&lt;p&gt;Improving housing policy would create an economic boon on a national level but it is also true that the cost would be reflected on a local level, as the bulk of this new housing development would’ve occurred in a select few urban areas.&lt;/p&gt;
&lt;p&gt;The economic benefits of new housing development are huge, and the United States’ major metropolitan areas desperately need more housing and therefore more permissive regulations on the construction of said housing (or at least some other mechanism to actually get this housing built). But the opposition a new housing construction doesn’t come out of the blue.&lt;/p&gt;
&lt;p&gt;The distributional impacts of new housing construction cannot be ignored and are the primary source of a lot of opposition to new development. Housing development must be done in a way that makes sure that the typical American benefits, and this development should be done in a way that benefits renters and enables more Americans to attain homeownership.&lt;/p&gt;
&lt;p&gt;In a way, the discussion of this issue in the economics press and in mainstream news articles, comes from the perspective of a subset of the American population. This is the subset of the American population that works in the fields of employment that are most likely to benefit from higher density, and the subsets of the American population are most likely to own property.&lt;/p&gt;
&lt;p&gt;In a way, the economics literature and the elite opinion in general comes from the viewpoint of those most likely to benefit from new development given the rest of the current regulatory environment.- - - - - -1 According to the data here Los Angeles, New York City, the San Francisco Bay Area are the metropolitan areas with the lowest percentage of the population owning their house. The areas of the nation with the highest rates of homeownership, tend to be concentrated in the South and the Midwest. This page is archived at &lt;a href="https://web.archive.org/web/20220205193255/https:/advisorsmith.com/data/states-and-cities-with-the-highest-homeownership-rates/"&gt;https://advisorsmith.com/data/states-and-cities-with-the-highest-homeownership-rates/&lt;/a&gt; &lt;a href="#ref-footnote-1" title="back to document"&gt;↩&lt;/a&gt;2 See the paper &lt;em&gt;Resisting the Politics of Displacement in the San Francisco Bay Area: Anti-gentrification Activism in the Tech Boom 2.0&lt;/em&gt; by Florian Opillard for a discussion of populist movements that oppose new construction. An internet archive version is at &lt;a href="#ref-footnote-2" title="back to document"&gt;↩&lt;/a&gt;3 &lt;em&gt;The economic effects of density: A synthesis&lt;/em&gt; by Gabriel Ahlfeldt and Elisabetta Pietrostefani, . An internet archive link is at &lt;a href="#ref-footnote-3" title="back to document"&gt;↩&lt;/a&gt;4 Housing Constraints and Spatial Misallocation by Chang-Tai Hsieh &amp;amp; Enrico Moretti . An internet archive link can be found at &lt;a href="#ref-footnote-4" title="back to document"&gt;↩&lt;/a&gt;###&lt;/p&gt;</content><category term="Blog"/></entry><entry><title>Censorship Degrades Public Trust</title><link href="/posts/censorship-degrades-public-trust/" rel="alternate"/><published>2022-02-01T00:00:00+00:00</published><updated>2022-02-01T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2022-02-01:/posts/censorship-degrades-public-trust/</id><summary type="html">&lt;p&gt;I recently watched the two controversial Joe Rogan episodes. Frankly, what I heard on those episodes was that out of the norm for current political discourse. I wasn’t in anything close to hundred percent agreement with those guests had to say, of course.&lt;/p&gt;
&lt;p&gt;I would say the controversial guests …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I recently watched the two controversial Joe Rogan episodes. Frankly, what I heard on those episodes was that out of the norm for current political discourse. I wasn’t in anything close to hundred percent agreement with those guests had to say, of course.&lt;/p&gt;
&lt;p&gt;I would say the controversial guests did make good points. I would say the discussion was about 50% reasonable points (many of which haven’t been discussed very much elsewhere) and about 50% crackpottery.&lt;/p&gt;
&lt;p&gt;I think censorship is counterproductive. When a large portion of the population is sympathetic to what your opponents say, you can’t censor your way to public consensus and maintain public trust.&lt;/p&gt;
&lt;p&gt;And even if you could enforce the correct viewpoint on the public, censorship is fundamentally a Faustian bargain where society is creating extremely dangerous infrastructure of mass surveillance and social control. The type of centralized authority and centralized infrastructure that is required to force the censorship that we’ve seen recently on the Internet is intrinsically dangerous. It’s an extreme act of hubris to think that if you give the right people that type of power, you’ll get an utopia.&lt;/p&gt;
&lt;p&gt;Another issue of widespread censorship is that suppressing discussion affects moderate speakers more than the extremists. Basically, when you’re an extreme critic of policy you’re going to get the ire of people no matter what you do. But when you’re more moderate, people are going to tolerate you as long as you shut up on the parts where you disagree with the party line.&lt;/p&gt;
&lt;p&gt;And I think that there is a dynamic where people hear cognizant points from these speakers and when the discussion has been suppressed, they often first hear those cognizant points from the controversial people.&lt;/p&gt;
&lt;p&gt;This gives a lot of credibility to the somewhat eccentric crackpots even when they are full of shit. I think that’s why a lot of people are interested in hearing these controversial podcasts, podcasts like Joe Rogan’s podcast are one of the rare places that you can hear actual discussion of some of these issues, instead of parroting of a party line is in many ways incoherent, arbitrary and rapidly changing.&lt;/p&gt;
&lt;p&gt;I don’t think anyone thinks that it’s the best possible source of commentary, I think a lot of people do think it’s the only place where you won’t hear commentary that’s internally lockstep of everyone else.&lt;/p&gt;
&lt;p&gt;And the whole idea of building public trust by censorship and basically telling people that they’re not allowed to have opinions on policies that are affecting their lives, is self-effacing fundamentally policymakers taking this approach will necessarily offend a large portion of the population and they will degrade public trust even more. It creates an adversarial relationship between policymakers and the public. And it creates a world where policymakers are too used to barking orders at the population, instead of finding ways to build public trust. This also ignores the vast variety of perspectives that are actually relevant to coming up with the best policy response. Different Americans are affected by the policies in different ways, policy elites will have their own biases and interests that are different than those of the typical American. This means that tight control over policy discussions will shut out many perspectives in a way that goes far beyond enforcing scientific objectivity or truth.&lt;/p&gt;
&lt;p&gt;I also think that as a whole US coronavirus policy has been highly corrupted by the fact that policymakers and media outlets who decided to treat China’s response as the objectively ideal response, or at least a baseline for one response should look like, instead of the policy responses of Asian democracies like Taiwan or South Korea. The implication of this has been that US policymakers and media outlets have been acting like leaders from a communist dictatorship and using measures that would only reasonably be sustainable in an authoritarian state, instead of coming up with measures that would be reasonable for a pluralistic democracy.&lt;/p&gt;</content><category term="Blog"/></entry><entry><title>Thoughts on Cryptocurrency and Web 3.0</title><link href="/posts/thoughts-on-cryptocurrency-and-web-30/" rel="alternate"/><published>2022-01-30T00:00:00+00:00</published><updated>2022-01-30T00:00:00+00:00</updated><author><name>Theo Jones</name></author><id>tag:None,2022-01-30:/posts/thoughts-on-cryptocurrency-and-web-30/</id><summary type="html">&lt;p&gt;I’m going to provide some thoughts on cryptocurrencies, NFTs, and the concept of a “web 3.0”. I’m not particularly enthusiastic about a lot of the things under that umbrella, I think the technology as exists today either has fundamental flaws in many cases or is a solution …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I’m going to provide some thoughts on cryptocurrencies, NFTs, and the concept of a “web 3.0”. I’m not particularly enthusiastic about a lot of the things under that umbrella, I think the technology as exists today either has fundamental flaws in many cases or is a solution in search of a problem.&lt;/p&gt;
&lt;p&gt;Fundamentally, these technologies are basically ways to replace centralized intermediaries. Let’s say you take a check to a bank, the bank will validate that the writer of the check has enough money in their balance, and will validate that the check was made on the correct authority, and will keep track of all the balances of the bank accounts under the bank’s management. The role of the bank in this transaction is to basically act as a trusted intermediary, the bank is a third-party that all account holders, both you and the person writing the check, have some trust with.&lt;/p&gt;
&lt;p&gt;How block chains and other decentralized web technologies replace these intermediaries is to have a very large number of people do this transaction validation and then aggregate the consensus of all these people. The point of the block chain is that there is a record of every transaction that’s publicly available, and that the number of people validating each transaction is so large that it’s impossible for a single individual to get control of every validator.&lt;/p&gt;
&lt;p&gt;The fundamental technical problem here, even in use cases where this makes sense like money, is that these decentralized protocols will inherently be extremely inefficient compared to centralized alternatives. Things like the resource usage that comes with cryptocurrency mining are symptoms of this problem. Not only is there that massive inefficiency of having literally hundreds of thousands of nodes on the network to validate transactions. There also must be a lot of magic behind the scenes in order to validate that each person claiming to be a node on the network has real computational power, instead of being a fake bot.&lt;/p&gt;
&lt;p&gt;This computational overhead, as I said earlier, means that the decentralized protocols will be vastly inefficient compared to highly centralized protocols. It’s much easier from a technical perspective to just bake trust into an intermediary. This means that the use case of these technologies will be naturally limited to cases where it’s worse to trust an intermediary.&lt;/p&gt;
&lt;p&gt;This will be things like transactions that are illegal, or discouraged by mainstream banks. Particularly with the latter, I do think this is almost a benefit of the technology. There are many cases where banks have fallen to political pressure or just their internal will and interrupted legitimate transactions, or cases where governments have been way too hasty to close down transactions. I do think there is a false critique of a lot of cryptocurrencies that just assumes that any transaction that normal banks don’t like or that governments don’t like is an illegitimate transaction, and that there is no value to anything that would enable these transactions.&lt;/p&gt;
&lt;p&gt;But the system of mass surveillance and mass social control that the fully digitized modern banking system is creating is really a net negative for society in many ways. I do think it would be wonderful to have some way to do financial transactions on the Internet that has a lot of the features of cash. There is something to be said for the fact that cash transactions can’t be easily surveilled, or that there is no central authority that can stop you from doing a cash transaction (at least without going to a lot of trouble to engage in litigation or similar). However, the great inefficiency of crypto currencies means there will be a massive case of adverse selection here. This technology fundamentally won’t replace transactions in the banking system as we know them.&lt;/p&gt;
&lt;p&gt;Additionally, there are fairly plausible systems for digital cash that ultimately involve normal intermediaries like banks to actually issue the digital cash. (footnote &lt;a href="#dfref-footnote-1"&gt;1&lt;/a&gt;)&lt;/p&gt;
&lt;p&gt;While it would probably require substantial regulatory change to enable these systems, this technology feels fundamentally better. You can get a lot of the good benefits of decentralization, including making it hard for centralized authorities to track or disrupt transactions, without the massive inefficiencies.&lt;/p&gt;
&lt;p&gt;I noticed a similar thing with &lt;a href="https://ipfs.io/"&gt;IPFS&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For those who don’t know, IPFS is a BitTorrent like decentralized protocol for file sharing. It has files addressable by a unique identifier for that file. Multiple nodes can store each file, when the user wants to download a file there is a networkwide search for the nodes that hold those files. There is also a connection between IPFS and a few blockchain based systems for encouraging people to host files, but it feels like the block chain stuff is tacked on here as a marketing gimmick rather than actually being a useful part of the protocol. However, IPFS does seem to be marketed as part of the whole decentralized web/web 3.0 thing. So, I think it’s fair to include IPFS broadly in this discussion.&lt;/p&gt;
&lt;p&gt;The thing is, IPFS is vastly slower than HTTP. The overhead for searching for files is huge. It’s truly difficult to in a fully decentralized way find which nodes host a file. Therefore, downloading a file that isn’t super popular is an extremely time intensive process. This holds especially true for files that are not very popular. This overhead is particularly crippling when it comes to downloading lots of small files, like a website. A while ago, I did some experimentation to see if it would be possible to host my personal websites on top of IPFS using a bridge like the one provided by Cloudflare to connect the IPFS files to the mainstream Internet. My experimentation came to the conclusion that the performance is so abysmal that its nowhere near possible to do this reliably. And I think this is fairly intrinsic to the decentralized nature of this protocol.&lt;/p&gt;
&lt;p&gt;I think because of this IPFS will be relegated to the same role as old-school filesharing systems like BitTorrent or Napster or Kaza or eMule. That is, illegal or socially condemned use cases like copyright infringement or adult content. There is fundamentally a good reason why these classic filesharing services became known primarily as pirate havens. The inefficiency is so crippling that when there is a viable alternative to using them, people will take that alternative. I strongly doubt that any decentralized service will come close enough to the performance of DNS and HTTP.&lt;/p&gt;
&lt;p&gt;The way that a lot of enthusiasts get around this issue is to chuck decentralization out the window, use a standard centralized web hosting service to actually distribute content to the vast majority of users, and wear the skin of the decentralized technology as a marketing gimmick. I would put services like Fleek in this category.&lt;/p&gt;
&lt;p&gt;Art NFTs as far as I can tell, are basically useless. It’s pure artificial scarcity and speculation. You could not make a better parody of the conspicuous consumption part of the art world if you tried. I haven’t seen any indication that these things actually do anything useful.&lt;/p&gt;
&lt;p&gt;Another fundamental issue of a lot of crypto projects is that the creators of these projects seem to fundamentally forget that at some point the digital assets will need to interact with the real world.&lt;/p&gt;
&lt;p&gt;Despite the fact I think the technology feels like fundamentally a dead end, I feel somewhat sympathetic when I see a lot of critics this coming from the perspective of authoritarianism. A lot of people seem to fundamentally have no idea why people would be disappointed with the current world and try to look for alternatives.- - - - - -1 See chapter 6.4 of Bruce Schneier’s book Applied Cryptography: Protocols, Algorithms and Source Code in C for a somewhat dated, but good overview of digital cash with centralized intermediaries. (relevant part starting on page 139 in the 2015 edition) &lt;a href="#ref-footnote-1" title="back to document"&gt;↩&lt;/a&gt;&lt;/p&gt;</content><category term="Notes"/></entry></feed>