Command ‘brew’ not found

Let me guess: you just installed Homebrew on your Linux system because you were going to use it to install some other software. Instead, when you tried to install the software, you got something like this:

$ brew install <software-name>
Command 'brew' not found

This message means your shell can’t find the brew binary. It happens because the Homebrew installation omits an essential step: adding the directory that contains the brew binary to your $PATH variable.

To fix this, you must add a line to your ~/.profile or ~/.bashrc configuration file that adds the path of the brew binary to your $PATH variable.
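
For reference, the line you end up with looks something like this (assuming the default install location shown later in this post; adjust the path if yours differs):

eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"

Here, brew shellenv prints the export statements for $PATH (and a few related variables), and eval applies them to your current shell.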

So, what is the path of the brew binary?

Earlier, when you installed Homebrew, the output showed the location of the brew binary. Optional: scroll back through your terminal output to see if it is still visible. For example:

$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
==> Checking for `sudo` access (which may request your password).
==> This script will install:
/home/linuxbrew/.linuxbrew/bin/brew
/home/linuxbrew/.linuxbrew/share/doc/homebrew
/home/linuxbrew/.linuxbrew/share/man/man1/brew.1
/home/linuxbrew/.linuxbrew/share/zsh/site-functions/_brew
/home/linuxbrew/.linuxbrew/etc/bash_completion.d/brew
/home/linuxbrew/.linuxbrew/Homebrew

Press RETURN to continue or any other key to abort

If you have sudo access, brew installed to /home/linuxbrew/.linuxbrew/bin/brew. Otherwise, it installed to ~/.linuxbrew/bin/brew.

In any case, the following commands (which I found slightly buried in the Homebrew documentation) will sort this out. They detect which path contains the brew binary and add the corresponding line to your .profile configuration file (and to .bash_profile, if you have one). Paste the following commands into your terminal.

test -d ~/.linuxbrew && eval $(~/.linuxbrew/bin/brew shellenv)
test -d /home/linuxbrew/.linuxbrew && eval $(/home/linuxbrew/.linuxbrew/bin/brew shellenv)
test -r ~/.bash_profile && echo "eval \$($(brew --prefix)/bin/brew shellenv)" >>~/.bash_profile
echo "eval \$($(brew --prefix)/bin/brew shellenv)" >>~/.profile

After this, restart your terminal. When it starts, .profile runs and adds the path to your $PATH variable.
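
To confirm the fix before installing anything, you can check that your shell now resolves brew. This is a minimal check using standard shell commands, nothing Homebrew-specific:

command -v brew    # should print the full path, e.g. /home/linuxbrew/.linuxbrew/bin/brew
brew --version     # should print the installed Homebrew version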

Now, verify that the brew command works by using it to install some software. Please let me know how this works for you. If it doesn’t, I’ll add some troubleshooting steps.

Now…why was I installing Homebrew? Ah yes, I installed it so I could install the GitHub CLI:

rolfedh@rolfedh-HP-Z2-Mini-G3-Workstation:~$ brew install gh
==> Homebrew is run entirely by unpaid volunteers. Please consider donating:
  https://github.com/Homebrew/brew#donations
==> Auto-updated Homebrew!
Updated 1 tap (homebrew/core).
==> Updated Formulae
Updated 48 formulae.
Updating Homebrew...

==> Downloading https://ghcr.io/v2/linuxbrew/core/gh/manifests/2.0.0
######################################################################## 100.0%
==> Downloading https://ghcr.io/v2/linuxbrew/core/gh/blobs/sha256:ac34664fe701dc
==> Downloading from https://pkg-containers.githubusercontent.com/ghcr1/blobs/sh
######################################################################## 100.0%
==> Pouring gh--2.0.0.x86_64_linux.bottle.tar.gz
==> Caveats
Bash completion has been installed to:
  /home/linuxbrew/.linuxbrew/etc/bash_completion.d
==> Summary
đŸș  /home/linuxbrew/.linuxbrew/Cellar/gh/2.0.0: 97 files, 27.8MB
rolfedh@rolfedh-HP-Z2-Mini-G3-Workstation:~$ 

minikube ugh! microk8s yay!

I just spent hours trying to get minikube up and running on Linux Mint 20.2 Cinnamon. I hit endless installation and dependency problems, even though I’ve installed it on other platforms in the past.

Finally, I installed microk8s using snap. No problem.
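
If you want to take the same route, the install is a one-liner; the group change saves you from prefixing every command with sudo. Both steps come from the standard microk8s instructions, not anything specific to my setup:

sudo snap install microk8s --classic    # install the microk8s snap
sudo usermod -a -G microk8s $USER       # let your user run microk8s without sudo
newgrp microk8s                         # pick up the new group in the current shell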

rolfedh@rolfedh-HP-Z2-Mini-G3-Workstation:~$ microk8s status
microk8s is running
high-availability: no
  datastore master nodes: 127.0.0.1:19001
  datastore standby nodes: none
addons:
  enabled:
    dashboard            # The Kubernetes dashboard
    dns                  # CoreDNS
    ha-cluster           # Configure high availability on the current node
    istio                # Core Istio service mesh services
    knative              # The Knative framework on Kubernetes.
    metrics-server       # K8s Metrics Server for API access to service metrics
    registry             # Private image registry exposed on localhost:32000
    storage              # Storage class; allocates storage from host directory
  disabled:
    ambassador           # Ambassador API Gateway and Ingress
    cilium               # SDN, fast with full network policy
    fluentd              # Elasticsearch-Fluentd-Kibana logging and monitoring
    gpu                  # Automatic enablement of Nvidia CUDA
    helm                 # Helm 2 - the package manager for Kubernetes
    helm3                # Helm 3 - Kubernetes package manager
    host-access          # Allow Pods connecting to Host services smoothly
    ingress              # Ingress controller for external access
    jaeger               # Kubernetes Jaeger operator with its simple config
    keda                 # Kubernetes-based Event Driven Autoscaling
    kubeflow             # Kubeflow for easy ML deployments
    linkerd              # Linkerd is a service mesh for Kubernetes and other frameworks
    metallb              # Loadbalancer for your Kubernetes cluster
    multus               # Multus CNI enables attaching multiple network interfaces to pods
    openebs              # OpenEBS is the open-source storage solution for Kubernetes
    openfaas             # openfaas serverless framework
    portainer            # Portainer UI for your Kubernetes cluster
    prometheus           # Prometheus operator for monitoring and logging
    rbac                 # Role-Based Access Control for authorisation
    traefik              # traefik Ingress controller for external access

My favorite Kubernetes books

Note: I did not include any affiliate links in this post.

Nigel Poulton’s Mastering Kubernetes bundle on Leanpub, $12.99.

A couple of years ago, I bought the ebook and audiobook versions of Nigel’s The Kubernetes Book. I became a huge fan of his humor and ability to deliver key information about complex topics.

This weekend, I found an interview with Nigel on the Leanpub Frontmatter podcast. I enjoyed hearing about how his father saved up for his first computer, how he got started in tech, how he started developing courses and writing books, and how he and his family are doing currently.

Inspired by all that, I decided to look for his current books on Leanpub and found the aforementioned Mastering Kubernetes bundle, which contains the 2021 editions of The Kubernetes Book and Quick Start Kubernetes.

You might ask: why pay for a book about Kubernetes, which already has comprehensive, well-written, free documentation? Aside from liking the guy, I bought Nigel’s books because he does a great job of summarizing the key information and helping me retain it. His work saves me countless hours of reading. He doesn’t just present dry facts; he shares meaningful insights and opinions about the platform.

(Nigel, if you’re reading this post, skip the next sentence because I don’t want you to raise your prices.) If reading his books saves me only 15 minutes and increases my mastery of Kubernetes, they are worth far more than the small price I paid for them.

I also like supporting Leanpub. Authors earn 80% royalties on their books and courses. The platform enables authors to set sliding-scale prices for their works. And it encourages authors to publish early and often, so they can better gauge their readers’ interests and needs and deliver more valuable information.

Vale: The unexpected team

Over the past two weeks, I created a Slack channel called #vale-at-red-hat and invited folks at work to join. Then I invited them to become collaborators on the repo.

It worked! A bunch of folks signed up to become full collaborators.

To work this way, you have to give up sole ownership and control. Invite folks to collaborate and contribute as equals. Be generous with recognition and credit for work done!

Let’s see what happens next!

Vale notes #3: Things start rolling

Last week, I did separate walk-throughs with two writers. My intention was to gain insight into the issues a typical user might encounter, and to use that information to improve the “getting started” repo I had created. I helped them install, configure, and start using Vale on their systems.

Installing Vale by using brew was problematic, so we installed the precompiled Vale binary instead. We also copied the .vale.ini configuration file and /styles directory to their doc project and updated the configuration to work with .adoc files.
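
For anyone following along, the .adoc change boils down to a couple of lines in .vale.ini. Here’s a minimal sketch, not the exact file we used; the StylesPath and the style name are placeholders for whatever you copy into your project:

StylesPath = .vale/styles
MinAlertLevel = suggestion

[*.adoc]
BasedOnStyles = Vale

The [*.adoc] section is what makes Vale apply the listed styles to AsciiDoc files.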

However, when we ran Vale against one of their .adoc content files, we got a mysterious error. Searching online didn’t help us solve the issue right away, so I made a mental note to try to reproduce the error later. The writer had been preparing a presentation on Vale for their documentation project team, but given these setbacks, they decided to delay that presentation.

The second writer completed the installation process on their own. Our meeting was more of a conversation than a walkthrough. It seemed like they were ready for a style that was more Red Hat-specific than the generic ones I provided.

This weekend, I put my insights from those two walkthroughs to work by completely updating the getting started repo, https://github.com/rolfedh/studious-fortnight:

  • Removing the previous /styles folder and .vale.ini file.
  • Adding the .vale styles folder and .vale.ini that Fabrice and Yana developed for the Eclipse Che documentation project.
  • Updating and expanding README.md with new information to help you get started with using Vale.
  • Adding a “Troubleshooting common errors” topic to help newcomers identify and correct common issues.
  • Adding a “Vale at Red Hat Blog” to the repo.

That blog will cover information similar to project and release notes. In contrast, these “vale notes” posts will focus more on my personal insights.

Vale notes #2: Talk to experts and stakeholders

In addition to eating my own dogfood, I started finding and talking to experts and stakeholders.

For experts, I went to folks inside and outside Red Hat. At Red Hat, I talked to Fabrice Flore-Thebault and Yana Hontyk. Last year, they presented on using Vale with their documentation sets: Eclipse Che and CodeReady Workspaces. I talked to them about their workflow, the issues they had fixed, and results.

Outside Red Hat, I pulled together an unconference session at the Write the Docs Portland conference with Mike Jang from GitLab and Jodie Putrino from NGINX. They spoke about their differing approaches to rolling out Vale at their organizations. Lynette Miles at TAG1 Consulting also contributed.

For stakeholders:

  • I talked with my manager, who was supportive.
  • I arranged a meeting for Yana and Fabrice to present their work to a small group of writers, content strategists, and managers who had expressed an interest in rolling out Vale. The group expressed a lot of interest, and we agreed there should be some follow-up actions, like additional pilot programs.

As an aside, Fabrice and Yana’s presentation included an impressive graph. It shows how they iterated on both the documentation and the Vale style rules to achieve nearly zero errors, both real errors and false positives, over the course of almost two years.

To prepare for the follow-up, I have created a repo that contains a preliminary set of Vale config files and styles: https://github.com/rolfedh/studious-fortnight (definitely a work-in-progress).

Vale notes #1: Eating my dog food

I’ve heard complaints about the peer review process at a variety of organizations where I’ve seen it in use. To simplify, they sound like this:

  • Writers: When I fix a set of issues and resubmit the PR, the reviewer(s) come up with a new set of issues.
  • Reviewers: I keep flagging the same set of stupid !%$%*$# issues.
  • Everyone: Ugh. This takes too much time and effort.

In some cases, the peer review process can be demoralizing and spark interpersonal conflicts.

Looking for a solution to this issue, I turned to Vale, a style linter that has gained traction at a variety of organizations, such as GitLab and NGINX. At Red Hat, a couple of doc projects have been using it for over a year, and there seems to be growing interest in expanding its use.

My first step was to start using Vale to get hands-on experience with it: eat your own dog food, etc. In my enthusiasm, I popped $57 for a Vale Server license.

Installing Vale Server wasn’t hard. Finding and configuring the pair of plugins to make it work with my Atom editor was a little confusing. Setting up the single plugin for VS Code was easy.

I installed all the built-in styles for Vale Server, but then had trouble applying them to my docs. Installation is not enough. One must also create a project in Vale Server and configure it to use the styles.

Overall, the process was tricky enough that I realized rolling it out to a team of writers would require documented procedures, tutorial videos, and live support.

It’s also unclear whether my organization would commit to purchasing Vale Server licenses, at least at first. So I would need to figure out how to roll out Vale, the CLI tool, instead of Vale Server. To make the distinction clear, I’ll refer to it as Vale CLI from now on.

Vale CLI is simpler to install and configure than Vale Server. However, I believe Vale CLI alone doesn’t integrate with editors like Atom and VS Code. With Vale CLI, you get all your feedback by running the vale command against your file or files from the CLI. For example:

$ vale README.md
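
You can also point it at several files at once, or at a whole directory; this is standard Vale CLI usage rather than anything specific to my setup:

$ vale README.md CONTRIBUTING.md    # lint specific files
$ vale docs/                        # lint everything under a directory that matches your .vale.ini globs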

The output looks something like this:

[Image: Typical feedback from Vale CLI]

In some ways, it’s less distracting to get this command-line feedback after you write a block of content. Vale Server, by contrast, highlights issues in your text editor when you save, so its feedback is more immediate but also, in some ways, more distracting. Six of one, half a dozen of the other.

Git rid of old branches :-)

Toss ’em out!

Every so often, I clean up old working branches I don’t need any more.

After I’ve written or revised content, and the pull request has been merged from my fork of the repo into the main branch of the organization’s repo, it’s time to get rid of the old working branches.

TLDR/Copy and paste

Here’s a summary of the commands for you to copy and paste.

cd <repo-directory>/
git checkout <main branch>
git fetch upstream <main branch>
git branch --merged
# Make sure the branch is dead
git branch -D <branch-name>
git push origin :<branch-name>

Verbose

Sections:

  • List the merged branches
  • Make sure the working branches are really dead
  • Delete the dead branches

List the merged branches

I start by listing the merged branches in my local repo:

$ cd openshift-docs/       # Change to the project/repo directory
$ git checkout main        # Check out the main or master branch
Already on 'main'
Your branch is up to date with 'upstream/main'.
$ git fetch upstream main  # Fetch information about branches from your upstream repo 
remote: Enumerating objects: 46, done.
remote: Counting objects: 100% (36/36), done.
remote: Compressing objects: 100% (9/9), done.
remote: Total 19 (delta 15), reused 12 (delta 10), pack-reused 0
Unpacking objects: 100% (19/19), 3.11 KiB | 176.00 KiB/s, done.
From github.com:openshift/openshift-docs
 * branch                main     -> FETCH_HEAD
   54901a001..b051f3c46  main     -> upstream/main
$ git branch --merged      # List local branches that are merged in the upstream repo
  RHDEVDOCS-2465-replace-docker
  RHDEVDOCS-2514
  RHDEVDOCS-2609
  RHDEVDOCS-2618
  RHDEVDOCS-2740-rn
  bz#1873372
* main

Make sure the working branches are really dead

Early on, when I start a new project, I use the ID of the Jira or GitHub issue to name the working branch and PRs. This makes it simple to know which issue, branch, and PR belong to each other.

When I’ve finished using a working branch, to make sure I don’t need it any more, I search my closed pull requests for the issue ID. I review the pull request to make sure that the PR that was merged into the main branch was also cherry-picked into all relevant release branches. I also look at the issue to make sure its state is “closed.”

When I’ve confirmed that a branch is not only merely dead, but really most sincerely dead, it’s time to…

Delete the dead branches

I return to my terminal and delete the branch by entering:

$ git branch -D <branch-name>

Then I push the deletion to origin by entering:

$ git push origin :<branch-name>
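
As an aside, here are a couple of equivalent commands in case you prefer them; these are standard git, not anything specific to this workflow:

$ git branch -d <branch-name>               # lowercase -d refuses to delete a branch that is not fully merged
$ git push origin --delete <branch-name>    # same effect as pushing the empty :<branch-name> refspec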

Notes

  • Many repos still use master as the name of their primary branch. In this post, I have changed the name to main and will continue to do so in future posts.
  • Please share your comments, questions, and suggestions for improving this post, below!

Using voice to text to write blog posts

I am sitting in our car with the dog while my wife has gone into the food store wearing a mask to buy groceries. I’m experimenting today to see how well writing a blog post using voice-to-text on my phone works.

As far as I know, there aren’t any good, inexpensive or free ways to do this on my Linux-based desktop computer, which I usually use for writing blog posts. But I find that my phone’s voice-to-text feature works extraordinarily well.

I’m sure there are similar options for users running Windows or macOS, but it doesn’t matter. This post isn’t about the availability of voice-to-text on different platforms.

So what I’m finding is that it works pretty well. All you have to do is spend a few minutes thinking through the purpose of the blog post. Then, you spend a few moments before each long sentence or paragraph thinking through what you want to say next. Then you narrate it.

Editing works the same way it always does: you touch the screen where you want to insert text, or you select the words you want to delete.

I find that I tend to do a lot less editing this way because my spoken phrases tend to be very natural and clear.

To summarize, I find the process pretty natural and easy. It seems to go a lot faster than typing blog posts. I’ll try composing blog posts this way more often.

Job aid: Git cherry-pick a commit and manually resolve a conflict

This post is a short version of Git: Cherry-pick a commit into a branch and resolve a merge conflict. Replace `upstream/enterprise-4.8` with whatever your target branch is.

I copy/paste these commands into my terminal.

git checkout master                            # start from your local master branch
git branch -D enterprise-4.8                   # delete any stale local copy of the target branch
git checkout --track upstream/enterprise-4.8   # create a fresh local branch that tracks the upstream target
git status                                     # confirm the new branch is up to date

Verify that “Your branch is up to date with ‘upstream/enterprise-4.8’.”

git cherry-pick <commit hash>

Go to the pull request that has the merge failure (e.g., like this example). Copy the commit hash. In the terminal, replace <commit hash> with the real one. Enter the command.

Ignore the “CONFLICT (content): Merge conflict in <path/filename>” message for now; you’ll resolve it in the next step.

In Atom editor (or whatever), manually resolve the merge conflict.
Save and commit the changes as “Manual CP of RHDEVDOCS-<jira#> #<pr#>”.
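
On the command line, that step looks roughly like this; the file path is a placeholder, and running git commit here completes the in-progress cherry-pick (git cherry-pick --continue also works):

git add <path/filename>
git commit -m "Manual CP of RHDEVDOCS-<jira#> #<pr#>"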

git status

Confirm “Your branch is ahead of ‘upstream/enterprise-4.8’ by 1 commit.”

git push -f origin enterprise-4.8

Go to origin/enterprise-4.8 in GitHub and create the pull request with the following description.

https://issues.redhat.com/browse/RHDEVDOCS-2617
Previously merged as https://github.com/openshift/openshift-docs/pull/29491/files
[enterprise-4.8]

Copy the URL of this new PR and paste it to a comment in the Jira issue (e.g., in RHDEVDOCS-2617).