Mauricio Lomelin

Improving the Developer Experience with Shell Scripts

Published: 2026-02-17

I am a big fan of Developer Experience (DevEx), and I believe in workflow optimization as a way of protecting my focus. I’d rather spend my time and mental energy on high-level challenges than on repetitive minutiae, so I follow a simple rule: if a task repeats frequently enough, write a script.

For terminal-based tasks, shell scripting provides a framework for writing tools that orchestrate command-line utilities with consistent, predictable logic. Shell scripts optimize your workflows by offloading the cognitive load of minor details.

I manage my personal shell scripts using a simple design framework:

  1. Central Script Location. All my personal scripts reside in ~/_local/bin/ and ~/_local/lib/. This follows my overall standards for managing my personal code and data.

  2. Standard Application Script Template. Every application script is based on a template. This enforces consistent structure, error handling, and logging - I don’t have to start new scripts from scratch!

  3. Common Initialization Script. Every script sources the same initialization script. This ensures all my scripts share a unified environment - a consistent baseline for shell settings, utility functions, and global variables.

  4. Pragmatic Style Guide. Every script follows a simple style guide for naming conventions, code formatting, and design patterns. This makes it easier to reuse code across scripts, and to maintain a growing library of scripts.

In this post, I cover these points in more detail.

Shell Scripts and Workflow Optimization

While shell scripts are designed to automate tasks and reduce the likelihood of human error, their true value lies in optimizing your workflow so you can reclaim mental bandwidth to focus on solving big-picture problems.

Consider, for example, initializing a new remote repo. While the task appears straightforward, the steps and effort to complete it depend on context: does your IDE have the extensions to do it via a GUI? Are you starting from scratch or from a local project? Is your local project already under version control? Is there a naming conflict with a remote repo? Do you remember the commands to use? A script can handle this logic and ensure the right commands are executed for each scenario, saving me the time of looking up and running every command.
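To make the branching concrete, here is a sketch of the decision logic only, not a real implementation: given the local state, it prints the commands a hypothetical `repo-init` helper would run. The final step assumes the GitHub CLI, which can create a remote from a local project with `gh repo create --source=. --push`; the `<name>` placeholder and the function name are illustrative.

```shell
# Decision logic for a hypothetical `repo-init` helper: print the plan of
# commands appropriate to the current local/remote state.
plan_repo_init() {
    local under_vcs="${1}"      # "true" if the project is already a git repo
    local remote_exists="${2}"  # "true" if a remote with this name exists

    # Bail out early on a naming conflict with an existing remote repo.
    if [ "${remote_exists}" = "true" ]; then
        echo "abort: naming conflict with an existing remote repo"
        return 1
    fi

    # Put the project under version control first if it is not already.
    [ "${under_vcs}" = "true" ] || echo "git init"

    # Create the remote, wire it up as "origin", and push local history.
    echo "gh repo create <name> --source=. --push"
}
```

A real script would replace the two state arguments with live checks (e.g. `git rev-parse --is-inside-work-tree` and `gh repo view`), but the shape of the logic is the same.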

I context-switch all the time - across projects, across technology stacks, and across mental models. Because I’m not performing every task daily, I often find myself having a “wait, how do I do this again?” moment. Scripts capture the expertise for performing tasks so I don’t have to re-learn how to do them over and over again. They save me the time and effort of figuring things out, and they reduce my cognitive load. By encapsulating multi-step workflows under single commands, I create an abstraction layer so I only have to remember the name of the script for the task I need.

Beyond daily tasks, scripts also serve as living documentation. For example, I did not write a script for setting up this blog because I expect to create blogs left and right; I wrote it to codify a complex process. It is a reference for reusable patterns and, ultimately, a guide to my future self. Any time I revisit a script in the future, I know I am starting from a solution that works.

I have curated a small library of scripts over time and I now manage them holistically and systematically as part of my global setup. The rest of this post describes how I manage my personal shell scripts.

Script Location

All my personal scripts are centralized under the ~/_local/ folder.
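For scripts in that folder to be callable by name, the bin directory has to be on the PATH. A minimal sketch, assuming the layout described above (where exactly you put this profile entry is up to you):

```shell
# Make scripts in ~/_local/bin/ callable by name (typically in ~/.zshrc).
# ~/_local/bin/ holds executable application scripts; ~/_local/lib/ holds
# shared libraries that those scripts source, so only bin/ goes on the PATH.
export PATH="${HOME}/_local/bin:${PATH}"
```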

Application Script Template

Application scripts in my ~/_local/bin/ folder generally follow a common template. Using a consistent structure reduces my cognitive load when opening scripts months later, when creating new ones, or when scavenging for code patterns across other scripts in my library.

The template manages environment initialization, implements default argument parsing, and provides markers for app-specific implementation (i.e., all TODO entries), so working on a new app feels more like a “fill-in-the-blank” form.

Below is a snapshot of my application script template ~/_local/bin/script-template.zsh.

#!/usr/bin/env zsh
# -----------------------------------------------------------------------------
# SPDX-FileCopyrightText:   (c) 2025 Mauricio Lomelin <maulomelin@gmail.com>
# SPDX-License-Identifier:  MIT
# SPDX-FileComment:         Application Script
# SPDX-FileComment: <text>
#   This is the base template for Zsh shell scripts.
#   Configure the script by addressing all "TODO" tasks.
#   TODO: Delete this "SPDX-FileComment" block and all "TODO" tasks before use.
# </text>
# -----------------------------------------------------------------------------

# Initialize the script environment (use portable `dirname` and `printf`).
source "$(dirname "${0}")/../lib/init.zsh" || {
    printf "\e[91mError: Failed to initialize script environment.\e[0m\n"
    exit 1
}

# Prevent execution if the script is being sourced.
if [[ ${ZSH_EVAL_CONTEXT} == *:file* ]]; then
    echo "\e[91mError: The script [${(%):-%x}] must be executed, not sourced.\e[0m"
    return 1    # Abort sourcing and return to the caller with error.
fi

# Initialize private registry.
typeset -gA _APP
_APP[BATCH_REGEX]="^(true|false)$"
_APP[DEFAULT_BATCH]=false
_APP[AFFIRMATIVE_REGEX]="^[yY]([eE][sS])?$"
_APP[DEFAULT_VERBOSITY]=3
log_set_verbosity "${_APP[DEFAULT_VERBOSITY]}"
# TODO: Define additional constants and settings here.

# Display help documentation and exit. Invoked as needed.
function usage() {
# ----------------- 79-character ruler to align usage() text ------------------
    cat << EOF
Usage:

    ${ZSH_ARGZERO:A:t} [-v=<level>] [-b] [-h]

Description:

    # TODO: Write a brief description of the script here.

Options:

    -h, --help
        Display this help message and exit.

    -b, --batch
        Force non-interactive mode to perform actions without confirmation.
        Defaults to [${_APP[DEFAULT_BATCH]}] if not present.

    -v=<level>, --verbosity=<level>
        Sets the display threshold for logging level.
        Defaults to [${_APP[DEFAULT_VERBOSITY]}] if not present or invalid.

        +-----------------------+---------------------+
        |                       |   Verbosity Level   |
        |  Log Message Display  +---------------------+
        |                       |  0   1   2   3   4  |
        +-----------+-----------+---------------------+
        |           | 0/Alert   |  Y   Y   Y   Y   Y  |
        |           | 1/Error   |  N   Y   Y   Y   Y  |
        | Log Level | 2/Warning |  N   N   Y   Y   Y  |
        |           | 3/Info    |  N   N   N   Y   Y  |
        |           | 4/Debug   |  N   N   N   N   Y  |
        +-----------+-----------+---------------------+

    # TODO: Document additional script parameters/flags here.
EOF
    exit 0
}

# Implement core logic. Invoked by main().
function run() {

    # Map function arguments to local variables.
    # TODO: Map additional function arguments to local variables here.
    local batch="${1}"

    # TODO: Implement script's core logic here.
}

# Parse and validate CLI arguments. This is the script's entry point.
function main() {

    # Parse all CLI arguments.
    # TODO: Declare local variables for additional parameters/flags here.
    local help batch verbosity
    local -a args=( "${@}" ) args_used=() args_ignored=()
    while (( $# )); do
        case "$1" in
            (-h|--help)          help=true           ; args_used+=(${1}) ;;
            (-b|--batch)         batch=true          ; args_used+=(${1}) ;;
            (-v=*|--verbosity=*) verbosity="${1#*=}" ; args_used+=(${1}) ;;
            # TODO: Parse additional parameters/flags here.
            (*)                                        args_ignored+=(${1}) ;;
        esac
        shift
    done

    # Display usage information if requested.
    if [[ "${help}" == true ]]; then usage; fi

    # Validate and set the verbosity mode.
    log_set_verbosity "${verbosity}"
    verbosity=$(log_get_verbosity)

    # Validate batch mode and set to default if invalid.
    if [[ -z ${batch} ]]; then
        batch=${_APP[DEFAULT_BATCH]}
    else
        if [[ ! ${batch} =~ ${_APP[BATCH_REGEX]} ]]; then
            log_warning "Invalid batch flag [${batch}]. Setting to default [${_APP[DEFAULT_BATCH]}]."
            batch=${_APP[DEFAULT_BATCH]}
        fi
    fi

    # TODO: Validate/initialize additional parameters/flags here.

    # Display all processed arguments.
    log_info_header "# TODO: Give the script a short, friendly name here."
    log_info "Default settings:"
    log_info "  Batch mode:  [${_APP[DEFAULT_BATCH]}]"
    log_info "  Verbosity:   [${_APP[DEFAULT_VERBOSITY]}]"
    # TODO: Include default settings for additional parameters/flags here.
    log_info "Arguments processed:"
    log_info "  Input:       [${args}]"
    log_info "  Used:        [${args_used}]"
    log_info "  Ignored:     [${args_ignored}]"
    log_info "Effective settings:"
    log_info "  Batch mode:  [${batch}]"
    log_info "  Verbosity:   [${verbosity}]"
    # TODO: Include values for additional parameters/flags here.

    # Prompt user for confirmation, unless in batch mode.
    if [[ "${batch}" == true ]]; then
        log_warning "Batch mode enabled. Proceeding with script."
    else
        read "response?Proceed? (y/N): "
        if [[ ! ${response} =~ ${_APP[AFFIRMATIVE_REGEX]} ]]; then
            log_info "Exiting script."
            exit 0
        fi
    fi

    # Check that all variables passed to run() exist.
    # TODO: Check additional variables passed to run() here.
    if [[ -z "${batch}" ]]; then
        log_error "Invalid internal state. Aborting script."
        exit 1
    fi

    # Execute the core logic.
    # TODO: Pass additional variables to run() here.
    run "${batch}"
}

# Invoke main() with all CLI arguments.
main "${@}"
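The parsing loop in main() can be exercised on its own. Here is a minimal standalone rendition of the same pattern, trimmed of the template's logging and validation; the option names mirror the template, the rest is illustrative:

```shell
# Minimal rendition of the template's parsing loop: classify each CLI
# argument, extracting values from `--opt=value` forms with `${arg#*=}`.
parse_args() {
    batch=false; verbosity=3; ignored=""
    for arg in "$@"; do
        case "${arg}" in
            (-b|--batch)         batch=true ;;
            (-v=*|--verbosity=*) verbosity="${arg#*=}" ;;
            (*)                  ignored="${ignored} ${arg}" ;;
        esac
    done
}

parse_args --batch -v=4 stray
echo "batch=${batch} verbosity=${verbosity} ignored=[${ignored# }]"
```

Unrecognized arguments are collected rather than rejected, matching the template's choice to log ignored arguments instead of failing on them.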

Initialization Script

All application scripts begin by sourcing a common initialization script that configures the environment and sources all libraries. It’s the line in the template that starts with source "$(dirname "${0}")/../lib/init.zsh".

The initialization script ensures the environment runs under zsh, configures error handling options, and validates the namespaced architecture by checking for function name collisions before sourcing all script libraries.

Below is a snapshot of my initialization script ~/_local/lib/init.zsh.

#!/usr/bin/env zsh
# -----------------------------------------------------------------------------
# SPDX-FileCopyrightText:   (c) 2025 Mauricio Lomelin <maulomelin@gmail.com>
# SPDX-License-Identifier:  MIT
# SPDX-FileComment:         Initialization Script
# -----------------------------------------------------------------------------

# Fail fast if not running under zsh.
#   - ${ZSH_NAME} is only set if the script is running under zsh.
#   - Test condition using single brackets for portability.
#   - Use "printf" for portability.
if [ -z "${ZSH_NAME}" ] ; then
    printf "\e[91m"
    printf "Error: This script requires zsh.\n"
    printf "\n"
    printf "  - To run it in a zsh shell, either:\n"
    printf -- "      - Invoke it with zsh:  \`$ zsh script.zsh\`\n"
    printf -- "      - Execute it directly: \`$ chmod +x script.zsh ; ./script.zsh\`\n"
    printf "\n"
    printf "  - To use a different shell, modify scripts accordingly.\n"
    printf "\n"
    printf "\e[0m"
    return 1
fi

# Prevent direct execution. This script is designed to be sourced.
if [[ ${ZSH_EVAL_CONTEXT} != *:file* ]]; then
    echo "\033[91mError: This script must be sourced, not executed.\033[0m"
    return 1
fi

# Enable strict error handling and debugging.
set -e  # Exit on errors.
set -u  # Exit on undefined variables.
set -o pipefail  # Fail if any command in a pipeline fails.
#set -x  # DEBUG: Enable xtrace command tracing for debugging.

# Source common libraries.
#   - Source libraries only if no function name collisions are found.
#   - Run checks inside an anonymous function to keep the global scope clean.
function () {

    # Common libraries.
    local lib_dirpath="${${(%):-%x}:A:h}"
    local -a libs=(
        "./lib_log.zsh"
        "./lib_sys.zsh"
        "./lib_err.zsh"
        # TODO: Add new libraries here.
    )

    # -----------------------------------------------------------------------------
    # Syntax:   _extract_function_names_from_file <file>
    # Args:     <file>      A file name.
    # Outputs:  An array of function names found inside <file> using regexes.
    # Returns:  Default exit status.
    # -----------------------------------------------------------------------------
    function _extract_function_names_from_file() {
        local file=${1:-}
        # Define RegEx patterns to extract function names (fnames) from files.
        local re_pre="^[[:space:]]*function[[:space:]]+"   # Left of fname.
        local re_fn="[a-zA-Z0-9_]+"                        # fname.
        local re_post="[[:space:]]*\(\)[[:space:]]+\{.*$"  # Right of fname.
        # Get an array of function names from the given file.
        local -a fnames=( ${(f)"$( grep -E "${re_pre}${re_fn}${re_post}" "${file}" | sed -E "s/${re_pre}// ; s/${re_post}//" )"} )
        echo "${fnames}"
    }

    # Create a function name registry.
    #   - Functional schema: fnmap[fn] => { (int)count, (str)sources }
    #   - Implement with parallel associative arrays managed with local functions.
    local -A fn_count   # The number of sources for a given fn.
    local -A fn_sources # The list of sources a given fn is found in.

    # -----------------------------------------------------------------------------
    # Syntax:   _fnmap_add_source_to_function_names <source> [<fname> ...]
    # Args:     <source>    A source string.
    #           <fname>     A list of function names.
    # Outputs:  None
    # Returns:  Default exit status.
    # Details:
    #   - Appends <source> to the list of sources of each function name in the
    #     fnmap registry. If no function name index is found, one is added.
    #   - Updates the count of <source>s added to a function name.
    # -----------------------------------------------------------------------------
    function _fnmap_add_source_to_function_names() {
        local source=${1}           # Map source argument to local variable.
        local -a fns=( "${@:2}" )   # Map list of function names to an array.
        local fn
        for fn in ${fns}; do
            fn_count[${fn}]=$(( ${fn_count[${fn}]:-0} + 1 ))
            fn_sources[${fn}]=${fn_sources[${fn}]:-}${fn_sources[${fn}]:+, }${source}
        done
    }

    # -----------------------------------------------------------------------------
    # Syntax:   _fnmap_validate_fnames
    # Args:     None.
    # Outputs:  An error message for every duplicate in the function name registry.
    # Returns:  0 on success (no duplicates found); 1 on error (duplicates found).
    # -----------------------------------------------------------------------------
    function _fnmap_validate_fnames() {
        local fn duplicates=false
        for fn in ${(k)fn_count}; do
            if (( fn_count[${fn}] > 1 )); then
                echo "\e[91mError: Function name duplicates detected:"
                echo "    Function:\t${fn}()"
                echo "    Sources:\t${fn_sources[${fn}]}"
                echo "==> Revise function names to avoid collisions."
                echo "\e[0m"
                duplicates=true
            fi
        done
        if [[ "${duplicates}" == true ]]; then
            return 1    # Exit function with an error.
        else
            return 0    # Exit function with status ok.
        fi
    }

    # -----------------------------------------------------------------------------
    # Syntax:   _fnmap_print
    # Args:     None.
    # Outputs:  Pretty-prints the function name registry to stderr, in JSON format:
    #           { "fnmap": { "<fname>": { "count": int, "sources": str } } }
    # Returns:  Default exit status.
    # -----------------------------------------------------------------------------
    function _fnmap_print() {
        local fn
        echo "{" >&2
        echo "  \"fnmap\": {" >&2
        for fn in ${(k)fn_count}; do
            echo "    \"${fn}\": {" >&2
            echo "      \"count\": ${fn_count[${fn}]}," >&2
            echo "      \"sources\": \"${fn_sources[${fn}]}\"" >&2
            echo "    }" >&2
        done
        echo "  }" >&2
        echo "}" >&2
    }

    # Process all libraries.
    local lib
    local -a fns
    for lib in ${libs[@]}; do
        fns=( $(_extract_function_names_from_file "${lib_dirpath}/${lib}") )
        _fnmap_add_source_to_function_names "${lib}" "${fns[@]}"
    done

    # Process the executable script.
    fns=( $(_extract_function_names_from_file "${ZSH_ARGZERO:A}") )
    _fnmap_add_source_to_function_names "${ZSH_ARGZERO:A:t}" "${fns[@]}"

    # Process the environment.
    fns=( ${(k)functions} )
    _fnmap_add_source_to_function_names "(environment)" "${fns[@]}"

    # DEBUG: Print function name registry.
    #_fnmap_print

    # Validate function names and exit function if collisions are detected.
    _fnmap_validate_fnames || return 1

    # Source common libraries if no collisions were detected.
    for lib in "${libs[@]}" ; do
        source "${lib_dirpath}/${lib}"
    done

} || return 1   # Catch and return any errors to the caller.
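The collision check at the heart of the initialization script can be distilled into a few lines. This is a simplified standalone sketch of the idea, not the template's actual helpers: extract `function name()`-style definitions from each file and report any name defined more than once.

```shell
# Simplified collision check: extract `function name()`-style definitions
# from the given files and print any name that appears in more than one
# definition. `grep -h` suppresses filenames; `uniq -d` keeps duplicates.
detect_collisions() {
    grep -hE '^[[:space:]]*function[[:space:]]+[a-zA-Z0-9_]+[[:space:]]*\(\)' "$@" |
        sed -E 's/^[[:space:]]*function[[:space:]]+//; s/[[:space:]]*\(\).*$//' |
        sort | uniq -d
}
```

The full version above additionally tracks which source each name came from and checks the running environment's functions, but the core mechanism is this grep/sed extraction.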

Style Guide

My scripts share a consistent look, feel, and flow because every script I write follows a set of rules that define a common design language: my personal style guide. It embraces defensive coding techniques and structural guidelines that make my code easier to write, read, and maintain.

The rules are straightforward and apply to both application and library scripts.

My scripts use a namespaced architecture. Since shell scripts share a global namespace, this is a defensive practice I leverage to reduce potential naming conflicts across scripts, to make library functions discoverable, and to reduce global namespace pollution by encapsulating all global variables.
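As an illustration of the naming side of this, a library like lib_log.zsh would expose only log_-prefixed functions and keep its state in a single prefixed global. This is a hypothetical, minimal sketch of that convention, not my actual library:

```shell
# Hypothetical excerpt following the namespace convention: every public
# function in the logging library carries the "log_" prefix, and module
# state lives in one prefixed global instead of loose variables.
typeset -g _LOG_VERBOSITY=3

function log_set_verbosity() {
    # Keep the current level if the argument is not a digit 0-4.
    [[ "${1:-}" == [0-4] ]] && _LOG_VERBOSITY="${1}"
    return 0
}

function log_get_verbosity() {
    echo "${_LOG_VERBOSITY}"
}

function log_info() {
    # Emit to stderr so log output never pollutes a script's stdout.
    (( _LOG_VERBOSITY >= 3 )) && echo "[INFO] ${*}" >&2
    return 0
}
```

With a fixed prefix, tab-completing `log_` surfaces every function the library offers, which is what makes the functions discoverable in practice.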

I did not create these rules in a vacuum. I learned by writing and maintaining many scripts, and by reviewing scripts written by others. This, however, is the first time I have written them out. I have never needed to because they all make sense to me, they feel natural, and they make my code easier to maintain. It was straightforward to look at my code and write down the style guide I have been implicitly following. It feels good to document it as a guide to my future self.

How would I recommend you develop your own? Review scripts you’ve written in the past, look at useful scripts you’ve run into, be critical of scripts you’re working on now, and seek resources online (some large companies publish their own style guides). You will quickly learn how to organize and document your code, how to implement best practices, and what guidelines make sense for you. Don’t be academic about it - be pragmatic. The goal is to have a set of rules and guidelines that help you write code that remains readable and maintainable months or years from now.