Mauricio Lomelin

Improving the Developer Experience with Shell Scripts

Published: 2026-02-17  •  Last Updated: 2026-04-01

Update (2026-04-01): Updated the application script template to make greater use of framework utility functions, revised the style guide for LLM-driven workflows, and made minor tweaks and edits.

I am a big fan of Developer Experience (DevEx), and I believe in workflow optimization as a way of protecting my focus. I’d rather spend my time and mental energy on high-level challenges than on repetitive minutiae, so I follow a simple rule: if a task repeats frequently enough, write a script.

For terminal-based tasks, shell scripting provides an environment for orchestrating command-line utilities with consistent, predictable logic. Shell scripts optimize your workflows by offloading the cognitive load of minor details.

I manage my personal shell scripts using a simple architecture:

  1. Central Script Location. All my personal scripts reside in ~/_local/bin/ and ~/_local/lib/. This follows my overall standards for managing my personal code and data.

  2. Standard Application Script Template. Every application script is based on a boilerplate template. This ensures that every new script starts off with a consistent structure, error handling, and logging - I don’t have to start new scripts from scratch!

  3. Common Script Framework. Every script sources a central framework initialization script to ensure a standardized execution environment. This bespoke framework sets up a unified environment - a consistent baseline for shell settings, common utility functions, and global variables. This simplifies the writing and maintenance of scripts.

  4. Pragmatic Style Guide. Every script follows a simple style guide for naming conventions, code formatting, and design patterns. This makes it easier to reuse code across scripts, and to maintain a growing library of scripts.

In this post, I cover these points in more detail.

Shell Scripts and Workflow Optimization

While shell scripts are designed to automate tasks and reduce the likelihood of human error, their true value lies in optimizing your workflow so you can reclaim mental bandwidth to focus on solving big-picture problems.

Consider, for example, initializing a new remote repo. While the task appears straightforward, the steps and effort to complete it depend on context: does your IDE have the extensions to do it via a GUI? Are you starting from scratch or from a local project? Is your local project already under version control or not? Is there a naming conflict with a remote repo? Do you remember the commands to use? A script can handle this logic and ensure the right commands are executed for each scenario, saving the time it would otherwise take to look up and run every command by hand.
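The branching such a script has to handle can be sketched roughly like this (a simplified illustration - the function name and messages are made up, not my actual script):

```shell
# Hypothetical sketch of the decision logic a repo-init script encodes.
# It inspects the target directory and reports which steps are needed.
plan_repo_init() {
    local dir="${1}"
    if [ -d "${dir}/.git" ]; then
        echo "existing repo: add the remote and push"
    elif [ -d "${dir}" ]; then
        echo "existing project: git init, commit, add the remote, push"
    else
        echo "from scratch: mkdir, git init, commit, add the remote, push"
    fi
}
```

A real script would run the commands instead of printing them, and deal with remote naming conflicts along the way - but the point is that the branching lives in the script, not in my head.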

I context-switch all the time - across projects, across technology stacks, and across mental models. Because I’m not performing every task daily, I often find myself having a “wait, how do I do this again?” moment. Scripts capture the expertise for performing tasks so I don’t have to re-learn how to do them over and over again. They save me the time and effort of figuring things out, and they reduce my cognitive load. By encapsulating multi-step workflows under single commands, I create an abstraction layer so I only have to remember the name of the script for the task I need.

Beyond daily tasks, scripts also serve as living documentation. For example, I did not write a script for setting up this blog because I expect to be creating blogs all over the place; I wrote it to codify a complex process. It is a reference for reusable patterns and, ultimately, a guide to my future self. Any time I revisit a script in the future, I know I am starting from a solution that works.

I have curated a small library of scripts over time and I now manage them holistically and systematically as part of my global setup. The rest of this post describes how I manage my personal shell scripts.

Script Location

All my personal scripts are centralized under the ~/_local/ folder.
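For the scripts in ~/_local/bin/ to be callable by name, that directory has to be on the PATH. A fragment like this in a shell startup file does the job (an assumed setup - adjust to your own shell config):

```shell
# Prepend the personal bin directory to PATH, but only once,
# so nested shells do not accumulate duplicate entries.
case ":${PATH}:" in
    *":${HOME}/_local/bin:"*) ;;  # Already on PATH: nothing to do.
    *) export PATH="${HOME}/_local/bin:${PATH}" ;;
esac
```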

Application Script Template

Application scripts in my ~/_local/bin/ folder generally follow a common template. Using a consistent structure reduces my cognitive load when opening scripts months later, when creating new ones, or when scavenging for code patterns across other scripts in my library.

The template manages environment initialization, implements default argument parsing, and provides markers for app-specific implementation (i.e., all TODO entries), so working on a new app feels more like a “fill-in-the-blank” form.

Below is a snapshot of my application script template ~/_local/bin/script-template.zsh.

#!/usr/bin/env zsh
# -----------------------------------------------------------------------------
# SPDX-FileCopyrightText:   (c) 2025 Mauricio Lomelin <maulomelin@gmail.com>
# SPDX-License-Identifier:  MIT
# SPDX-FileComment:         Application Script
# SPDX-FileComment: <text>
#   This is the base template for Zsh shell scripts.
#   Configure the script by addressing all "TODO" tasks.
#   TODO: Delete this "SPDX-FileComment" block and all "TODO" tasks before use.
# </text>
# -----------------------------------------------------------------------------

# Initialize script framework (use `dirname` and `printf` for portability).
source "$(dirname "${0}")/../lib/framework/init.zsh" || {
    printf "\e[91mError: Failed to initialize script framework.\e[0m\n"
    exit 1
}

# Prevent script sourcing.
if [[ ${ZSH_EVAL_CONTEXT} == *:file* ]]; then
    log::warning "The script [${(%):-%x}] must be executed, not sourced."
    return 1
fi

# Initialize private registry.
typeset -gA _APP=(
    [DEFAULT_VERBOSITY]=${REG[DEFAULT_VERBOSITY]}
    [DEFAULT_BATCH]=${REG[DEFAULT_BATCH]}
    [DEFAULT_HELP]=${REG[DEFAULT_HELP]}
    # TODO: Define additional app-specific constants and settings here.
)

# Display help documentation and exit. Invoked as needed.
function usage() {
# ----------------- 79-character ruler to align usage() text ------------------
    cat << EOF
Usage:

    # TODO: Add additional parameters/flags to the command interface below.
    ${ZSH_ARGZERO:A:t} [-v=<level>] [-b] [-h]

Description:

    # TODO: Write a brief description of the script here.

Options:

    # TODO: Document additional script parameters/flags here.

    -v=<level>, --verbosity=<level>
        Sets the display threshold for logging level.
        Defaults to [${_APP[DEFAULT_VERBOSITY]}] if not present or invalid.

            Log Message    |   Verbosity Level
              Display      |  0   1   2   3   4
        -------------------+--------------------
                0/Alert    |  Y   Y   Y   Y   Y
          Log   1/Error    |  N   Y   Y   Y   Y
         Level  2/Warning  |  N   N   Y   Y   Y
                3/Info     |  N   N   N   Y   Y
                4/Debug    |  N   N   N   N   Y

    -b, --batch
        Force non-interactive mode to perform actions without confirmation.

    -h, --help
        Display this help message and exit.
EOF
    exit 0
}

# Implement core logic. Invoked by main().
function run() {

    # Map function arguments to local variables.
    # TODO: Map additional function arguments to local variables here.
    local batch="${1}"

    # TODO: Implement script's core logic here.
    log::info_header "log::info_header()"
    log::info "log::info()"
    log::debug "log::debug()"
    log::warning "log::warning()"
    log::error "log::error()"
    log::alert "log::alert()"
}

# Parse and validate CLI arguments. This is the script's entry point.
function main() {

    # Parse all CLI arguments.
    # TODO: Declare local variables for additional parameters/flags here.
    local help batch verbosity
    local -a args=( "${@}" ) args_used=() args_ignored=()
    while (( $# )); do
        case "$1" in
            (-h|--help)          help=true           ; args_used+=(${1}) ;;
            (-b|--batch)         batch=true          ; args_used+=(${1}) ;;
            (-v=*|--verbosity=*) verbosity="${1#*=}" ; args_used+=(${1}) ;;
            # TODO: Parse additional parameters/flags here.
            (*)                                        args_ignored+=(${1}) ;;
        esac
        shift
    done

    # Set verbosity level.
    log::set_verbosity "${_APP[DEFAULT_VERBOSITY]}" # Set level to app default.
    log::set_verbosity "${verbosity}"               # Try to set to user input.
    verbosity=$(log::get_verbosity)                 # Get actual level.

    # Log script identifier to mark the start of all logging.
    log::info_header "# TODO: Give the script a short, friendly name here."

    # Handle help requests before validating other inputs.
    help=$(dat::validate_bool "help flag" "${help}" "${_APP[DEFAULT_HELP]}") || return 1
    if dat::is_true "${help}"; then usage; fi

    # Validate all other inputs.
    batch=$(dat::validate_bool "batch flag" "${batch}" "${_APP[DEFAULT_BATCH]}") || return 1
    # TODO: Validate/initialize additional parameters/flags here.

    # TODO: Perform input checks that would result in a sys::abort() here.

    # Display all processed arguments.
    log::info "Arguments processed:"
    log::info "  Input:        [${args}]"
    log::info "  Used:         [${args_used}]"
    log::info "  Ignored:      [${args_ignored}]"
    log::info "Default settings:"
    # TODO: Include default settings for additional parameters/flags here.
    log::info "  Verbosity:    [${_APP[DEFAULT_VERBOSITY]}]"
    log::info "  Batch:        [${_APP[DEFAULT_BATCH]}]"
    log::info "  Help:         [${_APP[DEFAULT_HELP]}]"
    log::info "Effective settings:"
    # TODO: Include values for additional parameters/flags here.
    log::info "  Verbosity:    [${verbosity}]"
    log::info "  Batch:        [${batch}]"
    log::info "  Help:         [${help}]"

    # TODO: Perform input checks that would result in a log::warning() here.

    # Prompt user for confirmation, unless in batch mode.
    if dat::is_true "${batch}"; then
        log::warning "Batch mode enabled. Proceeding with script."
    else
        read "response?Proceed? (y/N): "
        if ! dat::is_yes "${response}"; then
            sys::terminate "User declined to proceed."
        fi
    fi

    # Check that all variables are populated before executing core logic.
    # TODO: Add all run() arguments to the array below.
    local -a args=( "${batch}" )
    if [[ "${#args}" != "${#args:#}" ]]; then
        sys::abort "Invalid state: Check args."
    fi

    # Execute core logic.
    run "${args[@]}"
}

# Invoke main() with all CLI arguments.
main "${@}"

Framework Initialization Script

All application scripts begin by sourcing a framework initialization script that configures the environment and sources all framework libraries. It’s the line in the template that starts with source "$(dirname "${0}")/../lib/framework/init.zsh".

The initialization script ensures the environment runs under zsh, configures error handling options, and validates the namespaced architecture by checking for function name collisions before sourcing all framework script libraries.

Below is a snapshot of my initialization script ~/_local/lib/framework/init.zsh.

#!/usr/bin/env zsh
# -----------------------------------------------------------------------------
# SPDX-FileCopyrightText:   (c) 2025 Mauricio Lomelin <maulomelin@gmail.com>
# SPDX-License-Identifier:  MIT
# SPDX-FileComment:         Framework Initialization Script
# -----------------------------------------------------------------------------

# Fail fast if not running under zsh.
#   - ${ZSH_NAME} is only set if the script is running under zsh.
#   - Use single brackets and `printf` for portability.
if [ -z "${ZSH_NAME}" ] ; then
    printf "\e[91m"
    printf "Error: This script requires zsh.\n"
    printf "\n"
    printf "  - To run it in a zsh shell, either:\n"
    printf -- "      - Invoke it with zsh:  \`$ zsh script.zsh\`\n"
    printf -- "      - Execute it directly: \`$ chmod +x script.zsh ; ./script.zsh\`\n"
    printf "\n"
    printf "  - To use a different shell, modify scripts accordingly.\n"
    printf "\n"
    printf "\e[0m"
    return 1
fi

# Prevent direct execution.
if [[ ${ZSH_EVAL_CONTEXT} != *:file* ]]; then
    echo "\033[91mError: This script must be sourced, not executed.\033[0m"
    return 1
fi

# Enable strict error handling and debugging.
#   - Use full option names ("The Z Shell Manual" v5.9, § 16, pg. 111).
setopt ERR_EXIT        # Exit on errors.
setopt NO_UNSET        # Exit on undefined variables.
setopt PIPE_FAIL       # Fail if any command in a pipeline fails.
setopt TYPESET_SILENT  # Silence variable re-declarations.
setopt NO_CASE_MATCH   # Enable case-insensitive pattern matching.
setopt EXTENDED_GLOB   # Enable extended globbing features.
#setopt XTRACE         # DEBUG: Trace command execution and expansion.

# Source framework libraries.
#   - Source framework libraries only if no function name collisions are found.
#   - Run checks inside an anonymous function to keep the global scope clean.
function () {

    # Framework libraries.
    local lib_dirpath="${${(%):-%x}:A:h}/lib"
    local -a libs=(
        "reg--global-registry.zsh"
        "log--logging.zsh"
        "sys--system.zsh"
        "dat--data-types.zsh"
        "env--environment-info.zsh"
        "err--error-handling.zsh"
        "ded--graveyard.zsh"
        "cfg--config-mgmt.zsh"
        # TODO: Add new libraries here.
    )

    # -----------------------------------------------------------------------------
    # Syntax:   _extract_function_names_from_file <file>
    # Args:     <file>      A file name.
    # Outputs:  An array of function names found inside <file> using regexes.
    # Status:   Default status.
    # Notes:    This function extracts function names to check for name collisions.
    #           Locally scoped functions are likely intended to shadow an original.
    #           If we assume indented function definitions to be locally scoped and
    #           non-indented to be original, indented definitions escape detection.
    # -----------------------------------------------------------------------------
    function _extract_function_names_from_file() {
        local file=${1:-}
        # Define RegEx patterns to extract function names (fnames) from files.
        local re_pre="^function[[:space:]]+"                # Left of fname.
        local re_fn="[a-zA-Z0-9_:]+"                        # fname.
        local re_post="[[:space:]]*\(\)[[:space:]]+\{.*$"   # Right of fname.
        # Get an array of function names from the given file.
        local -a fnames=( ${(f)"$( grep -E "${re_pre}${re_fn}${re_post}" "${file}" | sed -E "s/${re_pre}// ; s/${re_post}//" )"} )
        echo "${fnames}"
    }

    # Create a function name registry.
    #   - Functional schema: fnmap[fn] => { (int)count, (str)sources }
    #   - Implement as parallel associative arrays managed by local functions.
    local -A fn_count   # The number of sources for a given fn.
    local -A fn_sources # The list of sources a given fn is found in.

    # -----------------------------------------------------------------------------
    # Syntax:   _fnmap_add_source_to_function_names <source> [<fname> ...]
    # Args:     <source>    A source string.
    #           <fname>     A list of function names.
    # Outputs:  None
    # Status:   Default status.
    # Details:
    #   - Appends <source> to the list of sources of each function name in the
    #     fnmap registry. If no function name index is found, one is added.
    #   - Updates the count of <source>s added to a function name.
    # -----------------------------------------------------------------------------
    function _fnmap_add_source_to_function_names() {
        local source=${1}           # Map source argument to local variable.
        local -a fns=( "${@:2}" )   # Map list of function names to an array.
        local fn
        for fn in ${fns}; do
            fn_count[${fn}]=$(( ${fn_count[${fn}]:-0} + 1 ))
            fn_sources[${fn}]=${fn_sources[${fn}]:-}${fn_sources[${fn}]:+, }${source}
        done
    }

    # -----------------------------------------------------------------------------
    # Syntax:   _fnmap_validate_fnames
    # Args:     None.
    # Outputs:  An error message for every duplicate in the function name registry.
    # Status:   returns 0 (success) if no duplicates found.
    #           returns 1 (error) if duplicates found.
    # -----------------------------------------------------------------------------
    function _fnmap_validate_fnames() {
        local duplicates=false
        local fn
        for fn in ${(k)fn_count}; do
            if (( fn_count[${fn}] > 1 )); then
                echo "\e[91mError: Function name duplicates detected:"
                echo "    Function:\t${fn}()"
                echo "    Sources:\t${fn_sources[${fn}]}"
                echo "==> Revise function names to avoid collisions."
                echo "\e[0m"
                duplicates=true
            fi
        done
        if [[ "${duplicates}" == true ]]; then
            return 1    # Exit function with an error.
        else
            return 0    # Exit function with status ok.
        fi
    }

    # -----------------------------------------------------------------------------
    # Syntax:   _fnmap_print
    # Args:     None.
    # Outputs:  Pretty-prints the function name registry to stdout, in JSON format:
    #           { "fnmap": { "<fname>": { "count": int, "sources": str } } }
    # Status:   Default status.
    # -----------------------------------------------------------------------------
    function _fnmap_print() {
        echo "{"
        echo "  \"fnmap\": {"
        local fn
        for fn in ${(k)fn_count}; do
            echo "    \"${fn}\": {"
            echo "      \"count\": ${fn_count[${fn}]},"
            echo "      \"sources\": \"${fn_sources[${fn}]}\""
            echo "    }"
        done
        echo "  }"
        echo "}"
    }

    # Process all libraries.
    local -a fns
    local lib
    for lib in ${libs[@]}; do
        fns=( $(_extract_function_names_from_file "${lib_dirpath}/${lib}") )
        _fnmap_add_source_to_function_names "${lib}" "${fns[@]}"
    done

    # Process the executable script.
    fns=( $(_extract_function_names_from_file "${ZSH_ARGZERO:A}") )
    _fnmap_add_source_to_function_names "${ZSH_ARGZERO:A:t}" "${fns[@]}"

    # Process the environment.
    fns=( ${(k)functions} )
    _fnmap_add_source_to_function_names "(environment)" "${fns[@]}"

    # DEBUG: Print function name registry.
    #_fnmap_print

    # Validate function names and exit function if collisions are detected.
    _fnmap_validate_fnames || return 1

    # Source common libraries if no collisions were detected.
    local lib
    for lib in "${libs[@]}" ; do
        source "${lib_dirpath}/${lib}"
    done

} || return 1   # Catch and return any errors to the caller.

Style Guide

My scripts share a consistent look, feel, and flow because every script I write follows a set of rules that define a common design language. This is my personal style guide. It embraces defensive coding techniques and structural guidelines that make my code easier to write, read, and maintain.

The rules are straightforward and apply to both application and library scripts.

My scripts use a namespaced architecture. Since shell scripts share a global namespace, this is a defensive practice that reduces potential naming conflicts across scripts, makes library functions discoverable, and limits global namespace pollution by encapsulating all global variables.
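As a minimal illustration of the convention (the str:: namespace below is a made-up example, not one of my actual libraries), every library function carries a namespace prefix, so the call site makes clear which library a function comes from and collisions across libraries become unlikely:

```shell
# Hypothetical str:: library: all functions share the "str::" prefix.
# Shared global state would likewise live in one registry (like REG in
# my template) instead of in loose global variables.
str::upper() { printf '%s\n' "${1}" | tr '[:lower:]' '[:upper:]'; }

str::repeat() {  # Repeat string $1 exactly $2 times.
    local i=0 out=""
    while [ "${i}" -lt "${2}" ]; do
        out="${out}${1}"
        i=$((i + 1))
    done
    printf '%s\n' "${out}"
}
```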

When writing the actual logic of a script, there are a few rules and common patterns that make them easier to maintain.
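One example of the kind of pattern I mean - validate inputs up front and fail fast with a labeled message (a hypothetical guard for illustration, not one of my dat:: helpers):

```shell
# Hypothetical guard: verify a required file exists before doing any work.
require_file() {
    local label="${1}" path="${2}"
    if [ ! -f "${path}" ]; then
        printf 'Error: %s not found: [%s]\n' "${label}" "${path}" >&2
        return 1
    fi
}
```

Calling it early (e.g., `require_file "config" "${cfg}" || exit 1`) keeps the core logic free of scattered existence checks.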


Closing Remarks

I did not create these rules in a vacuum. I learned them by writing and maintaining many scripts, and by reviewing scripts written by others. This, however, is the first time I have written them down. I never needed to before: the rules make sense to me, they feel natural, and they keep my code easy to maintain. It was straightforward to look at my code and extract the style guide I had been implicitly following, and it felt good to document it as a guide to my future self.

A well-documented style guide is also essential for both team environments and LLM-driven workflows. It acts as a “system prompt” for your project: by defining the standards and constraints needed to generate code that fits the architecture, you ensure that all new code adheres to your project’s design language. The net gain is that new code aligns with your existing codebase, reducing the cost of refactoring and integration.

How would I recommend you develop your own? Review scripts you’ve written in the past, study useful scripts you’ve run into, be critical of scripts you’re working on now, and seek out resources online (some large companies publish their own style guides). You will quickly learn how to organize and document your code, how to implement best practices, and which guidelines make sense for you. Don’t be academic about it - be pragmatic. The goal is to have a set of rules and guidelines that help you, your team, or an LLM write code that remains readable and maintainable months or years from now.