o11c

joined 2 years ago
[–] [email protected] 1 points 2 years ago

The default handling is pretty important.

What I find more interesting are 1. the two-argument form of iter, and 2. the __getitem__ auto-implementation that causes there to be two incompatible definitions of Iterable.
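Both are easy to demonstrate (the class and values here are just for illustration):

import collections.abc
import io

# 1. Two-argument iter(callable, sentinel): the callable is invoked
#    repeatedly until it returns the sentinel. The classic read-until-EOF loop:
f = io.BytesIO(b"abcdefghij")
for chunk in iter(lambda: f.read(4), b""):
    print(chunk)  # b'abcd', b'efgh', b'ij'

# 2. The legacy __getitem__ protocol: no __iter__ at all, yet for/iter()
#    accept it, calling __getitem__ with 0, 1, 2, ... until IndexError.
class Squares:
    def __getitem__(self, i):
        if i >= 5:
            raise IndexError
        return i * i

print(list(Squares()))  # [0, 1, 4, 9, 16]: iterable in practice
print(isinstance(Squares(), collections.abc.Iterable))  # False: not per the ABC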

(btw your comments are using accidental formatting; use backticks: __next__)

[–] [email protected] 3 points 2 years ago (2 children)

The problem with pathlib is that it normalizes away critical information, so it can't be used in many situations.

./path, path, and path/. are three different things, but pathlib collapses them all into one.

Also, the article is wrong that "Path('some\\path') becomes some/path on Linux/Mac": on POSIX systems the backslash is an ordinary filename character, so no such conversion happens.
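Both points are quick to check (the paths are illustrative):

from pathlib import PurePosixPath

# The normalization throws information away:
print(PurePosixPath("./path"))  # path  (the leading ./ is gone)
print(PurePosixPath("path/."))  # path  (the trailing /. too)

# Backslashes are NOT translated on POSIX; they're ordinary filename characters:
print(PurePosixPath("some\\path").parts)  # ('some\\path',): one component, not two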

[–] [email protected] 0 points 2 years ago (1 children)

If by "down" you mean completely down, it's probably just you; I don't think I've ever seen a failure that didn't go away after refresh.

Flakiness for specific features (often subscribe/block, and once notifications) has been more common.

[–] [email protected] 3 points 2 years ago

Two of the most expensive things a shell does are calling fork and calling execve for an external program. pwd is a builtin (at least in bash), so the execve doesn't apply, but the fork for the command substitution still does. $PWD exists even if you don't want that shortening; just like your backticks, be sure to single-quote it so it doesn't get expanded when assigning to PS1.

In general, for most things you might want to do, you can arrange for variables to be set ahead of time and simply expanded at use time, rather than recalculated every time. For example, you can hook cd/pushd/popd to get an actually-fast git prompt. Rather than var=$(some_function), have some_function write directly to a variable (possibly hard-coded; REPLY is semi-common, and you can move the value afterward); printf -v is often useful for this. Indirection should almost always be avoided due to shadowing problems (you have to hard-code variable-name assumptions anyway, so you might as well be explicit), unless you use the bash-specific indirect-unset hack or don't have any locals.

[–] [email protected] 3 points 2 years ago* (last edited 2 years ago) (2 children)

That doesn't seem sensible. Moving the cursor will confuse bash and you can get the same effect by just omitting the last \n.

Note that bash 5.0, but not earlier or later versions, is buggy with multiline prompts even if they're correct.

Your colors should use 39 (default foreground) or 49 (default background) for reset, rather than 0, which clobbers every attribute.

Avoid doing external commands in subshells when there's a perfectly good prompt-expansion string that works.

You seem to be generating several unnecessary blank lines, though I haven't analyzed them in depth; remember that doing them conditionally is an option, like I do:

# PS1 built up incrementally before this, including things like setting the TTY title for appropriate terminals
PS0='vvv \D{%F %T%z}\n'
PS1='^^^ \D{%F %T%z}\n'"$PS1"
prompt-command-exit-nonzero()
{
    # TODO map signal names and <sysexits.h> and 126/127 errors?
    # (128 also appears in some weird job-control cases; there are also
    # numerous cases where $? is not in $PIPESTATUS)
    # This has to come first since $? will be invalidated.
    # It's also one of the few cases where `*` is non-buggy for an array.
    local e=$? pipestatus="${PIPESTATUS[*]}"
    # Fixup newline. Note that interactive shells specifically use stderr
    # for the prompt, not stdin, stdout, or /dev/tty
    printf '\e[93;41m%%\e[39;49m%'"$((COLUMNS-1))"'s\r' >&2
    # if e or any pipestatus is nonzero
    if [[ -n "${e/0}${pipestatus//[ 0]}" ]]
    then
        if [[ "$pipestatus" != "$e" ]]
        then
            local pipestatus_no_SIGPIPE="${pipestatus//141 /}"
            local color=41
            if [[ -z "${pipestatus_no_SIGPIPE//[ 0]}" ]]
            then
                color=43
            fi
            printf '\e[%smexit_status: %s (%s)\e[49m\n' "$color" "$e" "${pipestatus// / | }" >&2
        else
            printf '\e[41mexit_status: %s\e[49m\n' "$e" >&2
        fi
    fi
}
PROMPT_COMMAND='prompt-command-exit-nonzero'

[–] [email protected] 3 points 2 years ago* (last edited 2 years ago)

I've done something similar. In my case it was a startup script that did something like the following (there's a rough code sketch after the list):

  • poll github using the search API for PR labels (note that this has sometimes stopped returning correct results, but ...).
    • always do this once at startup
    • you might do this based on notifications; I didn't bother since I didn't need rapid responsiveness. Note that you should not trust the specific data from a notification though; it's only a way to wake up the script.
    • but no matter what, you should do this after N minutes, since notifications can be lost.
  • perform a git fetch for your main development branch (the one you perform the real merges to) and all pull/ refs (git does not do this by default; you'll have to set them up for your local test repo. Note that you want to refer to the unmerged commits for these)
  • if the set of commits for all tagged PRs has not changed, wait and poll again
  • reset the test repo to the most recent commit from your main development branch
  • iterate over all PRs with the appropriate label:
    • ordering notes:
      • if there are commits that have previously tested successfully, you might do them first. But still test again since the merge order could be different. This of course depends on the level of tests you're doing.
      • if you have PRs that depend on other PRs, do them in an appropriate order (perhaps the following will suffice, or maybe you'll have some way of detecting this). As a rule we soft-forbid this though; such PRs should have been merged early.
      • finally, ordering by PR number is probably better than ordering by last commit date
    • attempt the merge (or rebase). If a nop, log that somewhere. If not clean, skip the PR for now (and log that), but only mark this as an error if it was the first PR you've merged (since if there's a conflict it could be a prior PR's fault).
    • Run pre-build stuff that might need to create further commits, build the product, and run some quick tests. If they fail, rollback the repo to the previous merge and complain.
    • Mark the commit as apparently good. Note that this specifically applies to commits, not PRs or branch names; I admit I've been sloppy above.
  • perform a pre-build, build and quick test again (since we may have rolled back and have a dirty build - in fact, we might not have ended up merging anything!)
  • if you have expensive tests, run them only here (and treat this as "unexpected early exit" below). It's presumed that separate parts of your codebase aren't too crazily entangled, so if a particular test fails it should be "obvious" which PR is relevant. Keep in mind that I used this system for assumed viable-work-in-progress PRs.
  • kill any existing instance and launch a new instance of the product using the build from the final merged commit and begin accepting real traffic from devs and beta users.
  • users connecting to the instance should see the log
  • if the launched instance exits unexpectedly within M minutes AND we actually ended up merging anything into the known-good branch, then reset to the main development branch (and build etc.) so that people at least have a functioning test server, but complain loudly in the MOTD when they connect to it. The condition here means that if it exits suddenly again, the whole script goes back to the top and starts over, which may be necessary if someone intentionally killed the server to force a new merge sequence but did so too soon.
    • alternatively you could try bisecting the set of PR commits or something, but I never bothered. Note that you probably can't use git bisect for this, since you explicitly do not want to try a commit from the middle of a PR. It might be simpler to whitelist or blacklist one commit at a time, but if you're failing here, remember that all tests are unreliable.
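To make that concrete, here's a heavily simplified Python sketch of the outer loop. The repo name, label, refspec, and the build/test helper are hypothetical stubs, not what I actually ran:

import json
import subprocess
import time
import urllib.request

REPO = "owner/repo"      # hypothetical
LABEL = "test-me"        # hypothetical
POLL_SECONDS = 15 * 60   # re-poll unconditionally, since notifications can be lost

def run(*args, check=True):
    return subprocess.run(args, check=check)

def rev_parse(ref):
    out = subprocess.run(["git", "rev-parse", ref],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def tagged_prs():
    # The search API; remember it has occasionally returned stale results.
    url = ("https://api.github.com/search/issues?q=repo:%s+is:pr+is:open+label:%s"
           % (REPO, LABEL))
    with urllib.request.urlopen(url) as resp:
        items = json.load(resp)["items"]
    return sorted(item["number"] for item in items)  # order by PR number

def build_and_quick_test():
    return True  # stub: run pre-build commit steps, build, quick tests

def merge_all(numbers):
    merged_any = False
    for n in numbers:
        before = rev_parse("HEAD")
        sha = rev_parse("refs/pull/%d/head" % n)
        if subprocess.run(["git", "merge", "--no-edit", sha]).returncode != 0:
            run("git", "merge", "--abort", check=False)  # conflict: skip it, log it
            continue
        if rev_parse("HEAD") == before:
            continue  # nop merge: log it somewhere
        if not build_and_quick_test():
            run("git", "reset", "--hard", before)  # roll this PR back, complain
            continue
        merged_any = True  # the merged commit is apparently good
    return merged_any

last_state = None
while True:
    # requires a fetch refspec like +refs/pull/*/head:refs/pull/*/head
    run("git", "fetch", "origin")
    state = [(n, rev_parse("refs/pull/%d/head" % n)) for n in tagged_prs()]
    if state != last_state:
        last_state = state
        run("git", "reset", "--hard", "origin/main")
        merge_all([n for n, _ in state])
        # final pre-build/build/quick-test and the server relaunch would go here
    time.sleep(POLL_SECONDS)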

[–] [email protected] 3 points 2 years ago

Trailing return types are strictly superior, since they allow the return type to be written in terms of the arguments (via decltype, now that the parameters are in scope), if necessary.

Deduction can sometimes take its place but often makes the code less obvious.

A minor side-effect is that it makes all function names line up, without introducing line breaks (which break grep). If only we could do the same for variables ...

[–] [email protected] 1 points 2 years ago

Honestly you probably should think about how to translate them. Python at least rolls its own .mo parser so it can support multiple languages in a single process; it's much more difficult in C unless you push it to the clients (which requires pushing the parameterization as well).
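For reference, the Python version is just this (domain and directory names made up; fallback=True so it runs even without the .mo files installed):

import gettext

# gettext.translation() parses the .mo catalog in pure Python, so several
# languages can be live in one process; no process-global setlocale() needed.
de = gettext.translation("myapp", localedir="locale", languages=["de"], fallback=True)
fr = gettext.translation("myapp", localedir="locale", languages=["fr"], fallback=True)

print(de.gettext("Hello"))  # from the German catalog
print(fr.gettext("Hello"))  # from the French catalog, same process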

Non-.pot-based internationalization formats are almost always braindead and should be avoided.

[–] [email protected] 1 points 2 years ago

Note that by messing with a particular module's __path__ you can turn it into a "package" that loads from arbitrary directories.
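A minimal sketch (the directories are made up):

import sys
import types

# Any module object with a __path__ is treated as a package by the import
# system; submodule lookups search the listed directories.
plugins = types.ModuleType("plugins")
plugins.__path__ = ["/opt/app/plugins", "/usr/share/app/plugins"]
sys.modules["plugins"] = plugins

# import plugins.foo   # would now look for foo.py in those directories

The same works on an already-imported module: assign a __path__ and it becomes a package.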

[–] [email protected] -1 points 2 years ago* (last edited 2 years ago)

No. Duck types (including virtual subclasses) considered harmful; use real inheritance if your language doesn't provide anything strictly better.

It is incomparably convenient to be able to retroactively add "default implementations" to interface functions (consider for example how broken readinto is in Python). Some statically-typed languages let you do that without inheritance, but no dynamically-typed language can.
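A sketch of what that buys you (the class names are invented):

import abc

class Reader(abc.ABC):
    @abc.abstractmethod
    def readinto(self, buf) -> int:
        """Fill buf; return the number of bytes written."""

    # Added to the interface after the fact: every real subclass, even one
    # written before this method existed, gains read() retroactively.
    def read(self, n: int) -> bytes:
        buf = bytearray(n)
        return bytes(buf[: self.readinto(buf)])

class Zeros(Reader):
    def readinto(self, buf) -> int:
        for i in range(len(buf)):
            buf[i] = 0
        return len(buf)

print(Zeros().read(4))  # b'\x00\x00\x00\x00', inherited for free

# A duck type (or a virtual subclass via Reader.register) that merely
# defines readinto() never picks up the read() default.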

This reads more as a rant against inheritance (without any explanation whatsoever) than a legitimate argument.

[–] [email protected] 1 points 2 years ago

It's kind of sad that "200 ms" counts as "good" these days ...

[–] [email protected] 0 points 2 years ago

The problem with IDEs is that they are less than the sum of their parts.

There are no old Emacs users on the internet because their fingers are no longer capable of typing. Unfortunately, most other text editors use keybindings reminiscent of Emacs.

Neovim doesn't offer much value over classic vim, and is much more prone to being outdated on some random system you might use.

Don't put too much in your .vimrc, since then you'll have trouble when you move to another computer. I can survive with just 3 lines in my .vimrc (easy to memorize), with a 4th likely:

" because I hate default <Esc> location and behavior
set virtualedit=onemore
nnoremap <CR> i<CR>
inoremap <CR> <C-O>:stopinsert<CR>
" If you press it fast enough, don't break undo.
inoremap <CR><CR> <CR>

That's not all I end up with if I use vim extensively on a single system, but the other stuff can be added as I get bothered (line numbering, tab behavior, TTY title, backups, undos, modeline, mouse, search, wrapping)

(admittedly I don't bother with LSP integration; if you rely on that you probably need more)
