Nono.MA

JUNE 30, 2020

You can measure the time elapsed during the execution of TypeScript code by keeping a reference to the start time and then, at any point in your program, subtracting that start time from the current time to obtain the elapsed duration.

const start = new Date().getTime();

// Run some code..

let elapsed = new Date().getTime() - start;
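As a side note, Date.now() is a built-in shorthand that returns the same millisecond timestamp without allocating a Date object:

```typescript
// Date.now() returns the current time in milliseconds,
// equivalent to new Date().getTime().
const start = Date.now();

// Run some code..

const elapsed = Date.now() - start;
```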

Let's create two helper functions to get the current time (i.e. now) and the elapsed time at any point from that moment.

// Returns current time
// (and, if provided, prints the event's name)
const now = (eventName = null) => {
    if (eventName) {
        console.log(`Started ${eventName}..`);
    }
    return new Date().getTime();
}

// Store current time as `start`
let start = now();

// Returns time elapsed since `beginning`
// (and, optionally, prints the duration in seconds)
const elapsed = (beginning = start, log = false) => {
    const duration = new Date().getTime() - beginning;
    if (log) {
        console.log(`${duration/1000}s`);
    }
    return duration;
}

With those utility functions defined, we can measure the duration of different events.

// A promise that takes X ms to resolve
function sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
}

// Measure duration (while waiting for 2 seconds)
(async function demo() {
    const waitInSeconds = 2;
    let beginning = now(`${waitInSeconds}-second wait`);
    // Prints Started 2-second wait..
    await sleep(waitInSeconds * 1000);
    elapsed(beginning, true);
    // Prints 2.004s
})();
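You can fold this pattern into a reusable wrapper that times any async function (a sketch; measure is a name I'm making up, not part of any library):

```typescript
// Runs an async function and returns its result along with
// the elapsed time in milliseconds.
async function measure<T>(fn: () => Promise<T>): Promise<{ result: T; ms: number }> {
    const start = new Date().getTime();
    const result = await fn();
    return { result, ms: new Date().getTime() - start };
}
```

With the sleep helper above, const { ms } = await measure(() => sleep(2000)) would yield an ms of roughly 2000.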

Before you go

If you found this useful, you might want to join my mailing lists; or take a look at other posts about code, React, and TypeScript.

JUNE 8, 2020

Just came across this machine learning (and TensorFlow) glossary which "defines general machine learning terms, plus terms specific to TensorFlow."

JUNE 2, 2020

In trying to use Artisan::call($command, $arguments) to execute a command exposed by my Laravel package—Folio—I was running into this issue.

The command "folio:clone" does not exist.

My commands were working on the terminal—by calling php artisan folio:clone, for instance—but they were not working programmatically, with a call like this.

Artisan::call('folio:clone 123 "New Title"');

Artisan::command was not a solution as it serves to register commands and not to execute them.

By looking into the FolioServiceProvider.php (the service provider of my own package) I noticed the $this->app->runningInConsole() check. My commands were being registered in the console but were not exposed elsewhere (that is, in the application itself).

I'd guess this is a security and performance measure. Commands that don't need to be available to the Laravel app are not registered for it.

Solution

The solution was simply registering the commands I want to be callable from my Laravel sites outside of the if statement that checks for $this->app->runningInConsole().

While eight commands are only available to run on the console, there's one available to both the console and the application's runtime.

if ($this->app->runningInConsole()) {
    $this->commands([
        \Nonoesp\Folio\Commands\GenerateSitemap::class,
        \Nonoesp\Folio\Commands\MigrateTemplate::class,
        \Nonoesp\Folio\Commands\TextAndTitleToJSON::class,
        \Nonoesp\Folio\Commands\ItemPropertiesExport::class,
        \Nonoesp\Folio\Commands\ItemPropertiesImport::class,
        \Nonoesp\Folio\Commands\ItemRetag::class,
        \Nonoesp\Folio\Commands\InstallCommand::class,
        \Nonoesp\Folio\Commands\CreateUserCommand::class,
    ]);
}

$this->commands([
    \Nonoesp\Folio\Commands\ItemClone::class,
]);

In my case, I'm the maintainer of the package and could easily work around this limitation by taking the command I want to use in Laravel out of the if statement.

But you can register commands yourself in your app's $commands array in app/Console/Kernel.php. See the following example.

// app/Console/Kernel.php
protected $commands = [
    \Nonoesp\Folio\Commands\CreateUserCommand::class,
];

While the package registers CreateUserCommand for the console only, registering it in the Kernel makes it available to my entire application, and I can call it with Artisan::call('folio:user {email} {password}') (which is this command's signature).

Thanks!

I hope you found this useful. Feel free to ping me at @nonoesp, join the mailing list, or check out other Laravel posts and code-related publications.

MAY 14, 2020

I recently got Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, 2nd Edition by Aurélien Géron as a recommendation from Keith.

This second edition updates all code samples to work with TensorFlow 2, and the repository that accompanies the book—ageron/handson-ml2—is updated frequently to keep up with the latest releases.

The Python notebooks in that GitHub repository alone are super helpful for getting an overview of state-of-the-art machine learning and deep learning techniques: from the basics of machine learning and classic techniques like classification, support vector machines, or decision trees, to the latest ways to code neural networks, customize and train them, load and pre-process data, natural language processing, computer vision, autoencoders and GANs, or reinforcement learning.

MAY 13, 2020

#Graph2Plan

Nice work from Shenzhen, Carleton, and Simon Fraser Universities, titled Graph2Plan: Learning Floorplan Generation from Layout Graphs, along the lines of #HouseGAN. Via @alfarok.

Our deep neural network Graph2Plan is a learning framework for automated floorplan generation from layout graphs. The trained network can generate floorplans based on an input building boundary only (a-b), like in previous works. In addition, we allow users to add a variety of constraints such as room counts (c), room connectivity (d), and other layout graph edits. Multiple generated floorplans which fulfill the input constraints are shown.

Read the paper on arXiv.

MAY 10, 2020

We propose In-Domain GAN inversion (IDInvert) by first training a novel domain-guided encoder which is able to produce in-domain latent code, and then performing domain-regularized optimization which involves the encoder as a regularizer to land the code inside the latent space when being finetuned. The in-domain codes produced by IDInvert enable high-quality real image editing with fixed GAN models.

MAY 5, 2020

Connect directly to RunwayML models with only a few lines of code to build web apps, chatbots, plugins, and more. Hosted Models live on the web and can be used anytime, anywhere, without requiring RunwayML to be open!

[…]

We've also released a JavaScript SDK alongside the new Hosted Models feature. Use it to bring a Hosted Model to your next project in just 3 lines of code.

APRIL 26, 2020

I managed to make this work by unlinking openssl.

https://github.com/wting/autojump/issues/540

Then reinstalling python.

brew reinstall python@2

I was having this issue when trying to install Google Cloud SDK. After doing the previous steps, I could run the installer without a problem.

./google-cloud-sdk/install.sh

APRIL 19, 2020

David Ha trained SketchRNN with a flowchart dataset. You can test his live demo (mobile friendly) and his multi-prediction demo (not mobile-friendly).

The source code is available on GitHub.

APRIL 18, 2020

Polyscope is a C++ & Python viewer for 3D data like meshes and point clouds. Here's a code sample in Python from their site. (A C++ equivalent is also available at polyscope.run.)

import polyscope as ps

# Initialize polyscope
ps.init()

### Register a point cloud
# `my_points` is a Nx3 numpy array
ps.register_point_cloud("my points", my_points)

### Register a mesh
# `verts` is a Nx3 numpy array of vertex positions
# `faces` is a Fx3 array of indices, or a nested list
ps.register_surface_mesh("my mesh", verts, faces, smooth_shade=True)

# Add a scalar function and a vector function defined on the mesh
# vertex_scalar is a length V numpy array of values
# face_vectors is an Fx3 array of vectors per face
ps.get_surface_mesh("my mesh").add_scalar_quantity("my_scalar", 
        vertex_scalar, defined_on='vertices', cmap='blues')
ps.get_surface_mesh("my mesh").add_vector_quantity("my_vector", 
        face_vectors, defined_on='faces', color=(0.2, 0.5, 0.5))

# View the point cloud and mesh we just registered in the 3D UI
ps.show()

APRIL 11, 2020

nodemon is a tool that helps develop Node.js based applications by automatically restarting the node application when file changes in the directory are detected.

How to install it globally?

npm install -g nodemon

Then you can use it anywhere as nodemon script.js, and every time script.js changes the execution will restart with the new code, saving you from repeatedly calling node script.js yourself.

MARCH 31, 2020

Pretty impressed by imgix's JSON format. Long story short, it provides a JSON metadata file for each resource you serve via imgix (say, an image or a video), as in the example below, just by appending fm=json to any image request.

For this image.

https://nono.imgix.net/img/u/profile-nono-ma.jpg

You'd load the JSON at.

https://nono.imgix.net/img/u/profile-nono-ma.jpg?fm=json

What's nice is that with the imgix.js JavaScript library you can fetch the JSON file for each picture and—before loading any image data—decide how to deliver the image. In this CodePen provided by imgix, they crop a set of images to fit a certain aspect ratio (which saves data, as the cropped bits of each image are never loaded), also avoiding background-image CSS tricks to fit an image to a given aspect ratio (as the image already has it!).

{
    "Exif": {
        "PixelXDimension": 1500,
        "DateTimeDigitized": "2019:11:20 10:43:28",
        "PixelYDimension": 1500,
        "ColorSpace": 65535
    },
    "Orientation": 1,
    "Output": {},
    "Content-Type": "image\/jpeg",
    "JFIF": {
        "IsProgressive": true
    },
    "DPIWidth": 144,
    "Content-Length": "180595",
    "Depth": 8,
    "ColorModel": "RGB",
    "DPIHeight": 144,
    "TIFF": {
        "ResolutionUnit": 2,
        "DateTime": "2019:11:20 11:17:40",
        "Orientation": 1,
        "Software": "Adobe Photoshop CC 2019 (Macintosh)",
        "YResolution": 144,
        "XResolution": 144
    },
    "PixelWidth": 1500,
    "PixelHeight": 1500,
    "ProfileName": "Display"
}

And here's the JavaScript included in that CodePen that does all the magic.

// Demonstrate the use of the `fm=json` parameter to resize images
// to a certain aspect ratio, using ES6.

let ratio = 16 / 9;
let maxSize = 300;

let placeImages = function () {
    jQuery('.imgix-item').each((i, value) => {
        let $elem = jQuery(value);
        // We pull down the image specific by the 'data-src' attribute
        // of each .imgix-item, but append the "?fm=json" query string to it.
        // This instructs imgix to return the JSON Output Format instead of 
        // a manipulated image.
        let url = new imgix.URL($elem.attr('data-src'), { fm: "json" }).getUrl();

        jQuery.ajax(url).success((data) => {
            let newWidth, newHeight;

            // Next, we compute the new height/width params for 
            // each of our images.
            if (data.PixelHeight > data.PixelWidth) {
                newHeight = maxSize;
                newWidth = Math.ceil(newHeight / ratio);
            } else {
                newWidth = maxSize;
                newHeight = Math.ceil(newWidth / ratio);
            }

            // Now, we apply these to our actual images, setting the 'src'
            // attribute for the first time.
            $elem.get(0).src = new imgix.URL($elem.attr('data-src'), {
                w: newWidth,
                h: newHeight,
                fit: "crop"
            }).getUrl();
        })
    });
}

jQuery(document).ready(placeImages);
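The core of that snippet is the size computation. Here's the same branch logic as a standalone function you can test without jQuery (a sketch; ImgixMeta and cropSize are names I'm making up), fed by the PixelWidth and PixelHeight fields of the JSON metadata:

```typescript
// The slice of the fm=json metadata this computation needs.
interface ImgixMeta {
    PixelWidth: number;
    PixelHeight: number;
}

// Compute crop dimensions that fit maxSize at the given aspect ratio,
// mirroring the portrait/landscape branch in the CodePen code.
function cropSize(meta: ImgixMeta, ratio = 16 / 9, maxSize = 300) {
    if (meta.PixelHeight > meta.PixelWidth) {
        const h = maxSize;
        return { w: Math.ceil(h / ratio), h };
    }
    const w = maxSize;
    return { w, h: Math.ceil(w / ratio) };
}
```

For the 1500×1500 image in the JSON above, cropSize returns a 300×169 crop to fit 16:9.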

MARCH 30, 2020

gs -dNOPAUSE -sDEVICE=pdfwrite \
-sOUTPUTFILE=/output/path/combined.pdf \
-dBATCH /input/path/to/pdfs/*.pdf

MARCH 24, 2020

2020.06.03

I've found that if creating or starting a notebook takes longer than 5 minutes, the notebook will fail. Plus, re-creating the conda environment every time you start an existing notebook makes the wait really long. The solution I now prefer is to use these persistent-conda-ebs scripts—on-create.sh and on-start.sh—provided by Amazon SageMaker as examples. In short, on creation they download Miniconda and create an environment with whatever Python version you choose. You can then customize that environment (say, installing Python packages with pip or conda inside of it), and it persists across sessions: future starts run the on-start script and have your notebook running in 1–2 minutes. Hope that helps! That's how I use lifecycle configurations now.


2020.03.24

Here's something I learned about Amazon SageMaker today at work.

You can create notebook instances with different instance types (say, ml.t2.medium or ml.p3.2xlarge) and use a set of kernels that have been set up for you. These are conda (Anaconda) environments exposed as Jupyter notebook kernels that execute the commands you write in the Python notebook.

What I learned today is that you can create your own conda environments and expose them as kernels, so you're not limited to the kernels offered by Amazon AWS.

This is the sample environment I set up today. These commands should be run in a Terminal window on a SageMaker notebook, but they will most likely run in any environment with conda installed.

# Create new conda environment named env_tf210_p36
$ conda create --name env_tf210_p36 python=3.6 tensorflow-gpu=2.1.0 ipykernel tensorflow-datasets matplotlib pillow keras

# Enable conda on bash
$ echo ". /home/ec2-user/anaconda3/etc/profile.d/conda.sh" >> ~/.bashrc

# Enter bash (if you're not already running in bash)
$ bash

# Activate your freshly created environment
$ conda activate env_tf210_p36

# Install GitHub dependencies
$ pip install git+https://github.com/tensorflow/examples.git

# Now you have your environment setup - Party!
# ..

# When you're ready to leave
$ conda deactivate

How do we expose our new conda environment as a SageMaker kernel?

# Activate the conda environment (as it has ipykernel installed)
$ conda activate env_tf210_p36

# Expose your conda environment with ipykernel
$ python -m ipykernel install --user --name env_tf210_p36 --display-name "My Env (tf_2.1.0 py_3.6)"

After reloading your notebook instance you should see your custom environment appear in the launcher and in the notebook kernel selector.

What if you don't want to repeat this process over and over and over?

You can create a lifecycle configuration on SageMaker that will run this initial environment creation setup every time you create a new notebook instance. (You create a new Lifecycle Configuration and paste the following code inside of the Create Notebook tab.)


#!/bin/bash

set -e

# OVERVIEW
# This script creates and configures the env_tf210_p36 environment.

sudo -u ec2-user -i <<EOF

echo ". /home/ec2-user/anaconda3/etc/profile.d/conda.sh" >> ~/.bashrc

# Create custom conda environment
conda create --name env_tf210_p36 python=3.6 tensorflow-gpu=2.1.0 ipykernel tensorflow-datasets matplotlib pillow keras -y

# Activate our freshly created environment
source /home/ec2-user/anaconda3/bin/activate env_tf210_p36

# Install git-repository dependencies
pip install -q git+https://github.com/tensorflow/examples.git

# Expose environment as kernel
python -m ipykernel install --user --name env_tf210_p36 --display-name My_Env_tf_2.1.0_py_3.6

# Deactivate environment
source /home/ec2-user/anaconda3/bin/deactivate

EOF

That way you won't have to set up each new notebook instance you create. You'll just have to pick the lifecycle configuration you created. Take a look at Amazon SageMaker notebook instance Lifecycle Configuration samples.

MARCH 20, 2020

Maker.js: Parametric CNC Drawings Using JavaScript

Twenty-five days ago, Microsoft open sourced Maker.js, a JavaScript library to create drawings in the browser for CNC and laser cutting.

I love the playground site they made to share parametric scripts (say, of a smiley face, a hello, world text, a floor plan, and more). See all demos.

From their website:

  • Drawings are a simple JavaScript object which can be serialized / deserialized conventionally with JSON. This also makes a drawing easy to clone.
  • Other people's Models can be required the Node.js way, modified, and re-exported.
  • Models can be scaled, distorted, measured, and converted to different unit systems.
  • Paths can be distorted.
  • Models can be rotated or mirrored.
  • Find intersection points or intersection angles of paths.
  • Traverse a model tree to reason over its children.
  • Detect chains formed by paths connecting end to end.
  • Get the points along a path or along a chain of paths.
  • Easily add a curvature at the joint between any 2 paths, using a traditional or a dogbone fillet.
  • Combine models with boolean operations to get unions, intersections, or punches.
  • Expand paths to simulate a stroke thickness, with the option to bevel joints.
  • Outline model to create a surrounding outline, with the option to bevel joints.
  • Layout clones into rows, columns, grids, bricks, or honeycombs.

Via @alfarok's GitHub stars.

MARCH 18, 2020

Here are some loose thoughts on what I've been tinkering with for the past months or years.

As of late, I've been working on Folio—the content-management system this site runs on—to add new features, fix bugs here and there, and make it easier to deploy across multiple websites—including mine and client sites. The system keeps getting better, and I repeatedly ask myself whether I'm reinventing the wheel in many areas. I make use of great third-party packages developed with care and thoughtfulness by other developers. It's incredible when they work but awful when other developers stop updating them and your code breaks. I now pay more and more attention to those GitHub stars (★) and pick carefully which packages to adopt. Software rots.

I've learned a lot about managing my own Linux machines—either from scratch or from an existing image—to have a new site ready within minutes (at least, when I don't hit an unknown problem that steals a few hours from my day). I'm mainly deploying apps with Nginx and Laravel, but I've also learned to run and deploy Node.js apps (with PM2), serve Docker images, and run Python and Golang programs.

I'm trying to thoroughly document all the troubleshooting I go through so I don't have to dig through the internet to fix a bug I've fixed before. While it's obvious how to fix a bug you encountered yesterday, some bugs don't show up again for a long time, and you can save hours of work by keeping good notes.

A recent practice I've started playing with is creating automation files. I'm slowly getting acquainted with "Makefiles," text files that define commands—named lists of shell calls that execute when you type make command-name in your Terminal. These commands run not only on Linux machines but also in the macOS terminal, so I can run most of my automation scripts both on the desktop and on Linux servers. Here's a sample Makefile to set up a Digital Ocean droplet.

I build Folio mainly for myself. There are many systems like it but this one is entirely built by me, and it helps me learn new programming features as well as modern patterns and techniques by browsing other people's code when I use their packages. Many will hate Folio—simply because it runs on PHP—but I believe Laravel is making this programming language great again. (Trust me, I did PhpNuke sites back in 2003 and this is light years ahead.) Laravel feels like an updated version of Ruby on Rails.

I'm migrating most of my sites to Digital Ocean. Their droplet system (without hidden costs) is great. I'm still to see where to put Getting Simple's podcast audio files. A powerful package by the Spatie team makes backing up websites a breeze: I can schedule automatic database and file backups at desired time intervals, even uploading them to multiple disks (such as Digital Ocean Spaces, Dropbox, or Amazon S3).

I've recently started using Imgix to distribute my images and remove that load from website servers. The image-processing REST API they offer is flexible and removes many headaches and hours lost manually editing images with Photoshop or other applications, be it to apply simple effects, sharpening, resizing, or even adding watermarks or padding. And their CDN makes delivery faster.

I rely less and less on TypeKit for web fonts, as I either serve the font files or use Google Fonts. There are also beautiful typefaces from type foundries that I might use soon. (Milieu Grotesque comes to mind.)

A big highlight (that sadly only runs on macOS) is Laravel Valet. After spending months learning how to configure Nginx blocks and crashing the system multiple times I found this simple tool that handles everything for you. There's a bit of reading to do to fully understand what it does but I'd summarize its benefits to two commands: valet link and valet unlink. With one command your computer serves a PHP app at http://app-name.test from the directory in which you run the command and the second command stops serving it. You can quickly register a folder to serve an app (say, with valet link nono) and quickly go to its URL to test the site locally (at http://nono.test). Behind the scenes, Valet uses dnsmasq, php, and nginx. Not having to deal with them on a daily basis makes things easy (even though I like to learn what's behind these systems and how to do it the manual way in case there's a need for it).

Another thing I'm loving is Cron jobs. They can be set either on Linux (as I do on Digital Ocean) or on macOS with crontab -e. You also have to learn a bit about how Cron works, but the short story is that it lets you schedule tasks: command executions at whatever time interval (or time of day) you want. For instance, * * * * * curl https://google.com would ping Google every minute. And you can go from every minute to certain times of the day or fractions of the hour. Laravel builds on top of it by letting you schedule commands with a clean, high-level API (for instance, ->everyMinute() or ->dailyAt('17')). All you do is set a Cron job to execute the Laravel scheduler every minute, and Laravel decides which commands to run when.

Last but not least, I'd highlight the importance of logging. Most environments have ways to log executions and errors and this is the easiest way to find out what's breaking your system. I've added a route in Folio's administration in order to visualize logs from anywhere and, at a lower level, Nginx lets you log access and errors as well.

I'm constantly learning, and Folio and my website are my playground.

As I started saying, these are loose thoughts of many of the tech I've been exploring over the past year. I'm also learning about Docker, TensorFlow, Runway, and much more, and frequently keeping my notes on Dropbox Paper. Some of my experiments are kept in the open on GitHub, and I've recently started sharing what I'm learning on this YouTube playlist.

What are you tinkering with?

MARCH 18, 2020

try:
    %tensorflow_version 2.x
except Exception:
    pass

import tensorflow as tf

Note that %tensorflow_version is only available in Colab and not in regular Python.

FEBRUARY 26, 2020

I was getting this error in Laravel after creating a new app using the latest version (that is, 6.2). Not sure why, but the class would work locally and not remotely (on a deployment running Ubuntu 18.04.3 on DigitalOcean, to be precise).

I was using \ResourceBundle::getLocales('') to get a list of valid locales present on a website and then using in_array($needle, $array) to check whether a given locale is valid in PHP.

Here's how I fixed it.

  • composer require symfony/intl to install Symfony's Intl component.
  • Replaced my in_array calls with \Symfony\Component\Intl\Locales::exists($translation).

FEBRUARY 21, 2020

In C#, .NET, and Visual Studio, you can use the Random class to generate random numbers.

First, you create a random number generator.

var random = new Random(1); // where 1 is our seed

Then you request the next random number (or next random double).

// Next random number
var aRandomNumber = random.Next();

// Next random double
var aRandomDouble = random.NextDouble();
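For comparison, JavaScript's Math.random() takes no seed, so reproducible sequences in TypeScript need a small hand-rolled generator. Here's a sketch using mulberry32, a well-known 32-bit PRNG (not part of any standard library):

```typescript
// Returns a function that yields deterministic pseudo-random
// numbers in [0, 1), like Random.NextDouble() but seedable.
function mulberry32(seed: number): () => number {
    let a = seed | 0;
    return () => {
        a = (a + 0x6d2b79f5) | 0;
        let t = Math.imul(a ^ (a >>> 15), 1 | a);
        t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
        return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
    };
}

const random = mulberry32(1); // where 1 is our seed
const aRandomDouble = random();
```

The same seed always reproduces the same sequence, which is handy for tests and generative sketches.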

FEBRUARY 13, 2020

Set editor to nano (or your editor of choice).

export EDITOR=nano
crontab -e

Edit the file to, for instance, create a folder with a timestamp every minute in your desktop folder.

* * * * * cd ~/Desktop && mkdir `date +\%y\%m\%d_\%H\%M\%S`

FEBRUARY 10, 2020

Let's say you stage all your Git changes and then commit them.

git add --all
git commit -m "Edit REDME.md"

There's a typo on REDME — should read README — and we want to "amend" this error.

git commit --amend

The commit --amend command lets you edit the commit message in the vim editor.

You can also change the message by specifying the new message in the command line with the -m argument.

git commit --amend -m "Edit README.md"

As the commit message is part of the commit itself, editing the message alters the commit hash, which means that if you've already pushed a commit to a remote, the remote won't let you push the new edit directly. But you can force that to happen.

git push --force origin branch-name

FEBRUARY 6, 2020

npx and create-react-app make it easy to create a new app running React and TypeScript.

npx create-react-app my-app --template typescript

Then you go into the folder and run the app.

cd my-app
npm start

You can create a JavaScript (non-TypeScript) app by removing the last bit—--template typescript. And you can also run the app with yarn start.

If, as I was, you're not getting the app to work, you might have an older global installation of create-react-app. In my case, I had installed it with npm install -g create-react-app (which I could verify by running create-react-app -V in the Terminal). To make sure npx uses the latest version of create-react-app, you need to uninstall the global version installed with npm.

npm uninstall -g create-react-app

Before you go

If you found this useful, you might want to join my mailing lists; or take a look at other posts about code, React, and TypeScript.

FEBRUARY 5, 2020

git push origin $(git branch | grep \* | cut -d ' ' -f2)

JANUARY 23, 2020

$output = preg_replace('!\s+!', ' ', $input);

From StackOverflow.
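For reference, the equivalent one-liner in TypeScript replaces every run of whitespace using a regular expression with the global flag:

```typescript
// Collapse any run of whitespace (spaces, tabs, newlines) into one space.
const collapse = (input: string): string => input.replace(/\s+/g, " ");
```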

OCTOBER 11, 2019

If you want to add the .cs files from one Visual Studio solution (or project) into another without duplicating or moving them, you can use the Add > Existing Item.. option. Make sure that instead of selecting the files and clicking Add (which would copy the files into your project folder, duplicating them), you click on the small arrow next to the button and choose Add As Link. The files will be linked into your project as references to the other project, and editing them changes the code for both Visual Studio projects.

OCTOBER 8, 2019

The short answer is that you can't. But there's a workaround.

Let's say you have a newsletter design as follows, which shows the post image, followed by its title and full content.

*|RSSITEM:IMAGE|*

*|RSSITEM:TITLE|*

*|RSSITEM:CONTENT_FULL|*

The *|RSSITEM:IMAGE|* RSS merge tag will pull in the image and assign the created img element the width and height of the original image. Something like this (an example extracted from my Mailchimp campaign using the design template above).

<img class="mc-rss-item-img" src="https://nono.ma/img/u/sketch-170202_cambridge-clary-st.jpg" height="1868" width="2500" style="height: 1868;width: 2500;">

The problem is that we don't want to explicitly specify the height and width of our image, as it's oversized compared to the width of our design—which in my example was 600px plus margins, leaving 564px for the image itself.

Here's the workaround: you create the image's img HTML element yourself, formatting it as you want, and use the *|RSSITEM:ENCLOSURE_URL|* merge tag—which according to Mailchimp "displays the URL for the attached file [of your post]." In my case, the RSS feed uses the enclosure tag to send the image URL.

// RSS feed item enclosure tag
<enclosure url="https://nono.ma/img/u/sketch-170202_cambridge-clary-st.jpg" type="image/jpeg"/>

Then I can use that URL in my Mailchimp design as follows (also adding the post's title and full content below the image).

<img src="*|RSSITEM:ENCLOSURE_URL|*" style="max-width: 100%">

*|RSSITEM:TITLE|*

*|RSSITEM:CONTENT_FULL|*

SEPTEMBER 11, 2019

As a security mechanism, Windows blocks files downloaded from the internet, and unzipping them natively—by right-clicking and choosing Extract All..—preserves that block on every extracted file. You can skip the manual unblocking by using 7-Zip to extract the files instead of Windows' built-in mechanism.

We want to get rid of a message you might have seen over and over when downloading files from the internet. "This file came from another computer and might be blocked to help protect this computer," next to an Unblock button. If you know about this, it's fine to unblock a couple files, but it's really annoying when you download a zip containing dozens of files which you have to unblock individually, one by one.

Download and install 7-Zip and unzip downloaded plugins with right-click > 7-Zip > Extract here, or any of the other extraction options.

This issue has given me a lot of headaches when installing both Dynamo and Grasshopper plugins, but you might run into it in other environments as well. I'm glad there's an alternative to unblocking each file—be it DLL, gha, gh, dyn, PDF, exe, or files in other formats—separately: unzipping a group of files, all unblocked, at once.

SEPTEMBER 10, 2019

By default NPM—the Node Package Manager—uses its own public registry (at https://registry.npmjs.org) to pull down packages when you run npm install or npm update inside of a project.

You can specify different registries at multiple levels or scopes to override these default values (and other configuration settings). Ideally, when working on projects that require a specific registry—due to security, or maybe just because some packages live in a private repository—you would set NPM to look for packages on that registry inside of the project folder.

Command level (only affects the command itself)

npm install --registry=https://some.registry.url

Project level (these would be the contents of the .npmrc file inside of the project folder)

registry=https://some.registry.url
@namespace:registry=https://some.other.registry.url

Global (these would be the contents of the .npmrc file inside of your system user folder)

registry=https://some.registry.url
@namespace:registry=https://some.other.registry.url

How to check your configuration is working

Run the npm config list command to see what variables are set. If you run this in a project folder with an .npmrc file you should see its overriding configuration settings in the folder, and the system global settings after that.

If you run the command in any other folder (without an .npmrc file with custom settings) you should see whatever is set in the global .npmrc and then the default configuration values of NPM.

SEPTEMBER 10, 2019

A cheat-sheet for mathematical notation in [JavaScript] code form.

SEPTEMBER 9, 2019

Today I've automated the backup of the configuration, database, and static files of all the websites I manage. Two and a half hours that will save me a lot of time in the future and remove stress when weird things happen. The backup—of six websites on three different servers running Laravel—downloads a copy of the database, the .env files, and the static files (a zip with the contents of the public folder) of each site.

I'll probably open source these scripts in the near future.

One new thing I learned was creating bash functions, like this one.

# create a variable with current date, formatted as yymmdd_HHMMSS
DATE_NOW=$(date '+%y%m%d_%H%M%S') 

# function that zips something and removes it
zip_and_remove() { cd $1 && zip "$2.zip" $2 && rm $2 && cd ..; }

# function that downloads a file via ssh then calls the previous one
download_zip_remove() { scp $1 $2/$3 && zip_and_remove $2 $3; }

# a function call
download_zip_remove root@192.168.1.2:/var/www/site.com/folio/.env $DESTINATION $(echo $DATE_NOW)_$SITENAME.env

Want to see older publications? Visit the archive.

Listen to Getting Simple.