chxo internets

  • Archive
  • RSS

(Yet Another Reason) Why We Need IPv6 Now: HTTPS

Google announced recently that sites using HTTPS (secure web connections, aka SSL) may be given preferential search rankings, and there is a general widespread call to use HTTPS everywhere. Gandi.net is offering a free security certificate with every new domain registration, and presumably other registrars will follow suit.

But the big bottleneck to HTTPS adoption for smaller sites is that it is not easy to use with the most common kind of virtual hosting: name-based virtual hosting. That’s where you have many different sites on one server, and they all share the same IP address.

In name-based virtual hosting, when the server receives a request for a web page, it checks which domain name is being asked for and then serves up the correct page. Unfortunately, with HTTPS, the domain name is encrypted along with the rest of the request, so the encrypted connection must be set up, with the correct certificate, before the server can determine the name. It’s a classic chicken-and-egg problem.

There are two ways around this, neither of which scales very well:

1) Use a different IP address for each domain.
2) Use a single certificate that is valid for multiple domain names.

Number 1 doesn’t scale because IPv4 addresses are a finite resource. ISPs and cloud providers are already getting antsy about handing them out.

And number 2 doesn’t scale because certificate authorities limit the number of alternate names you can add to any one certificate. 20 is a common limit. There is also an administrative burden of matching websites to certificates to configurations as customers sign up and leave, which is a bit like playing Tetris.

Switching (finally!) to IPv6 would solve the scarcity problem and allow us to assign a unique IP address to each website, which in turn allows each customer to bring their own TLS certificate to the table. 
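With a unique address per site, the web server configuration becomes straightforward. Here’s a hypothetical Apache sketch — the addresses are from the IPv6 documentation range, and the domains and certificate paths are made up for illustration:

```apache
# Each site gets its own IPv6 address, so each can present its own cert
# before any hostname is known.
<VirtualHost [2001:db8::a]:443>
    ServerName alice.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/alice.example.com.crt
    SSLCertificateKeyFile /etc/ssl/alice.example.com.key
</VirtualHost>

<VirtualHost [2001:db8::b]:443>
    ServerName bob.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/bob.example.com.crt
    SSLCertificateKeyFile /etc/ssl/bob.example.com.key
</VirtualHost>
```

Because the match is on the IP address rather than the Host header, the certificate ambiguity never arises.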

I hope that our evolving common understanding of Internet security and the need for HTTPS connections everywhere (which is constantly being reinforced!) will give end-user ISPs the push they finally need to implement end-to-end IPv6.

  • 2 months ago
  • Comments
  • Permalink

Working on a new website

We’re working on a new website. On a few new websites, actually, but the one I’m talking about will replace the tidy little one-page site that’s currently at http://www.chxo.com/

I have no idea when it will be ready. Before December, I hope. Finding time between all of the other projects is tough, especially when those other projects are paying the bills. 

It will be responsive, and mobile-friendly, and easy to use, just like all the other new sites we’re building. And it will replace (or at least supplement) a lot of the estimating and time-tracking functions we’re currently using FreeAgent to manage.

  • 4 months ago
  • Comments
  • Permalink

Chxo Hosting and HeartBleed

The good news is that until mid-March, we were using a version of OpenSSL that was not vulnerable to the HeartBleed bug.

However, in March we started migrating sites to a newer server OS that included the affected version. So for about 20 days, until our OpenSSL was updated on April 9, our sites were vulnerable.

As of April 11, we have re-keyed all of our SSL Certificates, and changed important passwords related to our domains and infrastructure.

Should you change your password?

Given the relatively small window of opportunity for attackers to exploit the bug on our servers, and the availability of much more prominent sites to attack, we are not requiring you to change your password.

If you logged in to your website in the last 3 weeks, however, attackers may have been able to capture your password or other private information, and you should go ahead and change it.

If your website password is the same as one you use on other vulnerable sites, such as GMail, Dropbox, Yahoo, or GoDaddy, then you should change your password. Attackers have almost certainly used this bug against prominent web services in the last 2-3 years.

If you have any questions or concerns about this issue, please contact me.

  • 7 months ago
  • Comments
  • Permalink

A New Chxo

Chxo.com started in 2001, as a developer of open source website software. We built the Berylium content management system, and Fotola.com, and a few long-gone websites for commercial clients such as Lifetime Television and Lightswitch. But I wanted to do more than just create websites.

For the last 10 years, I had the honor of being Director of the Center for Internet Innovation at the Fund for the City of New York. There, my colleagues and I built some really amazing things, and I got to work with a lot of really great people who are doing their best to improve the quality of our lives — all of us, not just people with lots of money or access to the latest, greatest technology. 

But the world changes, and organizations change, and it’s time for me to re-start Chxo.com as a software development company. I want to provide friendly web services to individuals, families, and small organizations. It’s what I love to do, and I’m very much looking forward to doing it here.

There will be a new website. And new services. And a bunch of useful web apps. Stay tuned, as we used to say…

  • 7 months ago
  • Comments
  • Permalink

Restoring Safari Bookmarks from Time Machine (Mavericks Edition)

So iCloud mixed together all your bookmarks, or your kid deleted them or something. Isn’t it a good thing you use Time Machine? Yes.

How do you get your Safari bookmarks out of a Time Machine backup?

  1. Tell Finder that you want to see hidden files:
    defaults write com.apple.finder AppleShowAllFiles -bool true
  2. Go to your Home folder, then Library/Safari - look for the file Bookmarks.plist.
  3. Click the Time Machine icon in the menubar and select “Enter Time Machine”
  4. Go back to a known-good copy of Bookmarks.plist
  5. Select it and click Restore (lower right)
  6. Tell Finder to hide hidden files again:
    defaults write com.apple.finder AppleShowAllFiles -bool false
  7. Now go to Safari and enjoy your restored bookmarks!

Could Apple make this easier? Of course. There could be a “Restore previous Bookmarks…” option in Safari’s bookmarks. But it used to be worse. At least in Mavericks we don’t need to restart Finder or Safari after making changes to their preferences.

  • 8 months ago
  • Comments
  • Permalink

How to Restrict Linux Users to Only SFTP (without scponly)

Until recently, I’ve been restricting users to SFTP (or SCP) by setting their shell to scponly. But then Debian removed scponly from their distribution. It turns out there’s another, better, way to do it using a Match block in sshd_config, and some directives that strictly limit what matching users can do.

But first, a word about SFTP:

“Compared to the SCP protocol, which allows only file transfers, the SFTP protocol allows for a range of operations on remote files – it is more like a remote file system protocol. An SFTP client's extra capabilities compared to an SCP client include resuming interrupted transfers, directory listings, and remote file removal.” — Wikipedia

Apparently the extra overhead makes SFTP a little slower than SCP for file transfers, but SCP is a power user program. Any user with a GUI will use SFTP, since the GUI will need directory listings at a minimum. And if you mount an SSHFS filesystem, SFTP is the underlying protocol.

Using the Internal SFTP Server

Traditionally, sshd’s SFTP server was implemented by a separate program called sftp-server, which was defined in sshd_config using the Subsystem directive.

Since OpenSSH 4.8 (we’re now at 6.x), sftp-server has been linked into sshd itself. This means that sshd can respond directly to SFTP requests, without needing to run another command as the user. This closes some potential security and operational gaps (like motd breaking sftp, or the need for a wrapper shell like scponly to set restrictions). It also means that sftp sessions can occur inside a chroot, without having to make any binaries available inside that jail.

So the first edit you may need to make to your sshd_config is to change the sftp Subsystem to internal-sftp:

#Subsystem sftp /usr/lib/sftp-server
Subsystem sftp internal-sftp

For the purist, this step isn’t technically necessary. You can still use the external sftp-server command and restrict users to that, but why would you want to? It’s the same thing, and internal is easier (and, I believe, safer).

Restricting Users to SFTP Only

Now you can use a Match block in sshd_config to restrict users who belong to a particular group to using only the internal-sftp server when they connect via ssh.

It’s important to note that Match blocks go at the end of sshd_config, because there is (apparently? really?) no EndMatch directive.  The Match applies to all following lines until the next Match or the end of the file. So tack this on to the end of your sshd_config:

# sftponly users
Match group sftponly
AllowTcpForwarding no
X11Forwarding no
ForceCommand internal-sftp

You can see that this does a few things. It prevents the ssh connection from using TCP forwarding, so users can’t use your server to proxy websites or carry out forwarded attacks on other hosts. For good measure, it prevents X11 forwarding (if you have to ask). And most importantly, it forces the internal-sftp server to be the only “command” run by ssh.

ForceCommand overrides any command specified on the ssh client command line, and it also overrides any command specified in the user’s authorized_keys file. They are locked down, barring any exploits in sshd itself.

Keeping Users in a Chroot Jail

Sometimes, restricting users to SFTP is enough. But if you don’t need to allow access to the rest of the filesystem, why would you? You usually only want users to be able to access files within their home directory, so use a chroot to keep them there. Otherwise they can go wandering all over your server, looking at configuration files and /etc/passwd and /dev and /proc…

The ChrootDirectory directive:

Specifies the pathname of a directory to chroot(2) to after
authentication.  All components of the pathname must be root-
owned directories that are not writable by any other user or
group.  After the chroot, sshd(8) changes the working directory
to the user's home directory.

Since the pathname must be owned by root, this may prevent some admins from chrooting to the user’s home directory, which is classically (if dangerously?) owned by the user, and definitely writable by the user.

One recommendation is to create one or more directories inside of the user’s home directory to use for file transfer. This is the safest approach, as it prevents the sftponly user from creating new top-level directories and files. It makes their unix home directory a sort of demilitarized zone controlled by root, where the user can write to existing files (with permission) but cannot create new files there.

# sftponly users, chrooted
Match group sftponly
ChrootDirectory /home/%u
AllowTcpForwarding no
X11Forwarding no
ForceCommand internal-sftp

Same Match directive as before, but with ChrootDirectory added. The %u in the path expands to the user’s username. You could also use %h for the home directory, instead of /home/%u, in case home directory location varies by user.

In Conclusion

At first I was upset that Debian had ditched scponly in 7.x (Wheezy). But using sshd_config to restrict file transfer users to internal-sftp and chroot them to their home directories is a much better and safer solution. It requires a few extra steps when creating a new file transfer user, such as:

usermod -a -G sftponly username       # add the user to the sftponly group
chown root:root /home/username        # the chroot directory must be root-owned
mkdir /home/username/files            # a transfer directory inside the jail...
chown username /home/username/files   # ...owned and writable by the user

… but those are easily scripted, and simple enough to perform by hand if you only add new users like this occasionally.
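For what it’s worth, those steps are easy to wrap in a small helper. Here’s a sketch (the function name and the dry-run approach are my own invention) that prints the commands for review before you pipe them to a root shell:

```shell
#!/bin/sh
# Print the setup commands for a new chrooted sftponly user.
# Review the output, then run it as root:
#   new_sftp_user alice | sudo sh
new_sftp_user() {
    u="$1"
    printf '%s\n' \
        "usermod -a -G sftponly $u" \
        "chown root:root /home/$u" \
        "mkdir -p /home/$u/files" \
        "chown $u /home/$u/files"
}

new_sftp_user alice
```

Printing first instead of executing directly means a typo in the username can’t chown the wrong directory.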

  • 9 months ago
  • Comments
  • Permalink

Copy full name and address to the clipboard in Mavericks Mail.app

One of the little things that has annoyed a lot of people about Mail.app over the years is that when you try to copy an email address from the To: or Cc: lines, it copies the full name, like “Chris Snyder <csnyder@chxo.com>”.

In Mavericks, the OS X team changed the behavior to copy just the email address, and there was jubilation in the land. Or so I’m told.

But what if you want the name, too? There seems to be no way to get it, meaning that you have to painstakingly type out someone’s full name and hope you don’t make a typo. Nobody likes to see their name misspelled!

The solution: hold down the Option key. “Copy Address” will magically change to “Copy Name and Address”.

This is a very Mac-like solution, and while I say boo for hiding it, I’m glad it’s there!

  • 10 months ago
  • Comments
  • Permalink

OS X Mavericks: Slow Network Filesystem?

Workaround for the Slow Open / Save Dialog Box Problem in OS X Mavericks

— this worked for me, but seems a bit heavy-handed. I mean, wtf? I have to turn off network share automounting? Well okay, if it saves me 30 seconds every time I browse to a new directory on a network share.

  • 11 months ago
  • Comments
  • Permalink

E-mail Address Books are Not a Social Graph

"NSA collects millions of e-mail address books globally”
via Bruce Schneier via Washington Post.

They capture address books and contact lists in transit over unsecured or MiTMed webmail responses. Yahoo!, waaaaay too late to the game with HTTPS (come on guys!), is a prime offender, but the NSA manages to capture Hotmail, Facebook, and GMail contacts as well.

This gets under my skin, in a way that some of the other surveillance doesn’t. E-mail address books — leave Facebook aside for now — are built dynamically from emails received and responded to. As anyone who is on a LISTSERV knows, you end up with a lot of contacts in your address book who you do not actually know.

In fact, there are all kinds of reasons you might respond, innocently enough, to an email from someone you don’t know. I’ve gotten a lot of misdirected email over the years, and responded to some of the more sentient messages with a “sorry, wrong address” reply. Now that sender is in my address book. Or consider that one of the best things about email, as opposed to walled-garden services like AOL and Facebook, is that anyone in the world can contact you to ask a question or make a comment about something you’ve done or said or offered. People write to me, out of the blue, about open-source software I wrote ten years ago. I don’t know who they are, but I’m happy to answer their questions if I can. And now they’re in my address book.

Do I think the NSA doesn’t know this? Well, I would hope they do, but consider that Google themselves, who should have quite a bit more knowledge about their own products, infamously decided one day that you would want everyone in your contacts list to see links you’d selectively shared with others in another product. No distinction between strangers, acquaintances, friends, work colleagues, clients, ex-lovers, family, fellow parents, friends of parents, members of your church/softball team/book club, or any of the thousands of other shades and distinctions in a real-life social graph.

This is the curse of metadata analysis: it only reveals a tiny piece of the elephant, and your algorithms and operatives have to infer the rest.

  • 11 months ago
  • Comments
  • Permalink

Debian Linux load average 1.0 mystery solved!

I have a server where for months the CPU load average has never dipped below 1.0. Since the server hosts a busy website that was programmed by someone else, I just figured poor coding practices were to blame. The system didn’t seem particularly slow.

So I was musing on it today, watching top. No processes were claiming unruly amounts of CPU time. I killed Apache, no change. Killed MySQL, no change. Oops, I guess it’s not the website. I ran checksums on top and ps to see if they were hacked binaries installed by a rootkit or something… nope.

But wait: watching top, there’s khubd in a D state, meaning that some USB driver is probably waiting for I/O.

It turns out there was a USB hard drive (formerly used for backups) plugged into the server but powered off. And this was causing the USB thread to freak out.

External hard drive unplugged, CPU load average is now down to 0.03. Now why didn’t I think of that before?
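The lesson for next time: on Linux, processes stuck in uninterruptible (D) sleep count toward the load average even though they use no CPU, which is exactly how a dead USB drive can pin the average at 1.0. Rather than eyeballing top, you can list them directly — here’s one quick way (the awk filter is my own):

```shell
# List processes in uninterruptible (D) sleep -- usually something
# blocked on I/O, like a powered-off USB drive.
ps -eo pid,stat,comm | awk 'NR > 1 && $2 ~ /^D/ {print $1, $3}'
```

On a healthy system this prints nothing; a process that shows up here persistently is worth investigating.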

  • 1 year ago
  • Comments
  • Permalink

Check the download progress of App Store apps by clicking on the Purchases tab.

So obvious it’s silly, and yet I hunted around for a while to find it, via OSX Daily.

  • 1 year ago
  • Comments
  • Permalink

Some Brine-Pickling Links

First of all, Awesome Pickle, A Microbe Herder’s Almanac. Nice writing, practical recipes.

Making Sour Pickles by the fermentation guru Sandor Katz.

A Boing-Boing take on making sauerkraut, including a scary mold bloom.

  • 1 year ago
  • Comments
  • Permalink

I haven’t spent nearly enough time working with HTML5 video. Man, it’s easy! No video.js or other player needed, you just drop the tag in the page and go.

One thing that made it easier: Miro’s free Video Converter. Encodes html5-ready mp4, ogg, and webm video for that cross-platform, plays-anywhere experience. 

I’m still upset that you need to post three files to play one video. But the ease of using (and scripting) the MediaElement makes up for it.

  • 1 year ago
  • Comments
  • Permalink

Freeing a stuck UPS battery

The thing that’s going to send my whole operation to The Cloud is that I will *never* have to deal with another gosh-darned APC UPS again.

To wit: Battery stuck in APC SmartUPS. How do I get it out?

The short answer: you take the case apart. Good luck!

And don’t short the battery terminals while you’re prying it out of there with your all-metal Leatherman tool…

  • 1 year ago
  • Comments
  • Permalink

About

A network of memes,
by Chris Snyder
