Chasing a Wifi Ghost

It has been quite a while since a technical issue last made me this confused (and excited).

The problem: I was getting very slow download speeds (~10Mbps) on my Windows laptop when connected to Wifi.

To make it a bit more puzzling:

- The upload speed is much better (~60Mbps).

- Sometimes the download speed may reach ~100Mbps for a few seconds, before quickly falling back to ~10Mbps.

Having been suffering from it for a few months, I finally decided to take on this issue today.

Beginning of the Journey

As usual, there is a troubleshooting routine I should go through. Also as usual, it doesn't help much.

- Try other devices with the same Wifi
- Try to connect to other Wifi
- Try to connect to Ethernet
- Reset the Wifi router
- Run Windows troubleshooter

Well, eventually I did get some information. The download & upload speeds are actually normal on all my other devices, including Android/iOS phones, a MacBook Pro, etc.

This piece of info narrowed the root cause down to the laptop itself: the router must be working well, and I'm not being throttled by my ISP.

The Chase 1: Killer

My machine has a Killer Wifi card, which had some issues before. At the beginning, I tried to examine all Killer components thoroughly:

- Try the latest Wifi driver
- Try some older Wifi drivers
- Install / Uninstall Killer software & services

Not surprisingly, nothing worked.

The Chase 2: Windows

Next, I tried to exhaust my knowledge about Windows networking, in order to examine all relevant aspects:
- Run sfc /scannow
- Reset TCP settings
- Reset network stack (run ipconfig, netsh etc.)
- Disable SmartByte service (which is actually not available on my machine)
- Disable WMM/QoS (which is actually not available on my router)
- Disable power saving mode for the adapter
- Alter the wireless mode (802.11) of the adapter
- Disable network discovery
- Disable firewall
- Remove custom DNS servers
- Disable features (e.g. IPv6) of the adapter
- Disable other network adapters
- Disable Bluetooth
- Disable Windows TCP autotuning
- Disable WAN miniport
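For reference, the "reset" and autotuning items above boil down to a few standard commands (run in an elevated prompt, reboot afterwards; these are the generic incantations, nothing Killer-specific):

```shell
# Standard Windows network-stack reset (elevated prompt; reboot afterwards)
sfc /scannow
netsh winsock reset
netsh int ip reset
ipconfig /flushdns
# Disable TCP receive-window autotuning (revert with autotuninglevel=normal)
netsh int tcp set global autotuninglevel=disabled
```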

Again. Nothing worked.

The Ghost

After reading the Killer troubleshooting pages a few times (one, two), I finally noticed an instruction to set the "channel width" to 20MHz. This is relevant since I'm using 5GHz Wifi.

Having absolutely no expectation, I changed the setting in the router, and that's how I captured the ghost.
Now I finally have ~100Mbps download speed. Still not as fast as Ethernet, but good enough.


Previously the router had been configured to choose from 20/40/80MHz automatically, and according to my Wifi scanner it had been using 80MHz. If I understand correctly, changing to 20MHz helps because there are many other Wifi signals "nearby": both physically close and on similar channel frequencies. This probably also explains why the download speed was unstable.

It remains unsolved why my Killer Wifi card cannot handle the wider channel while all my other devices can.

Anyway, I'm happy to have met this cute new ghost, and to keep it together with its friends.


Determine Perspective Lines With Off-page Vanishing Point

In perspective drawing, a vanishing point represents a group of parallel lines, in other words, a direction.

For any point on the paper, if we want a line towards the same direction (in the 3d space), we simply draw a line through it and the vanishing point.

But sometimes the vanishing point is too far away, such that it is outside the paper/canvas.

In this example, we have a point P and two perspective lines L1 and L2.

The vanishing point VP is naturally the intersection of L1 and L2. The task is to draw a line through P and VP, without having VP on the paper.

I am aware of a few traditional solutions:

1. Use extra pieces of paper such that we can extend L1 and L2 until we see VP.
2. Draw everything in a smaller scale, such that we can see both P and VP on the paper. Draw the line and scale everything back.
3. Draw a perspective grid using the Brewer Method.

#1 and #2 might be quite practical. #3 may not guarantee a solution, unless we can measure distances/proportions.

Below I'll describe a method I learned in high school. It is more of a math quiz than a practical drawing method, but it is quite fun.

To make it more complicated, there is an extra constraint for the drawer: you may only use a ruler without measurements; that is, you can draw a straight line through two given points (assume that the ruler is long enough), but you cannot measure lengths or angles. You cannot find midpoints either (e.g. by folding the paper).

Here we go:

Step 1:

Draw two lines through P: the first cuts L1 at A1 and L2 at B1, the second cuts L1 at A2 and L2 at B2.

Step 2:

Draw two lines (A1, B2) and (A2, B1).
Find the intersection point Q.

Step 3:

Draw a line (Q, P), cutting L1 and L2 at A3 and B3 respectively.

Step 4:

Draw two lines (A1, B3) and (A3, B2).

Find the intersection P'.

Step 5:

Draw a line through P and P'.

This is the desired perspective line through P.
The correctness can be formally proved with some calculation.

On the other hand, if we view Q as the second vanishing point, this figure represents a perspective view of a 2d rectangle (A1, A2, B1, B2). The rectangle is cut into two rectangles by the line (A3, B3). Note that (A1, B2), (A3, B3) and (A2, B1) are parallel lines in the 3d space.

Now it is clear that P is the center of the rectangle (A1, A2, B1, B2), and P' is the center of the rectangle (A1, A3, B3, B2). Therefore the lines (A1, A2), (B1, B2) and (P, P') must all be parallel in the 3d space. In the perspective view they must all pass through a common vanishing point. This shows that the line PP' is the answer.
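Since the whole construction is just straight lines and intersections, it can also be checked numerically. Below is a small sketch (the coordinates, including the "off-page" VP, are arbitrary choices of mine for the check) that runs Steps 1-5 and verifies that P, P' and VP are collinear:

```javascript
// Intersection of two lines, each given by two points, via the standard formula.
function intersect([[x1, y1], [x2, y2]], [[x3, y3], [x4, y4]]) {
  const den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4);
  const px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / den;
  const py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / den;
  return [px, py];
}

// L1 and L2 meet at an off-page vanishing point VP = (100, 0).
const VP = [100, 0];
const L1 = [[0, 10], VP];
const L2 = [[0, -10], VP];
const P = [10, 2];                 // the point to draw the perspective line through

// Step 1: two lines through P, cutting L1 at A1, A2 and L2 at B1, B2.
const A1 = intersect([P, [0, 20]], L1), B1 = intersect([P, [0, 20]], L2);
const A2 = intersect([P, [0, -15]], L1), B2 = intersect([P, [0, -15]], L2);
// Step 2: Q is the intersection of (A1, B2) and (A2, B1).
const Q = intersect([A1, B2], [A2, B1]);
// Step 3: the line (Q, P) cuts L1 at A3 and L2 at B3.
const A3 = intersect([Q, P], L1), B3 = intersect([Q, P], L2);
// Step 4: P' is the intersection of (A1, B3) and (A3, B2).
const Pp = intersect([A1, B3], [A3, B2]);

// Step 5's claim: P, P' and VP are collinear (the cross product vanishes).
const cross = (Pp[0] - P[0]) * (VP[1] - P[1]) - (Pp[1] - P[1]) * (VP[0] - P[0]);
console.log('collinear:', Math.abs(cross) < 1e-9);
```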

There are 2 special cases.

The first case is when Q is also far outside the paper. This may happen when P is near the center between L1 and L2.

To fix this, we can find a new perspective line between L1 and L2, using the standard bisection technique.

Now that the new line is closer to P, we can apply the method above to L2 (or L1) and the new line.

If the new line is still too far from P, a few iterations of bisection should be enough.

The other special case is when P is not between L1 and L2.

In this case we may use the standard extension method to find a new perspective line on the other side of P. A few iterations might be needed.

Finally we can apply the method above with the new line and L1.

There are still other cases where this method won't work. For example, if L1 and L2 are close to the top and bottom edges of the paper, there is not much we can do. After all, this was only a math quiz in the first place.


ChromeVFX Prototype

TLDR: ChromeVFX uses Chrome as an MLT filter


In the demo, I'm editing an MLT file with a WebVfx filter in Shotcut. Shotcut is connected to a Chrome instance via ChromeVFX.

As shown in the video, every frame is directly rendered in Chrome and reflected in Shotcut. I can also modify the web page directly in Chrome (zooming in/out).

In this setup Shotcut is running in a Linux VM and Chrome is running on the Windows host. So technically this is already a remote Chrome. A local or headless Chrome should also work in theory.


I've been using MLT and WebVfx for a while; together they allow me to render various things using web technologies.

WebVfx internally uses QtWebKit to render HTML/JS. QtWebKit naturally uses Qt to enable communication between C++ and JavaScript, and it is quite easy to pass messages/events back and forth with the Qt language bindings.

However QtWebKit is not the ideal choice. It has been officially removed from Qt 5.5, although we can still compile it from source. It uses an old version of WebKit with some bugs and missing HTML5 features. @annulen has been making efforts to bring it up to date, but it doesn't seem ready yet. Besides, WebKit doesn't include V8.

There have been discussions about porting WebVfx to QtWebEngine or the Chromium Embedded Framework, which are Qt and C++ bindings of Chromium. In theory both should work, but in practice it's not that easy. I've played with both and didn't get very far. Both frameworks provide "raw access" to Chromium, which makes them very powerful, but it also means we have to handle things like message loops and coordination among various processes. I just got lost in the docs and code.

Recently I learned about the Chrome DevTools Protocol, and decided to give it a try.


So my idea is to use Chrome as an MLT filter. Whenever MLT requests a frame, we pass all the information to Chrome, let it render, and pass the rendered image back to MLT. There are several benefits to doing so:

  • The plugin code no longer depends on Qt or Chromium. The logic is greatly simplified compared with the current version of WebVfx. It'll be very easy to maintain and distribute the codebase.
  • Chrome provides very good (if not the best) support for the latest web standards, along with high performance (e.g. V8, hardware acceleration). It is available for most platforms.
  • Having a running Chrome by the side of Shotcut is probably the ideal configuration for debugging WebVfx.

ChromeVFX Overview

As mentioned above, the goal is to connect MLT and Chrome. For every render request from MLT, we need to pass the information to Chrome, let it render, and pass the rendered image back.

Chrome DevTools Protocol and puppeteer

This is a "backdoor protocol" in Chrome. If a Chrome instance is running with remote debugging enabled, a client can control and inspect it remotely. The protocol provides most (if not all) of the features of the built-in developer tools.

The protocol was designed for debugging, hacking and automated testing etc. The official high-level client is called puppeteer, which is written in Node.js.

Connecting MLT with puppeteer

MLT is written in C++ and puppeteer in Node.js. To connect them, I used Boost.Interprocess and wrote a wrapper as a Node.js C++ addon. Boost.Interprocess provides shared memory regions and a message_queue, which are very easy to use.

Since the Chrome DevTools Protocol is based on JSON-RPC over WebSocket, I had originally planned to talk to Chrome directly from C++. However, after some research I realized that it would not be easy to handle all the DevTools details.
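For a flavor of what puppeteer abstracts away, here is the shape of one raw protocol exchange (Page.captureScreenshot is an actual protocol method; the WebSocket plumbing is omitted, and the base64 payload below is just the 8-byte PNG signature for illustration):

```javascript
// One JSON-RPC-style request over the DevTools WebSocket...
const request = JSON.stringify({
  id: 1,                                 // echoed back in the matching reply
  method: 'Page.captureScreenshot',
  params: { format: 'png' },
});

// ...and the reply, carrying the screenshot as base64 in result.data.
const reply = JSON.parse('{"id":1,"result":{"data":"iVBORw0KGgo="}}');
const pngBytes = Buffer.from(reply.result.data, 'base64');
console.log(reply.id, pngBytes.length);
```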

In the end this C++/Node.js channel was surprisingly easy to implement.

Important Code Snippets

Render Server

It's a Node.js script that connects to Chrome via puppeteer. It forwards render requests from MLT to Chrome and passes screenshots the other way around. The event loop looks like this:
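The original snippet isn't reproduced here, but a minimal sketch of such a loop could look like the following. The `ipc` handle (the blocking message-queue wrapper from the C++ addon) and the in-page `webvfx.render` hook are assumptions about the prototype's interfaces, not verbatim code:

```javascript
// Render server loop: pull a request from MLT, render it in Chrome, send the PNG back.
async function serveFrames(ipc, page) {
  for (;;) {
    const req = await ipc.receive();          // e.g. { time, width, height } from the MLT filter
    if (req === null) break;                  // shutdown sentinel (assumed convention)
    await page.evaluate((t) => webvfx.render(t), req.time);
    const png = await page.screenshot({       // encoded PNG bytes from Chrome
      clip: { x: 0, y: 0, width: req.width, height: req.height },
    });
    await ipc.send(png);                      // back across the message queue to MLT
  }
}
module.exports = { serveFrames };

// Wiring it to an already-running Chrome would look roughly like:
//   const browser = await puppeteer.connect({ browserURL: 'http://127.0.0.1:9222' });
//   await serveFrames(ipc, (await browser.pages())[0]);
```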

IPC for Node.js

The ipc module mentioned above is a wrapper around Boost.Interprocess, which looks like this:
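The snippet isn't reproduced here either; as a sketch, the JS-facing half might simply promisify the blocking native calls. The `nativeBinding` argument stands in for the compiled addon (something like `require('./build/Release/ipc.node')`), whose `open`/`receive`/`send` surface is an assumption:

```javascript
// JS face of the ipc module: wrap the native addon's callback API in promises.
// The native side would attach to a pair of Boost.Interprocess message_queues.
function openQueue(name, nativeBinding) {
  const handle = nativeBinding.open(name);
  return {
    receive: () => new Promise((resolve) => handle.receive(resolve)),
    send: (buf) => new Promise((resolve) => handle.send(buf, resolve)),
  };
}
module.exports = { openQueue };
```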

MLT filter

Finally, this is the modified EffectsImpl::render from WebVfx, now greatly simplified:


Chrome Instance

In the demo I'm using an already-running remote Chrome. puppeteer can also start a new Chrome/Chromium on demand. Headless Chrome/Chromium should also work.


I had always been worried about performance: there are so many layers between Chrome and MLT, including the network, SSL, IPC and especially PNG encoding and decoding. However it appears fine in my demo, even with a remote Chrome. Of course this can never be as fast as a native CEF integration, but in my case the bottleneck is usually the rendering part, which involves heavy JS code and 3d rendering. So it is already worth it to move from QtWebKit to Chrome.

WebVfx Interface

In the prototype I have implemented only a minimal webvfx interface, barely enough to make the demo work. Most features are not available yet:
  • passing parameters (as defined in mlt xml)
  • passing images (existing frame to be processed by the filter)
  • multiple running filters (currently the render server allows only one client)
All of them should be easy to implement, with a better defined IPC protocol.

On the other hand, the WebVfx protocol relies on a global webvfx JS object, which is used to register the render function and to signal to MLT that the page has been initialized.
In my demo I used some hacky code via console.log(). I think it should be easy to inject some JS code/objects via Node.js, but I'm not sure whether this can be done before a page is loaded.

In the worst case, we may introduce a webvfx js library that each webvfx page should include.


This prototype works much better than I had expected. It demonstrates the potential of using Chrome as an MLT plugin. With more effort, ChromeVFX may actually become a useful one.

Of course a proper CEF integration may achieve the same thing with better performance, but it may or may not be worth it considering the cost of development and maintenance.









After discussing it with a friend last year, though, I figured I could run a quantitative experiment to judge how much each movie-rating site actually means to me as a reference. In short: I watch a number of films and give each my own rating, then compute the correlation between my ratings and each site's scores. My own scale has 4 levels: good, okay, barely watchable, and bad, with values 2, 1, 0 and -1 respectively. In addition, before watching each film I predict my rating based on information found online, for comparison.
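The correlation step itself is a short computation; here is a sketch of the Pearson coefficient (the ratings below are made-up placeholders, not my actual data):

```javascript
// Pearson correlation coefficient between two equal-length score lists.
function pearson(xs, ys) {
  const n = xs.length;
  const mean = (a) => a.reduce((s, v) => s + v, 0) / n;
  const mx = mean(xs), my = mean(ys);
  let cov = 0, vx = 0, vy = 0;
  for (let i = 0; i < n; i++) {
    const dx = xs[i] - mx, dy = ys[i] - my;
    cov += dx * dy; vx += dx * dx; vy += dy * dy;
  }
  return cov / Math.sqrt(vx * vy);
}

// e.g. my 4-level ratings vs. a site's 10-point scores (placeholder numbers)
const mine = [2, 1, 0, -1, 1, 2];
const site = [8.1, 7.0, 5.5, 4.2, 7.4, 8.9];
console.log(pearson(mine, site).toFixed(3));
```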




- My own ratings were informed by whatever I could find online, which already includes the various scores and reviews.
- Most of the seventeen films I picked were ones predicted to be decent: only one was predicted -1 and one predicted 0; all the others were predicted 1 or 2. So this is not a uniform sample; in practice, the online scores had already filtered out most of the bad films for me. With the two films predicted -1 and 0 removed, the Pearson correlations look like this: