2021-02-14

Chasing a Wifi Ghost

It has been quite a while since a technical issue last left me this confused (and excited).

The problem: very slow download speed (~10Mbps) on my Windows laptop when connected to Wifi.

To make it a bit more puzzling:

- The upload speed is much better (~60Mbps).

- Sometimes the download speed may reach ~100Mbps for a few seconds, before quickly falling back to ~10Mbps.

Having suffered from it for a few months, I finally decided to take on this issue today.


Beginning of the Journey

As usual, there was some troubleshooting routine to go through. Also as usual, it didn't help much.

- Try other devices with the same Wifi
- Try to connect to other Wifi
- Try to connect to Ethernet
- Reset the Wifi router
- Run Windows troubleshooter

Well, eventually I did get some information. The download & upload speeds are actually normal for all my other devices, including Android/iOS phones, a MacBook Pro, etc.

This piece of info narrowed the root cause down to the laptop itself: the router must be working well, and I'm not being throttled by my ISP.


The Chase 1: Killer

My machine has a Killer wifi card, which had caused some issues before. To begin with, I tried to examine all the Killer components thoroughly:

- Try the latest Wifi driver
- Try some older Wifi drivers
- Install / Uninstall Killer software & services

Not surprisingly, nothing worked.

The Chase 2: Windows

Next, I tried to exhaust my knowledge of Windows networking, in order to examine all relevant aspects:
- Run sfc /scannow
- Reset TCP settings
- Reset the network stack (run ipconfig, netsh etc.)
- Disable the SmartByte service (which is actually not present on my machine)
- Disable WMM/QoS (which is actually not available on my router)
- Disable power saving mode for the adapter
- Alter the wireless mode (802.11) of the adapter
- Disable network discovery
- Disable the firewall
- Remove custom DNS servers
- Disable features (e.g. IPv6) of the adapter
- Disable other network adapters
- Disable Bluetooth
- Disable Windows TCP autotuning
- Disable the WAN miniport

Again, nothing worked.

The Ghost

After reading the Killer troubleshooting pages a few times (one, two), I finally noticed an instruction to set the "channel width" to 20MHz. This is relevant since I'm using 5GHz Wifi.

Having absolutely no expectation, I changed the setting in the router, and that's how I captured the ghost.
Now I finally have ~100Mbps download speed. Still not as fast as Ethernet, but good enough.

Summary

Previously the router had been configured to choose among 20/40/80MHz automatically, and according to my Wifi scanner it had been using 80MHz. If I understand correctly, changing to 20MHz helps because there are many other Wifi signals "nearby": both "physically close" and "on similar channel frequencies". This probably also explains why the download speed was unstable.

It remains unsolved why my Killer wifi card cannot handle the wider channel while other devices can.

Anyway, I'm happy to have met this cute new ghost, and to keep it together with its friends.

2020-05-28

Determine Perspective Lines With Off-page Vanishing Point



In perspective drawing, a vanishing point represents a group of parallel lines, in other words, a direction.

For any point on the paper, if we want a line towards the same direction (in the 3d space), we simply draw a line through it and the vanishing point.


But sometimes the vanishing point is too far away, such that it is outside the paper/canvas.

In this example, we have a point P and two perspective lines L1 and L2.

The vanishing point VP is naturally the intersection of L1 and L2. The task is to draw a line through P and VP, without having VP on the paper.


I am aware of a few traditional solutions:

1. Use extra pieces of paper such that we can extend L1 and L2 until we see VP.
2. Draw everything in a smaller scale, such that we can see both P and VP on the paper. Draw the line and scale everything back.
3. Draw a perspective grid using the Brewer Method.

#1 and #2 might be quite practical. #3 may not guarantee a solution, unless we can measure distances/proportions.

Below I'll describe a method I learned in high school. It is more of a math quiz than a practical drawing method, but it is quite fun.

To make it more complicated, there is an extra constraint for the drawer: you can only draw with an unmarked ruler. That is, you can draw a straight line through two given points (assume the ruler is long enough), but you cannot measure lengths or angles. You cannot find midpoints either (e.g. by folding the paper).

Here we go:


Step 1:

Draw two lines through P, cutting L1 and L2 at A1, A2, B1 and B2.

Step 2:

Draw two lines (A1, B2) and (A2, B1).
Find the intersection point Q.

Step 3:

Draw a line (Q, P), cutting L1 and L2 at A3 and B3 respectively.

Step 4:

Draw two lines (A1, B3) and (A3, B2).

Find the intersection P'.

Step 5:

Draw a line through P and P'.

This is the desired perspective line through P.
The correctness can be formally proved with some calculation.

On the other hand, if we view Q as the second vanishing point, this figure represents a perspective view of a 2d rectangle (A1, A2, B1, B2). The rectangle is cut into two rectangles by the line (A3, B3). Note that (A1, B2), (A3, B3) and (A2, B1) are parallel lines in the 3d space.

Now it is clear that P is the center of the rectangle (A1, A2, B1, B2) and P' is the center of the rectangle (A1, A3, B3, B2). Therefore the lines (A1, A2), (B1, B2) and (P, P') must all be parallel in the 3d space. In the perspective view they must all pass through a common vanishing point. This shows that the line PP' is the answer.
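The construction can also be checked numerically. Here is a small Node.js sketch with made-up coordinates (VP at the origin, L1: y = x, L2: y = -x, P = (3, 1)); it runs steps 1 to 5 and verifies that P, P' and VP are collinear:

```javascript
// Numerical check of the ruler-only construction above.
// Coordinates are made up for illustration: VP = (0,0), L1: y = x, L2: y = -x.

// Intersection of two infinite lines, each given by a pair of points.
function intersect([a, b], [c, d]) {
  const d1 = [b[0] - a[0], b[1] - a[1]];
  const d2 = [d[0] - c[0], d[1] - c[1]];
  const t = ((c[0] - a[0]) * d2[1] - (c[1] - a[1]) * d2[0]) /
            (d1[0] * d2[1] - d1[1] * d2[0]);
  return [a[0] + t * d1[0], a[1] + t * d1[1]];
}

const VP = [0, 0];
const L1 = [VP, [1, 1]];    // y = x, passes through VP
const L2 = [VP, [1, -1]];   // y = -x, passes through VP
const P  = [3, 1];

// Step 1: two arbitrary lines through P, cutting L1 and L2.
const A1 = intersect([P, [3, 0]], L1);  // along the vertical line x = 3
const A2 = intersect([P, [3, 0]], L2);
const B1 = intersect([P, [0, 1]], L1);  // along the horizontal line y = 1
const B2 = intersect([P, [0, 1]], L2);

// Step 2: Q is the intersection of (A1,B2) and (A2,B1).
const Q = intersect([A1, B2], [A2, B1]);

// Step 3: line (Q,P) cuts L1 at A3 and L2 at B3.
const A3 = intersect([Q, P], L1);
const B3 = intersect([Q, P], L2);

// Step 4: P' is the intersection of (A1,B3) and (A3,B2).
const Pp = intersect([A1, B3], [A3, B2]);

// Step 5: P, P' and VP should be collinear, i.e. the cross product
// of the direction vectors P->P' and P->VP should vanish.
const cross = (Pp[0] - P[0]) * (VP[1] - P[1]) - (Pp[1] - P[1]) * (VP[0] - P[0]);
console.log("collinear:", Math.abs(cross) < 1e-9);  // prints "collinear: true"
```

For these numbers the construction yields P' = (9, 3), and the line through P = (3, 1) and P' indeed passes through the origin.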

There are 2 special cases.


The first case is when Q is also far outside the paper. This may happen when P is near the center between L1 and L2.

To fix this, we can find a perspective line between L1 and L2, using the standard bisection technique.

Now that the new line is closer to P, we can apply the method above to L2 (or L1) and the new line.

If the new line is still too far from P, a few iterations of bisection should be enough.



The other special case is when P is not between L1 and L2.

In this case we may use the standard extension method to find a new perspective line that is on the other side of P. A few iterations might be needed.

Finally we can apply the method above with the new line and L1.

There are still other cases where this method won't work. For example, if L1 and L2 are close to the top and bottom edges of the paper, there is not much we can do. After all, this was only a math quiz in the first place.

2019-10-24

ChromeVFX Prototype

TLDR: ChromeVFX uses Chrome as an MLT filter

Demo



In the demo, I'm editing an mlt file with a WebVfx filter in Shotcut. Shotcut is connected to a Chrome instance via ChromeVFX.

As shown in the video, every frame is directly rendered in Chrome and reflected in Shotcut. I can also modify the web page directly in Chrome (e.g. zooming in/out).

In this setup, Shotcut is running in a Linux VM and Chrome is running on a Windows host. So technically this is already a remote Chrome. A local or headless Chrome should also work in theory.

Background


I've been using MLT and WebVfx for a while; together they allow me to render various things using web technologies.

WebVfx internally uses QtWebKit to render HTML/JS. QtWebKit naturally uses Qt to enable communication between C++ and Javascript; it is quite easy to pass messages/events back and forth with the Qt language bindings.

However, QtWebKit is not the ideal choice. It has been officially removed from Qt 5.5, although we can still compile it from source. It uses an old version of WebKit with some bugs and missing HTML5 features. @annulen has been making efforts to bring it back up to date, but it doesn't seem ready yet. Besides, WebKit doesn't include V8.

There have been discussions about porting WebVfx to QtWebEngine or the Chromium Embedded Framework, which are Qt and C++ bindings of Chromium. In theory both should work, but in practice it's not that easy. I've been playing with both and didn't get very far. Both frameworks provide "raw access" to Chromium, which makes them very powerful, but it also means we have to handle things like message loops and coordination among various processes. I just got lost in the docs and code.

Recently I learned about the Chrome DevTools Protocol, and decided to give it a try.

Motivation


So my idea is to use Chrome as an MLT filter. Whenever MLT requests a frame, we pass all the information to Chrome, let it render, and pass the rendered image back to MLT. There are several benefits to doing so:

  • The plugin code no longer depends on Qt or Chromium. The logic is greatly simplified compared with the current version of WebVfx. It'll be very easy to maintain and distribute the codebase.
  • Chrome provides very good (if not the best) support for the latest web standards, and high performance (e.g. V8, hardware acceleration). It is available for most platforms.
  • Having a running Chrome by the side of Shotcut is probably the ideal configuration for debugging WebVfx.

ChromeVFX Overview


As mentioned above, the goal is to connect MLT and Chrome. For every render request from MLT, we need to pass the information to Chrome, let it render, and pass the rendered image back.

Chrome DevTools Protocol and puppeteer


This is a "backdoor protocol" in Chrome. If a Chrome instance is running with remote debugging enabled, a client may control and inspect Chrome remotely. The protocol provides most (if not all) features of the built-in developer tools.

The protocol was designed for debugging, hacking and automated testing etc. The official high-level client is called puppeteer, which is written in Node.js.

Connecting MLT with puppeteer


MLT is written in C++ and puppeteer is written in Node.js. To connect them, I used Boost.Interprocess and wrote a wrapper as a Node.js C++ addon. Boost.Interprocess provides shared memory regions and a message_queue, which are very easy to use.

Since the Chrome DevTools Protocol is based on JSON-RPC over WebSocket, I had originally planned to talk to Chrome directly from C++. However, after some research I realized that it would not be easy to handle all the DevTools details.

In the end, this C++/Node.js channel was surprisingly easy to implement.

Important Code Snippets


Render Server

It's a Node.js script that connects to Chrome via puppeteer. It forwards requests from MLT to Chrome and passes screenshots back the other way. The event loop looks like this:
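(The original snippet is not reproduced here; the following is a minimal sketch of what such a loop could look like. The `ipc` method names `receiveRequest`/`sendImage` and the `webvfx.render` call are assumptions for illustration, not the actual module's API.)

```javascript
// Sketch of the render server's event loop (illustrative, not the original code).
// `page` is a puppeteer Page; `ipc` stands for the Boost.Interprocess wrapper,
// with assumed methods receiveRequest() (blocking) and sendImage().
async function serveFrames(page, ipc) {
  for (;;) {
    const req = ipc.receiveRequest();          // block until MLT asks for a frame
    if (req === null) break;                   // illustrative shutdown sentinel
    // Ask the page to draw the frame for the requested time.
    await page.evaluate((t) => window.webvfx.render(t), req.time);
    // Grab the rendered result and hand it back to the MLT side.
    const png = await page.screenshot({ type: "png" });
    ipc.sendImage(png);
  }
}

// In the real script, `page` would come from puppeteer, e.g.:
//   const browser = await puppeteer.connect({ browserURL: "http://127.0.0.1:9222" });
//   const page = (await browser.pages())[0];
```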

IPC for Node.js

The ipc module mentioned above is a wrapper around Boost.Interprocess, which looks like this:

MLT filter

Finally, this is the modified EffectsImpl::render from WebVfx, now greatly simplified:

Discussions

Chrome Instance

In the demo I'm using an already-running remote Chrome. puppeteer can also start a new Chrome/Chromium on demand. Headless Chrome/Chromium should work as well.

Performance

I had always been worried about performance: there are many layers between Chrome and MLT, including the network, SSL, IPC and especially PNG encoding and decoding. However, it appears fine in my demo, even with a remote Chrome. Of course this can never be as fast as a native CEF integration, but in my case the bottleneck is usually the rendering itself, which involves heavy JS code and 3d rendering. So it is already worth it to move from QtWebKit to Chrome.

WebVfx Interface

In the prototype I have implemented only a minimal webvfx interface, which barely makes the demo work. Most features are not yet available:
  • passing parameters (as defined in mlt xml)
  • passing images (existing frame to be processed by the filter)
  • multiple running filters (currently the render server allows only one client)
All of them should be easy to implement, with a better defined IPC protocol.

On the other hand, the WebVfx protocol relies on a global webvfx JS object, which is used to register the render function and to signal to MLT that the page has been initialized.
In my demo I used some hacky code via console.log(). I think it should be easy to inject some JS code/object via Node.js, but I'm not sure whether this can be done before a page is loaded.

In the worst case, we may introduce a webvfx js library that each webvfx page should include.
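For what it's worth, puppeteer does expose a hook that runs a script before any of the page's own scripts (Page.evaluateOnNewDocument), so pre-injection should be possible. A sketch, where the renderRequested/readyRender shape of the stub is my assumption about the WebVfx-style interface rather than its real API:

```javascript
// Sketch: inject a global `webvfx` stub before any page script runs.
// The renderRequested/readyRender names are assumptions for illustration.
async function injectWebvfx(page) {
  await page.evaluateOnNewDocument(() => {
    window.webvfx = {
      _render: null,
      // The page registers its render callback here.
      renderRequested(cb) { this._render = cb; },
      // The page signals that it has finished initializing.
      readyRender(ok) { console.log("webvfx-ready:" + ok); },
    };
  });
}
```

The injected stub would then relay both directions over the DevTools connection instead of console.log().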

Conclusion


This prototype works much better than I had expected. It demonstrates the possibility and potential of using Chrome as an MLT plugin. With more effort, ChromeVFX may actually become a useful MLT plugin.

Of course a proper CEF integration might achieve the same thing with better performance, but it may or may not be worth it considering the cost of development and maintenance.

Links:

2019-10-20

On Movie Ratings

Unlike games, books, or most other products, there is practically no such thing as returning a movie. To avoid landmines, predicting whether a movie will be good or bad becomes very important. Here "good or bad" does not mean artistic merit, social impact, or production quality, but rather fit with a single viewer's taste. For example, if I don't enjoy action films, then no matter how well an action movie is made, I won't like it.

I don't know whether it's because the barrier to production has dropped, but I feel there are far too many movies every year now; unfortunately, what stays constant is the number of good ones, not the proportion. The game industry seems to show a similar pattern.

My attitude toward movies (including TV series) is to skip new releases. If, months or years after release, people still remember a film and bring it up online ("isn't this that classic scene from movie X?"), only then do I consider it probably worthwhile and go do further research. Before and during a theatrical run there isn't much material to judge by; the trailer is probably the main source, but I think a trailer can only roughly establish the genre and shouldn't be relied on for much else. One of my landmine experiences: I watched a roughly 10-minute trailer, decided an action movie looked good, went to see it, and discovered that all the best action scenes were in the trailer. Did the trailer lie? No. Was I fooled? Absolutely.

I also often read movie synopses, even though they are mostly written for snark. Many movies actually become quite entertaining when compressed into a story of under 15 minutes. For a very small number of movies I knew the plot, knew the ending, still went to watch, and loved them anyway, e.g. カメラを止めるな!. For most movies, though, once they passed through the synopsis filter I simply lost interest.

Ratings are another roughly useful filter, e.g. Douban, which is quite influential in China. "Douban score X.Y" is probably the most effective and most compact movie evaluation outside of Douban itself. I have always doubted how much ratings mean, on the grounds that Douban scores come from "people willing to rate movies on Douban" rather than from everyone (say, people sampled randomly on the street). "Being willing to rate on a website" largely depends on personality and on how the movie made you feel. And I have never been one of those people.

After a discussion with a friend last year, though, I decided I could run a quantitative experiment to see how useful each rating system is to me. Briefly: I watch a number of movies, score them myself, then compute the correlation between my scores and each rating system. My own scale has 4 levels: good, okay, barely watchable, and bad, with values 2, 1, 0 and -1. In addition, before watching each movie I predict my own score based on information found online, as a baseline for comparison.
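The correlation computation itself is tiny; here is a sketch in JavaScript, with made-up numbers rather than the actual data from the experiment:

```javascript
// Pearson correlation coefficient between two equal-length samples.
function pearson(xs, ys) {
  const n = xs.length;
  const mean = (a) => a.reduce((s, v) => s + v, 0) / n;
  const mx = mean(xs), my = mean(ys);
  let num = 0, dx2 = 0, dy2 = 0;
  for (let i = 0; i < n; i++) {
    const dx = xs[i] - mx, dy = ys[i] - my;
    num += dx * dy;   // covariance term
    dx2 += dx * dx;   // variance of xs
    dy2 += dy * dy;   // variance of ys
  }
  return num / Math.sqrt(dx2 * dy2);
}

// My 4-level scores (2, 1, 0, -1) against hypothetical site scores out of 10:
const r = pearson([2, 1, 0, -1], [8.1, 7.4, 6.9, 5.2]);
console.log(r);  // close to 1 for these made-up numbers
```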

Below are the statistics from seventeen movies, nine foreign and eight domestic. The chart shows the Pearson correlation coefficient between each rating system and my actual experience; higher means more correlated:



As you can see, none of the correlations are great. The highest is my own prediction, and Douban is quite a bit higher than the other systems. Most interestingly, Metacritic's correlation is almost 0, or even negative.

Before concluding that "my own ratings are more reliable than any system", I gave it some more thought:

- My own ratings took into account all kinds of information I found online, which already includes the various rating systems and reviews

- Most of the seventeen movies I picked were ones predicted to be decent: only one was predicted -1 and one predicted 0; all the rest were predicted 1 or 2. So this is not a uniform sample; in effect, the online ratings had already filtered out most of the bad movies for me. After removing the two movies predicted -1 and 0, the Pearson correlations look like this:


The values are still not high, but many of the rating systems now do better than my own predictions; some are negatively correlated, yet even those could be put to use.

So the only conclusion I can draw is that in the high-score region (i.e. when my initial judgment is that a movie looks watchable), online ratings are marginally useful, but not very. In theory, low online scores could filter out the bad movies for me, but my experiment cannot confirm that.

I suspect the same applies to games; maybe I'll run another experiment to verify that some day.