<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[WizardTales]]></title><description><![CDATA[Writing about general technical discussions, security, administration, coding and more.]]></description><link>https://blog.wizardtales.com/</link><generator>Ghost 0.11</generator><lastBuildDate>Tue, 07 Apr 2026 11:54:52 GMT</lastBuildDate><atom:link href="https://blog.wizardtales.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[A captchas tale]]></title><description><![CDATA[<p><img src="https://blog.wizardtales.com/content/images/2015/12/captcha.png#thumb" alt="captcha"></p>

<p>Did you ever think about implementing captchas for your website? Did you consider using one of the big providers, like Google and co.?</p>

<p>Before I continue, let me clarify that I'm not going to explain how to implement your own captcha, but rather to think a bit more</p>]]></description><link>https://blog.wizardtales.com/2015/12/18/a-captchas-tale/</link><guid isPermaLink="false">e5c46ec4-72a9-4702-930e-1cff21245d6e</guid><category><![CDATA[security]]></category><category><![CDATA[captcha]]></category><dc:creator><![CDATA[Tobias Gurtzick]]></dc:creator><pubDate>Fri, 18 Dec 2015 11:13:00 GMT</pubDate><media:content url="https://blog.wizardtales.com/content/images/2015/12/captcha.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.wizardtales.com/content/images/2015/12/captcha.png" alt="A captchas tale"><p><img src="https://blog.wizardtales.com/content/images/2015/12/captcha.png#thumb" alt="A captchas tale"></p>

<p>Did you ever think about implementing captchas for your website? Did you consider using one of the big providers, like Google and co.?</p>

<p>Before I continue, let me clarify that I'm not going to explain how to implement your own captcha, but rather to think a bit more about captchas in general and how effective they are at their main task: preventing automation.</p>

<h1 id="deathbycaptchascaptchasolvers">DeathByCaptchasCaptchaSolvers</h1>

<p>Let's start with a really strange idea. This one is kind of funny, but something that might really exist in production. As you might have noticed, the headline is intended to sound like a well-known service that offers to circumvent captchas. They use things like advanced OCR and probably some people typing in captchas all day. However, what if I told you that just about anyone can start their own captcha solving service on the interwebz, without having any knowledge about OCR and all that other stuff at all?</p>

<p>Let's imagine building a service like this.</p>

<h2 id="yourfirstowncaptchasolvingservice">Your first own Captcha Solving Service</h2>

<p>Ok, so what do we need to solve the simplest captchas out there? Note that I will leave out other kinds of captchas, like those fancy puzzles you probably know about, and stick to the simple "pick the cats" or "type these noisy words or letters" variants.</p>

<h4 id="thebackend">The Backend:</h4>

<ul>
<li>An API to submit a captcha</li>
<li>An API to report whether the solved captcha was accepted</li>
<li>Analytics and some simple techniques to detect abuse of the success-reporting API endpoint</li>
</ul>

<h4 id="thefrontend">The frontend:</h4>

<ul>
<li>Ability to take a screenshot of the displayed captcha</li>
<li>The ability to fill out and send the solved captcha</li>
<li>Maybe also simulating mouse clicks</li>
</ul>

<p>Now that we have that short list of things we need, let's go into a bit more detail. I won't waste too many words on the frontend, as it is really straightforward, but here comes the backend.</p>

<p>First of all, when we talk about the backend, the customer clearly does not want to pay for a captcha that has not been solved, so we provide an API that accepts success and failure messages for each solved captcha. Now we have a more satisfied customer, but hey, isn't that just too easy to manipulate? In fact, yes. But it is just as easy to fix, and I will come back to this later.</p>

<p>Next we would have a customer-submitted captcha in the backend, but how do we solve it?</p>

<p>We have some possibilities, let's count the traditional ones:</p>

<ul>
<li>Solve it on your own</li>
<li>Employ some people to solve them</li>
<li>Use advanced OCR</li>
</ul>

<p>This is pretty much what nearly all of the solving services do, but imagine doing the following instead: <br>
feed the captchas submitted by your customers to the users of some other service of yours that requires captcha solving. <br>
Yes, you heard right: I'm talking about solving captchas by requiring your users to solve the very captchas your customers asked you to solve.</p>

<p>You have:</p>

<ul>
<li>No employees</li>
<li>You do practically nothing yourself</li>
<li>You don't need any OCR</li>
</ul>

<p>All you need is a big service with a huge user base that requires its users to pass some captchas, for example to register for some kind of event. Or in short: enough traffic to handle all incoming requests.</p>

<p>Ok, but how do we know whether a captcha was really solved? Let's go back to the API endpoint that marks a captcha as solved or not; it does not get any more complicated than this. <br>
The customer says yes, it was solved, so the user is told yes, it was solved. To prevent abuse by the captcha solvers, you simply let several of your users solve the same captcha, so that a single user entering something wrong does not matter: we check all the submitted solutions and pre-select the one that was entered most often. <br>
If the customer still marks the captcha as not solved, all the users get told the captcha was wrong, even if they were right. This unsolved captcha now goes into an analysis DB, which you can inspect later to identify abuse by the customer. While this DB helps after the fact, you could reduce the risk up front with a simple rule: if the customer marks 5 captchas in a row as unsolved, he still pays for 1 of them, and if he continues to mark 20% of all captchas as unsolved, he is blocked until you have checked the mentioned DB. This way you would have at least some kind of insurance, but as this is just an example, we leave it at that without thinking it through any further.</p>
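<p>The "entered most often" rule above can be sketched in a few lines. This is a hypothetical helper, not any real service's code; the majority threshold is an illustrative assumption:</p>

```javascript
// Pick the answer the crowd of unwitting solvers entered most often,
// before forwarding it to the customer. Requiring a simple majority
// means a single wrong entry can never win.
function pickConsensus(answers) {
  const counts = new Map();
  for (const a of answers) {
    counts.set(a, (counts.get(a) || 0) + 1);
  }
  let best = null;
  let bestCount = 0;
  for (const [answer, count] of counts) {
    if (count > bestCount) {
      best = answer;
      bestCount = count;
    }
  }
  // Only accept an answer backed by more than half of the solvers.
  return bestCount > answers.length / 2 ? best : null;
}
```

<p>With no majority, the service would simply hand the captcha to more users until one answer dominates.</p>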

<p>That's basically it; you're done. To be fair, this is a really unusual way to think about it, and anyone who has enough users to pull this off obviously has other priorities. But the idea was funny enough for me to talk and think about, and it also gives a sense of how easy (or hard) it is to circumvent a captcha.</p>

<h1 id="securecaptchas">Secure Captchas</h1>

<p>If you asked me whether I use any of the known captcha services, I would answer yes. While they might not be the ultimate weapon against spammers, bots and other forms of unwanted automation, as of today they are quite effective at preventing them, or at least slowing them down.</p>

<p>But if you ask me whether I think captchas have any future, I would clearly state: no, they do not. I might be wrong, but it is not only the strange kind of idea above that causes problems for captchas. OCR is also getting better and better, and as soon as we reach the point where OCR is better at solving captchas than we humans are, something has gone terribly wrong. </p>

<h2 id="proofofwork">Proof of Work</h2>

<p>I think a more effective way is to utilize proof-of-work algorithms, combined with some kind of rate limiting. We're not really limiting the requests the user makes, but we make the user do more work if he is asking for an unusual amount of PoW requests. This not only slows down any automation drastically, it also increases its cost. The cost would still need to be low enough, otherwise big organizations, where many people share the same IP, would get problems. Some may argue that IPv6 would help here. It certainly won't; it just means we watch a <em>/64</em> subnet instead of a single IP, since practically everyone gets a <em>/64</em> block and it would otherwise be too easy to circumvent the rising difficulty.</p>

<p>There are of course many more edge cases to handle with this strategy, like what we do with computers from the stone age that cannot solve even the fastest of PoWs, but it would be one way to solve this problem.</p>

<h2 id="otherthoughts">Other thoughts</h2>

<p>While the PoW described above would still keep the user anonymous, another strategy would be to exchange data with the client and verify through signatures that we're really communicating with user x. This would be very efficient in practice, as users could simply be banned in case of abuse, but it is not an idea I would ever suggest using. It makes the user completely traceable and is a foundation for abuse by governments and <em>corrupt spy agencies</em>, like the NSA.</p>

<h1 id="aretheyanygood">Are they any good?</h1>

<p>Yes, totally. Let's talk about...</p>

<h2 id="whatcaptchasarereallyusefulfor">what captchas are really useful for</h2>

<p>Captchas do something! </p>

<p>They force the user to take an action to process a desired request. While many think that a checkbox is enough, a captcha really verifies that the user has noticed that something on this site is forcing him to solve a captcha. He actually needs to think about the whole thing and maybe, though I doubt it, he won't make a decision he does not want to. <br>
One might also say it would be more fitting to let the user type in the sentence: "Yes, I really want to give you all my money!". Maybe, but in the end the user probably does not think any more about this than about checking a checkbox.</p>]]></content:encoded></item><item><title><![CDATA[A tale from reverse proxies and traffic encryption]]></title><description><![CDATA[<p><img src="https://blog.wizardtales.com/content/images/2015/12/nodejs-1440x900-1.png#thumb" alt="proxy1"></p>

<h1 id="defeatingissues">Defeating Issues</h1>

<p>Reverse proxies, such a simple way to accomplish various things. <br>
A cheap solution when it comes to handling DDoS traffic, as only a few can afford to finance a whole network to mitigate traffic. Unless you rely on OVH, whose DDoS protection doesn't use a reverse proxy and</p>]]></description><link>https://blog.wizardtales.com/2015/11/24/the-caveats-of-reverse-proxies/</link><guid isPermaLink="false">22b90609-cd49-4d26-84fd-d1ccc564e79c</guid><category><![CDATA[network]]></category><category><![CDATA[node]]></category><category><![CDATA[github]]></category><category><![CDATA[oss]]></category><dc:creator><![CDATA[Tobias Gurtzick]]></dc:creator><pubDate>Tue, 24 Nov 2015 20:59:00 GMT</pubDate><media:content url="https://blog.wizardtales.com/content/images/2015/12/nodejs-1440x900-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.wizardtales.com/content/images/2015/12/nodejs-1440x900-1.png" alt="A tale from reverse proxies and traffic encryption"><p><img src="https://blog.wizardtales.com/content/images/2015/12/nodejs-1440x900-1.png#thumb" alt="A tale from reverse proxies and traffic encryption"></p>

<h1 id="defeatingissues">Defeating Issues</h1>

<p>Reverse proxies, such a simple way to accomplish various things. <br>
A cheap solution when it comes to handling DDoS traffic, as only a few can afford to finance a whole network to mitigate traffic. Unless you rely on OVH, whose DDoS protection doesn't use a reverse proxy, sounds too good to be true, but actually really works and is <em>free</em>, you're going to use a reverse proxy which mitigates the traffic and forwards the <em>clean traffic</em> to your backend server(s).</p>

<p>But that's not all; we can also achieve multiple other things, for example:</p>

<ul>
<li>Caching</li>
<li>Load balancing</li>
<li>Routing</li>
</ul>

<p>Or we use Cloudflare, which does similar things for us: it caches our content, acts as a CDN and delivers content faster to the end user. And of course Cloudflare is a reverse proxy, too.</p>

<h3 id="thelossofinformation">The loss of Information</h3>

<p>While reverse proxies provide a bunch of advantages, they come with one great disadvantage: we lose one essential piece of information about the client, <em>his IP address</em>.</p>

<p>Ok, you'll say now: well, no problem, just tell the reverse proxy, in this example NGINX, to attach a header like <code>x-forwarded-for</code> with the <em>user's IP address</em> to the HTTP headers and we're done, aren't we?</p>

<p>Well, actually you are, as long as you don't need to use this information to load balance traffic <strong>behind</strong> your reverse proxy. Because then this becomes a real problem!</p>

<p>Ok, then how do we solve this problem? <br>
Basically we do a really simple thing. Where our balancers previously relied on the IP provided in the IP packet header as 4 <code>unsigned chars</code> (4x8 bit), we now rely on an IP provided at layer 7, in this case included in the HTTP headers. Take a look at <a href="http://en.wikipedia.org/wiki/IPv4#Header">Wikipedia</a> for an overview of the IP headers. <br>
What we need, though, is a parser for the HTTP headers instead. This is quite simple: we receive the packet, extract the information from it, balance over this information and resend the packet to the balanced target. This is easy to accomplish and has only a small overhead. We get an even smaller overhead if we skip the whole <em>HTTP stack</em> and just parse the raw packet to extract the information we want.</p>
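<p>Skipping the HTTP stack and scanning the raw bytes might look roughly like this. A sketch, assuming the headers always arrive complete in the first packet of the connection:</p>

```javascript
// Pull the client IP out of an X-Forwarded-For header in a raw
// request buffer, without running a full HTTP parser.
function extractForwardedFor(packet) {
  const text = packet.toString('latin1');
  const match = /^x-forwarded-for:\s*([^\r\n]+)/im.exec(text);
  if (!match) return null;
  // The header may carry a chain "client, proxy1, proxy2";
  // the left-most entry is the original client.
  return match[1].split(',')[0].trim();
}
```

<p>A production balancer would also have to handle headers split across packets; the sketch ignores that on purpose.</p>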

<p>There are a couple of solutions which do exactly this. For NGINX there is, from version <strong>1.7.2</strong> upwards, a built-in way to do this:</p>

<ul>
<li><a href="http://nginx.org/en/docs/http/ngx_http_upstream_module.html#hash">http upstream module# hash balancing</a></li>
</ul>

<p>This enables us to use any information we can access in NGINX, which is far more than just information from the HTTP headers; it could be anything in any variable accessible in NGINX. <strong>Before</strong> version <strong>1.7.2</strong> we only had ip_hash available, which uses the IP from the IP packet. But there is actually a module for older versions:</p>

<ul>
<li><a href="http://wiki.nginx.org/HttpUpstreamRequestHashModule">httpUpstreamRequestHashModule</a></li>
</ul>
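<p>For the built-in approach on 1.7.2 and newer, such an upstream block might look like this; the backend addresses are placeholders:</p>

```
upstream backend {
    # Balance on the client IP passed along by the proxy in front
    # of us, not on the proxy's own address (NGINX >= 1.7.2).
    hash $http_x_forwarded_for consistent;
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}
```

<p>The <code>consistent</code> parameter keeps most clients on the same backend even when servers are added or removed.</p>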

<p>Note that these NGINX solutions bring up the whole HTTP stack and are thus a little slower than if you searched only for the information you need. But you have to decide for yourself how time-critical your application is and how much time you are willing to lose before anything is actually processed by the application itself <strong>behind</strong> these proxies.</p>

<p>Or, if you don't want to manage the balancing via NGINX but, for example, in your Node.JS application, which was in fact the inducement to write this entry as some interesting questions came up, you would use modules like <a href="https://github.com/indutny/sticky-session">sticky-session</a> to hash-balance all your connections via the IP between the different forks (workers). <br>
Actually, <a href="https://github.com/indutny/sticky-session">sticky-session</a> doesn't yet provide the ability to use the information from the HTTP header instead, but there is an open PR from me which is going to be merged somewhat soon. This pull request also contains the mentioned questions that were raised; take a look here if you're interested: <br>
<a href="https://github.com/indutny/sticky-session/pull/17">https://github.com/indutny/sticky-session/pull/17</a></p>
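<p>The core idea of this kind of sticky hash balancing can be sketched as follows; the djb2-style hash is an illustrative stand-in, not the function sticky-session actually uses:</p>

```javascript
// Derive a stable worker index from the client address, so the
// same client always lands on the same fork.
function workerFor(ip, workerCount) {
  let hash = 5381;
  for (let i = 0; i < ip.length; i++) {
    // Classic djb2 variant: hash * 33 XOR next byte, kept unsigned.
    hash = ((hash * 33) ^ ip.charCodeAt(i)) >>> 0;
  }
  return hash % workerCount;
}
```

<p>Because the mapping is a pure function of the address, no shared state between forks is needed to keep sessions sticky.</p>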

<h3 id="otherprotocols">Other protocols</h3>

<p>Ok, now to finally answer the questions that were raised in the PR. Let's start with the following:</p>

<blockquote>
  <p>BSSolo - Thanks, I can see now that you have also replaced the HTTP Server insance in the master process (as in the current version of this module) with a TCP server. That would certainly help with supporting other protocols such as web socket. Have you tested your header parser with protocols other than HTTP, or is it unnecessary since Web socket connections are initiated by HTTP upgrade requests?</p>
</blockquote>

<p>Yes, it is possible with any protocol, as long as you provide the information in the initial packet of the stream. <br>
There are many ways to do this; we'll go through these two as an explanation:</p>

<ol>
<li><p>The first one: specify a general header, like the HTTP headers, in your protocol definition, which is always included in the initial packet. Then we can parse the information from these headers.</p></li>
<li><p>We wrap the original initial packet and add this information at the very beginning. Now we just parse this information as usual and resend the packet from the offset where this information is no longer contained, and the original packet is restored. In <code>C</code>, we would just copy our <em>pointer</em> and increment the new one by 4 (4 bytes would represent an IPv4 address). In Node.JS this is similar: we would drop the first 4 bytes, probably by calling <code>Buffer</code>'s <code>slice</code> method to get a view starting from the defined offset.</p></li>
</ol>
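<p>The second variant could be sketched like this in Node.JS; the helper names are illustrative, and it handles IPv4 only:</p>

```javascript
// Balancer side: prepend the client's IPv4 address as 4 raw bytes.
function wrapPacket(ip, packet) {
  const prefix = Buffer.from(ip.split('.').map(Number));
  return Buffer.concat([prefix, packet]);
}

// Fork side: read the 4 address bytes and restore the original
// packet. slice() returns a view onto the same memory, starting
// right after the address bytes, so nothing is copied.
function unwrapPacket(wrapped) {
  const ip = Array.from(wrapped.slice(0, 4)).join('.');
  return { ip, packet: wrapped.slice(4) };
}
```

<p>Only the very first packet of a stream needs this treatment; everything afterwards flows through untouched.</p>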

<p>Referring to this, the next question is the really interesting one.</p>

<h3 id="ssl">SSL</h3>

<blockquote>
  <p>Fedor Indutny - how will all of these work with, for example, SPDY?</p>
</blockquote>

<p>In short: generally it would work, but not with the current code. The reasons are simple but require some explanation.</p>

<p>Ok, let's start with <em>SPDY</em>. Or better, let's start with <em>HTTP/2</em>, in whose favour Google has dropped <em>SPDY</em>. <br>
There are a couple of differences which do not allow us to do exactly the same as I have done in this module, which is pretty much exactly what I described before. The differences:</p>

<p>First of all, HTTP/2 is mostly used together with SSL. Some even describe HTTP/2 as SSL-only. Of course it is not; that would be a problem in some edge cases I will describe later. For reference, this information comes from this mailing list: <br>
<a href="http://lists.w3.org/Archives/Public/ietf-http-wg/2013OctDec/0625.html">http://lists.w3.org/Archives/Public/ietf-http-wg/2013OctDec/0625.html</a></p>

<p>Well, okay, so we have SSL. That means we have one issue already: we can't parse the incoming packet, we need to decrypt it first. But that is not all. Unless we're going to create another reverse proxy or use a specialized component, we also need to decrypt this packet twice. When we transfer the socket from one fork to another, this fork doesn't know anything about what happened before. Or better, it shouldn't know about it and shouldn't have to care about it. <br>
You may have already noticed that we probably have some problems here!</p>

<p>The CPU cycles wasted decrypting the same packet twice are one of these problems. But the real problem is another one: we need to reannounce the packet to the new fork to which it should be balanced. We also need to share the TLS session information to enable the fork to actually decrypt the packet and continue communicating with the client. <br>
Another way would be to make the fork aware of the initial packet, which is not encrypted, and then continue communicating encrypted; however, the fork still needs the TLS session to do so. Ok, assuming we have managed to share the TLS session and enabled the fork to understand the received initial packet, in whatever way, another task is waiting for us!</p>

<h3 id="http2">HTTP/2</h3>

<p><em>HTTP/2</em> is awesome, as <em>SPDY</em> is. One can see <em>HTTP/2</em> as the successor to <em>SPDY</em>; as already mentioned, Google has dropped <em>SPDY</em> in favour of <em>HTTP/2</em>, and <em>SPDY</em> was also the basis of the work on <em>HTTP/2</em>, which is still in progress. It makes the web faster not only by utilizing streams and muxed streams like <em>SPDY</em> does, but also by reducing the amount of data transferred in every packet by compressing the headers, which <em>SPDY</em> did via the <code>DEFLATE</code> compression algorithm; HTTP/2 comes with <a href="https://tools.ietf.org/html/draft-ietf-httpbis-header-compression-12">HPACK</a>. Naturally this doesn't really speed anything up in terms of computing, but we need to transfer less data, and in the end the internet connection is still the slowest component. Especially low-bandwidth devices suffer from huge headers, which increase the time until they have downloaded the complete response and thus actually see the website they were visiting. So <em>HTTP/2</em> again does one thing that forces us to spend some cycles decompressing the header information before we can finally parse it.
While <em>HTTP/2</em> also offers, via <code>Literal Header Field Never Indexed</code>, a way to avoid compression of specific header fields, so that we might be able to skip decompressing the information, we assume for now that we need to decompress it.</p>

<p>So finally we're able to read the information we want, be it the IP address or any other information we get from the header. And again, our forks will decompress the header information a second time, just as they do for SSL. Everything explained for SSL applies here, too.</p>

<p>So far so good, but all this wasn't the reason why I started this blog post. The questions inspired me to think about how I treat network connections and when I treat them as safe. Which takes us to the next topic.</p>

<h3 id="whendoestrafficneedstobeencrypted">When does traffic need to be encrypted</h3>

<p>To answer this, we need to ask some questions:</p>

<ul>
<li>Do we transmit our data over an insecure channel?</li>
<li>Do we even transmit our data over the network, or do we talk over unix domain sockets?</li>
<li>Are we already communicating over an encrypted VPN?</li>
</ul>

<p>So, to be able to decide whether we need to encrypt our data after it has entered our DC, or our infrastructure within a DC, we should first have a secure setup; thus we need to make sure we segment and isolate our network. First of all, never use hubs, though I don't assume anyone outside of his home would ever get the idea to use a hub instead of a switch. A switch at least makes it harder to sniff within the network; you should also make sure to install ARP spoofing detection. But I'm drifting into details...</p>

<p>To make it short: we need to check whether our channel is already secure, because there is an encrypted VPN, or whether even the wire itself is secure and we don't need to encrypt in this segment again.</p>

<p>In fact, it's hard to make the right decision. Either we transfer data unencrypted within a secure area, in favour of performance.</p>

<p>Or... <br>
<img src="https://blog.wizardtales.com/content/images/2015/04/encrypt-all-the-things.jpg" alt="A tale from reverse proxies and traffic encryption"></p>

<p>Fact is, we can't guarantee that current encryption algorithms are never going to break. Encrypting everything is surely the most secure approach to ensure the security of our data. But encrypting data is costly, and that means it makes sense to think twice about whether it is really valuable to encrypt data within an already <strong>theoretically</strong> secure network. Because if someone breaks in and steals data, he probably targets the databases before his intrusion gets detected, and doesn't try to sniff within a switched network where he must use ARP spoofing, which is clearly not what he wants if he doesn't want attention from the sysadmins. <br>
The only things attackers could usefully sniff on the network anyway may be passwords, which can be protected in other ways (maybe I'll write a blog post about this too), or highly sensitive data like payment details. But again, the databases are most probably going to be the first target.</p>

<h3 id="layer4datathathelpsbalancingevenmore">Layer 4 - Data that helps balancing even more</h3>

<p>Apart from all the stuff and explanations before, let us finally talk about why using information from OSI layer 4 can help us balance even more effectively.</p>

<p><strong>To note:</strong> I omit all layers above layer 4 for now; most of the information we read belongs to layer 7, but I only care about having the raw TCP connection available, not about protocols like HTTP.</p>

<p>So let's assume we have a network architecture where traffic from the internet first passes our hardware load balancer after entering the DC. The load balancer then balances traffic between our frontends. Let's say we have two of them; our hardware balancer works on layer 2 and thus preserves the real IP. It could hash-balance via the source IP, in which case we would need to adjust the balancing on the frontend nodes and make them aware of this. But for now we let it balance all requests evenly, without caring about sticky sessions. We would reach a similar behavior with different A/AAAA record entries linked to our domain, but the distribution wouldn't really be even anymore.</p>

<p><img src="https://blog.wizardtales.com/content/images/2015/04/exampleblog-1.jpg" alt="A tale from reverse proxies and traffic encryption"></p>

<p>So imagine this crude diagram represents our network. The green arrows mark the safe paths, where we could send data in plain text. The frontends are connected with 4 different wires to the 4 segments of app servers, which are again switched. So both servers have access to all app servers, and we're going to serve static content directly from the frontends, at least the assets we don't already serve from a CDN, for example user avatars and other files that might be picked up by the CDN later. Our dynamic content gets delivered by our app servers, for which our frontend servers do the load balancing. <br>
So we're running NGINX on our frontends, but how do we balance this traffic? If you access the other frontend, its balancing should behave in a similar way: if both are configured identically, the hash calculation should also behave identically. But let's assume we want to add new servers to the structure dynamically. This would cause problems, as plain hash balancing wouldn't fit here anymore.</p>

<p>Balancing via session cookies: what we can do now is start balancing our application by letting NGINX set a cookie for the user, which it can use the next time to identify which backend node is the one the user needs. This of course takes a bit more to set up: we do not want an attacker to be able to abuse these cookies, so the user should not be able to easily modify the cookie to transfer himself to one particular server. Possibly the easiest solution is to use encryption on those cookies. <br>
Another solution would be a shared storage, probably an in-memory storage like memcached, where information about balancing targets could be stored.</p>

<p>Finally, we would now have working balancing and the capability to scale on demand, provided the application is explicitly built to save its sessions to shared storage. We also transfer everything behind the frontends in plain text and decrypt and encrypt the traffic from and towards the internet at the frontends. Thus we have fast communication between backend servers and frontends and, in the end, a faster response time when communicating with our users.</p>

<h1 id="conclusion">Conclusion</h1>

<p>We ensure the security of our network by heavily segmenting parts of it and strictly controlling which servers can communicate with which other servers. We do this by going over entirely different wires, using a switched network and deploying solutions to detect attacks, like IDS and ARP cache poisoning and spoofing detection. <br>
Then we decrypt our traffic as soon as it reaches the first frontend and keep communicating in plain text with the app servers. <em>HTTP/2</em> still provides the possibility to use it without SSL; "SSL only" applies only to "browsing the open web". Or to quote:</p>

<blockquote>
  <p>Mark Nottingham - To be clear - we will still define how to use HTTP/2.0 with http:// URIs, because in some use cases, an implementer may make an informed choice to use the protocol without encryption. However, for the common case -- browsing the open Web -- you'll need to use https:// URIs and if you want to use the newest version of HTTP.</p>
</blockquote>

<p>And we may also utilize <code>Literal Header Field Never Indexed</code> to avoid compression of the information we need, keeping the overhead as small as possible once the traffic is already in our network.</p>

<h1 id="problems">Problems?!</h1>

<p>We all know it: different platform, different behavior. <br>
No matter what you use, be it OS X, Debian, Arch Linux, RHEL or Windows, there are some common problems on specific platforms. Let's handle a simple one: multiple compilers/language versions on different systems that are going to confuse</p>]]></description><link>https://blog.wizardtales.com/2015/06/05/common-problems-with-node-on-windows/</link><guid isPermaLink="false">de433241-1646-4a8d-bd72-4dfa4e8115fb</guid><category><![CDATA[node]]></category><category><![CDATA[compiler]]></category><dc:creator><![CDATA[Tobias Gurtzick]]></dc:creator><pubDate>Fri, 05 Jun 2015 16:43:00 GMT</pubDate><media:content url="https://blog.wizardtales.com/content/images/2015/12/nodejs-1440x900.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.wizardtales.com/content/images/2015/12/nodejs-1440x900.png" alt="A tale from Node.js Multi Compiler problems"><p><img src="https://blog.wizardtales.com/content/images/2015/12/nodejs-1440x900-1.png#thumb" alt="A tale from Node.js Multi Compiler problems"></p>

<h1 id="problems">Problems?!</h1>

<p>We all know it: different platform, different behavior. <br>
No matter what you use, be it OS X, Debian, Arch Linux, RHEL or Windows, there are some common problems on specific platforms. Let's handle a simple one: multiple compilers/language versions on different systems that are going to confuse npm.</p>

<p>So let's start solving these really small problems!</p>

<h2 id="windows">Windows</h2>

<p>On Windows you are probably going to run into errors if you want to use native extensions while developing native software and thus have many compiler versions of MSVC and GCC installed.</p>

<p>Most of the time node will be able to detect the compiler you've installed, but for some versions, or if you have multiple versions of different compilers installed, e.g. VS2012, VS2003, ..., you may run into this error:</p>

<pre><code>MSBUILD : error MSB3411
</code></pre>

<p>The error tells you that you don't have a compiler installed and that <code>VCBuild.exe</code> can't be found.</p>

<p>To fix this you can simply append <code>--msvs_version=xxxx</code>.</p>

<p>One example: if you have Microsoft Visual Studio 2012 installed, the following will fix your problem.</p>

<pre><code>npm install --msvs_version=2012
</code></pre>

<h2 id="gnulinux">GNU/Linux</h2>

<p>When using a GNU/Linux OS, you may run, at least with <code>Arch Linux</code>, into a conflict with <em>Python 3</em> when <em>gyp</em> is used for compiling native extensions.</p>

<p>This problem occurs because some <em>GNU/Linux</em> distributions set the newer <em>Python 3</em> as the default <code>python</code> executable. To fix this you do not need to touch your system, and you shouldn't alter your system just to make a single application work...</p>

<p>Instead, you're going to do this:</p>

<pre><code>npm install --python=python2.7
</code></pre>

<p>Or, as a more persistent way so you won't run into this trouble again:</p>

<pre><code>npm config set python python2.7
</code></pre>]]></content:encoded></item><item><title><![CDATA[Assign IPv6 to KVM machine]]></title><description><![CDATA[<p><img src="https://blog.wizardtales.com/content/images/2014/10/Kvmbanner-logo2_1.png#thumb" alt="kvm_ipv6_thumb"></p>

<h2 id="weallneedipv6">We all need IPv6</h2>

<p>It is kind of awesome: the Internet of Things is coming, which is going to be a security disaster, and with it comes IPv6. Along with the other advantages IPv6 provides, we definitely want it, especially because of the mass of available addresses. <br>
Also we have to,</p>]]></description><link>https://blog.wizardtales.com/2014/10/06/assign-ipv6-to-kvm-machine/</link><guid isPermaLink="false">e7fea526-30f6-4d26-85bd-0f9c6ce06ea6</guid><category><![CDATA[network]]></category><dc:creator><![CDATA[Tobias Gurtzick]]></dc:creator><pubDate>Mon, 06 Oct 2014 16:14:00 GMT</pubDate><media:content url="https://blog.wizardtales.com/content/images/2014/10/Kvmbanner-logo2_1.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.wizardtales.com/content/images/2014/10/Kvmbanner-logo2_1.png" alt="Assign IPv6 to KVM machine"><p><img src="https://blog.wizardtales.com/content/images/2014/10/Kvmbanner-logo2_1.png#thumb" alt="Assign IPv6 to KVM machine"></p>

<h2 id="weallneedipv6">We all need IPv6</h2>

<p>It is kind of awesome: the Internet of Things is coming, which is going to be a security disaster, and with it comes IPv6. Along with the other advantages IPv6 provides, we definitely want it, especially because of the mass of available addresses. <br>
Also, we have to: IPv6 will replace IPv4 in 5 or 100 years, or just never...</p>

<h3 id="advantages">Advantages</h3>

<p>One big advantage of IPv6 is that you can allocate one IP address per application, or satisfy whatever other desire you have to use more than one IP address; it's not a problem anymore. <br>
Everyone gets at minimum a /64 block, as <code>SLAAC</code> requires; that means 64 bits, or 2^64 = 18,446,744,073,709,551,616 addresses, or just: enough...</p>

<p>That's great! No more NAT, and no more worries about ports that are already in use!</p>

<h3 id="whyimsohappyaboutipv6">Why I'm so happy about IPv6</h3>

<p>I own many dedicated servers, but only a few of them have multiple IPv4 addresses allocated. That's a problem, for example, when I want to set up VMs, which I do quite often.</p>

<p>It's not a big deal to set up NAT with iptables and share one IP address, but I want my servers to be able to listen on any port without caring whether a port is already in use. Otherwise I have to build awkward workarounds, like an <code>NGINX</code> that proxies to another <code>NGINX</code> on the VM, a modified <code>HAProxy</code>, or allocating a port to that single machine, and so on...</p>

<p>These times are over, thanks to IPv6! <br>
Well... or maybe not: it does not seem that it is going to be widely adopted anytime soon, but hey, let's get IPv6 ready anyway! </p>

<p>So let's assign a dedicated IPv6 address to our KVM machine.</p>

<h3 id="configuration">Configuration</h3>

<p>One way would be to use the router advertisement daemon <code>radvd</code>; the other would be to statically assign the IP addresses.</p>

<p>I will now only set up a static IP; for setting up <code>radvd</code> you may consider reading: <br>
<a href="http://www.mueller.mn/2014/01/ipv6-ueber-dhcp-an-kvm-gaeste-verteilen/">http://www.mueller.mn/2014/01/ipv6-ueber-dhcp-an-kvm-gaeste-verteilen/</a></p>

<p>It's in German, but the configuration parameters should tell you enough about what you have to do. <br>
That post also mentions some problems with using router advertisements together with forwarding. If you want more information about this, you might read this post: <br>
<a href="http://strugglers.net/~andy/blog/2011/09/04/linux-ipv6-router-advertisements-and-forwarding/">http://strugglers.net/~andy/blog/2011/09/04/linux-ipv6-router-advertisements-and-forwarding/</a></p>

<h3 id="thebridge">The Bridge</h3>

<p>So the first thing we have to set up is our bridge. In this case I'm on Debian and edit my <code>/etc/network/interfaces</code>:</p>

<pre><code class="language-bash"># This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo  
iface lo inet loopback

auto eth0  
iface eth0 inet manual

auto br0  
iface br0 inet dhcp  
        bridge_ports eth0
        bridge_fd 9
        bridge_hello 2
        bridge_maxage 12
        bridge_stp off

iface br0 inet6 static  
        address 2001:beef:2:9dd0::
        netmask 64
        post-up /sbin/ip -f inet6 route add 2001:beef:2:9dff:ff:ff:ff:ff dev br0
        post-up /sbin/ip -f inet6 route add default via 2001:beef:2:9dff:ff:ff:ff:ff
        pre-down /sbin/ip -f inet6 route del default via 2001:beef:2:9dff:ff:ff:ff:ff
        pre-down /sbin/ip -f inet6 route del 2001:beef:2:9dff:ff:ff:ff:ff dev br0
</code></pre>

<p>The IPv4 address I get over DHCP on this server; my new IPv6 block I have to set up statically. <br>
The <code>post-up</code> lines define the gateway route, pointing it at the last address of the block.</p>
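
<p>After reloading the interface (e.g. <code>ifdown br0 &amp;&amp; ifup br0</code>, run as root), you can check that the address and routes came up as expected:</p>

<pre><code>ip -6 addr show dev br0
ip -6 route show dev br0
</code></pre>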

<p>You also need to add or edit the following options in <code>/etc/sysctl.conf</code> so they are enabled:</p>

<pre><code>net.ipv6.conf.all.proxy_ndp  = 1
net.ipv6.conf.all.forwarding = 1
</code></pre>
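
<p>These settings are normally only applied at boot; to activate them right away, reload the file as root:</p>

<pre><code>sysctl -p /etc/sysctl.conf
</code></pre>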

<h3 id="thekvmsettings">The KVM Settings</h3>

<p>Next you configure your KVM machine and add an interface of type bridge, like the following:</p>

<pre><code class="language-bash">    &lt;interface type='bridge'&gt;
      &lt;mac address='52:54:00:0a:41:d5'/&gt;
      &lt;source bridge='br0'/&gt;
      &lt;target dev='vnet2'/&gt;
      &lt;model type='virtio'/&gt;
      &lt;alias name='net2'/&gt;
      &lt;address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/&gt;
    &lt;/interface&gt;
</code></pre>

<p>Now you can configure the IPv6 in your VM.</p>

<h3 id="thetrickypart">The tricky part</h3>

<p>It's not really tricky, but you have to know about it, and there are several issues that can happen to you.</p>

<h4 id="neighbourdiscoveryproxy">Neighbour Discovery Proxy</h4>

<p>You may need to tell the system that it should <em>"split your subnet into parts"</em>, i.e. answer neighbour discovery requests on behalf of individual addresses. You accomplish this as follows:</p>

<pre><code>ip -6 neigh add proxy 2001:beef:2:9dd0::12 dev br0
</code></pre>

<p>This should of course be the configured <strong>IPv6 address of your VM</strong>, and you need to do this every time you add a new IP. You may read more about this <a href="http://linux-ip.net/html/adv-proxy-arp.html">here</a>.</p>
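
<p>Since this proxy entry is lost on reboot, you can make it persistent by adding it as another <code>post-up</code> line to the <code>br0</code> stanza in <code>/etc/network/interfaces</code> (using the example VM address from above):</p>

<pre><code>post-up /sbin/ip -6 neigh add proxy 2001:beef:2:9dd0::12 dev br0
</code></pre>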

<p>By the way, if you want to add a new IPv6 address to your host system, you only need to execute this command:</p>

<pre><code>ip -6 addr add 2001:beef:2:9dd0::29 dev br0
</code></pre>

<p>That's all. No <code>iptables</code> forwarding stuff, no NAT, no port forwarding. Just a dedicated IPv6 address, or more addresses if you want, for this single virtual machine.</p>

<p>Alternatively, if you want to give a specific VM a whole subnet (for example a /122), I recommend using <a href="http://priv.nu/projects/ndppd/">ndppd</a>.</p>
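
<p>As a rough sketch, assuming the example prefix from above, an <code>ndppd.conf</code> for this could look like the following; check the ndppd documentation for the exact syntax of your version:</p>

<pre><code>proxy br0 {
    rule 2001:beef:2:9dd0::/122 {
        static
    }
}
</code></pre>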

<p>I also recommend reading <a href="http://man7.org/linux/man-pages/man8/ip-neighbour.8.html">ip-neighbour(8)</a>.</p>

<h1 id="troubleshooting">Troubleshooting</h1>

<h2 id="dadfailed">DAD failed</h2>

<h3 id="unix">Unix</h3>

<p>If you restart your VM, it may happen that you can't use your IPv6 address anymore and it shows the state <code>global tentative dadfailed</code>. This means the kernel believes your address is already in use; to prevent this, disable <code>net.ipv6.conf.all.accept_dad</code> in your <code>/etc/sysctl.conf</code>.</p>

<p>You may get more information about this by reading: <br>
<a href="http://blog.tankywoo.com/linux/2013/09/27/ipv6-dadfailed-problem.html">http://blog.tankywoo.com/linux/2013/09/27/ipv6-dadfailed-problem.html</a></p>

<h3 id="windows">Windows</h3>

<p>Oh, OK, you're using Windows. <br>
Then you need to use <code>netsh</code>.</p>

<pre><code>netsh int ipv6 show addresses
</code></pre>

<p>You should find the address that can't be assigned because its DAD status is <code>duplicate</code>.</p>

<p>OK, so how do you disable duplicate address detection on Windows? <br>
First execute this:</p>

<pre><code>netsh int ipv6 show int
</code></pre>

<p>Now, from the displayed list, get the <code>Idx</code> of your network interface. In our case the <code>Idx</code> is <strong>19</strong>.</p>

<p>To list which options are enabled for your interface, enter this command:</p>

<pre><code>netsh int ipv6 show int 19
</code></pre>

<p>Now execute the following command to disable DAD (by setting the number of DAD transmits to 0):</p>

<pre><code>netsh int ipv6 set int 19 dadtransmit=0
</code></pre>

<p>That's all.</p>

<p>For more information see the Microsoft documentation: <br>
<a href="http://technet.microsoft.com/en-us/library/cc740203%28v=ws.10%29.aspx">http://technet.microsoft.com/en-us/library/cc740203%28v=ws.10%29.aspx</a></p>]]></content:encoded></item></channel></rss>