This morning I allowed my phone to update to the latest build of ArrowOS; what I didn't know was that it was an automatic monthly build. This ended up bricking my phone.
My first move was to try reverting to an older build. Checking the ArrowOS download website for previous builds turned up nothing; it looks like links to older builds were removed from the website. I checked my machine for an older build and found one, but a mismatch in the security patch level prevented the recovery from sideloading it.
Looking at the Telegram group, I didn't see much activity from the maintainer, so I decided to roll up my sleeves and build the ROM from scratch. This time I set up a machine on clouding.io, a local (Spanish) cloud hosting service that is fairly priced. The best part is that they offer fast NVMe disks, a must-have for building ROMs, and their snapshot pricing is much lower than others, even 5-10x cheaper than the big players like Google, DigitalOcean, etc.
I ended up building the ROM after removing the offending commit that caused the issue. Removing it seems to have had no side effects, as I am running the ROM now without any problems.
One of the reasons for writing this post is to document my steps so I don't need to spend as much time next time.
```bash
# Install bare essentials
sudo apt-get update
sudo apt install -y git jq
git config --global user.email "surajshirvankar@gmail.com"
git config --global user.name "Suraj Shirvankar"

# The scripts repo installs all of the AOSP dependencies
cd ~/
git clone https://github.com/akhilnarang/scripts
cd scripts
./setup/android_build_env.sh

# Create a bin directory to hold the repo tool
mkdir -p ~/bin

# Fetch repo from Google
curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
chmod a+x ~/bin/repo
```
We need to add `repo` to the search path:
if [ -d "$HOME/bin" ] ; then PATH="$HOME/bin:$PATH" fi
Time to fetch all the code
```bash
mkdir arrow
cd arrow

# --depth reduces disk and network requirements by fetching only the latest commit
repo init --depth=1 -u https://github.com/ArrowOS/android_manifest.git -b arrow-13.1

# Download all of the git repos. Takes some time
repo sync -c -j$(nproc --all) --force-sync --no-clone-bundle --no-tags

source build/envsetup.sh

# Fetch the relevant device repos. The Python script might fail here
lunch arrow_apollo-userdebug
```
Since the script failed for me, I had to manually comment out the offending lines, 202 to 207:
```python
if exists_in_tree(mlm, repo_target) != None:
    existing_m_project = exists_in_tree(mlm, repo_target)
elif exists_in_tree(arrowm, repo_target) != None:
    existing_m_project = exists_in_tree(arrowm, repo_target)
elif exists_in_tree(halm, repo_target) != None:
    existing_m_project = exists_in_tree(halm, repo_target)
```
Since the repo configuration was set up for the maintainer, it pointed to repositories that were private and only visible to him. Device-specific repos are listed in `.repo/local_manifests/roomservice.xml`; we can edit this file to point to my repos.
<?xml version="1.0" encoding="UTF-8"?> <manifest> <project path="device/xiaomi/apollo" remote="ArrowOS-Devices" name="android_device_xiaomi_apollo" revision="arrow-13.1" /> <project path="device/xiaomi/sm8250-common" remote="github" name="ArrowOS-Devices/android_device_xiaomi_sm8250-common" revision="arrow-13.1-apollo" /> <project path="vendor/xiaomi" remote="github" name="ArrowOS-Devices/android_vendor_xiaomi_apollo" revision="arrow-13.1" /> <project path="kernel/xiaomi/sm8250" remote="github" name="PixelExperience-Devices/kernel_xiaomi_sm8250" revision="thirteen" /> <project path="hardware/xiaomi" remote="github" name="Dobsgw/android_hardware_xiaomi" revision="arrow-13.1" /> <project path="packages/apps/GCamGOPreBuilt" remote="github" name="ArrowOS-Devices/android_packages_apps_GCamGOPrebuilt" revision="arrow-13.1" /> <project path="vendor/xiaomi-firmware/apollo" remote="gitlab" name="h0lyalg0rithm/vendor_xiaomi-firmware_apollo" revision="arrow-13.1" /> </manifest>
The build configuration lives in `vendor/arrow/config/version.mk`. This is where you can set build options such as including Google apps, the updater, etc.
```bash
# Time to build the ROM
repo sync -c -j48 --force-sync --no-clone-bundle --no-tags
export ARROW_GAPPS=true
export ARROW_OFFICIAL=true

# You should see the build file name (ARROW_VERSION) contain OFFICIAL and GAPPS
lunch arrow_apollo-userdebug
mka bacon -j60 # This machine is pretty beefy
```
Using this feature, you can track all of the malloc/free calls made by the application and decide where the memory needs to be allocated. Here is how you can hijack all the malloc calls made by the application.
```c
// _GNU_SOURCE enables the RTLD_NEXT handle used in dlsym
#define _GNU_SOURCE
// Provides the prototype for dlsym
#include <dlfcn.h>
#include <stddef.h>

// Function pointer that will hold the location of the real malloc
static void* (*real_malloc)(size_t size);

// __attribute__((constructor)) tells the loader to run this function once the library is loaded
static void __attribute__((constructor)) lib_init(void) {
    // dlsym with RTLD_NEXT looks for the next occurrence of the symbol malloc in the lookup order
    real_malloc = (void* (*)(size_t))dlsym(RTLD_NEXT, "malloc");
}

// We hijack all the malloc calls from the application
void* malloc(size_t size) {
    return real_malloc(size);
}
```
We then need to build the code as a shared library:

```bash
gcc -shared -fPIC -o libmalloc.so main.c
```

Then, to use the library, all you need to do is run:

```bash
LD_PRELOAD=./libmalloc.so ./app
```

where `app` is the application you want to hijack.
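Building on the wrapper above, here is a minimal sketch of what you can do once the calls are hijacked: a variant that counts allocations and reports the total at exit. This is my own illustration, not part of the original post; the lazy initialization guards against malloc being called before the constructor runs.

```c
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stddef.h>
#include <stdio.h>

static void *(*real_malloc)(size_t size);
static unsigned long call_count;

static void __attribute__((constructor)) count_init(void) {
    real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
}

void *malloc(size_t size) {
    // Some libraries allocate before our constructor runs, so resolve lazily too
    if (real_malloc == NULL)
        real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
    call_count++; // not thread-safe; good enough for a sketch
    return real_malloc(size);
}

static void __attribute__((destructor)) count_report(void) {
    fprintf(stderr, "malloc was called %lu times\n", call_count);
}
```

Build and preload it the same way, e.g. `gcc -shared -fPIC -o libcount.so count.c` and then `LD_PRELOAD=./libcount.so ls`.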
To get around this, I decided to fork the GitHub action. It took me a while to understand how actions are released and deployed to the GitHub Marketplace. Luckily, the repository already contained automated workflows to build and release new versions of the action.
The GitHub action is configured to download the github-pages Ruby gem, which contains restrictions on things like the themes and plugins that can be used with the GitHub Pages platform. Since these restrictions are just validations that run before the Jekyll static HTML pages are built, all I had to do was get rid of them.
I have updated the action so that it can be used just like the original GitHub action. All you have to do is replace `actions/jekyll-build-pages@v1` in your workflow with `h0lyalg0rithm/jekyll-build-pages@v1`.
As you can see, this website now supports dark mode :rocket:.
The goal of the project is to convert SPIR-V graphics code to RISC-V assembly. SPIR-V is a widely used IR, and it is supported by the most common shading languages, like HLSL and GLSL. The Khronos Group wants to make it a universal IR that supports more than just graphics-related operations.
Since there was already some ongoing work by the LLVM team on converting SPIR-V to LLVM IR, we decided to take a look at it; however, it didn't support GLSL-compiled SPIR-V. We then looked at AMD's compiler, which did support it.
My first attempt was to write a quick transformation pass and add it at the end of the first stage of the compiler. Here is how I implemented the pass.
```cpp
class SpirvSamplerLowering : public llvm::PassInfoMixin<SpirvSamplerLowering> {
public:
  llvm::PreservedAnalyses run(llvm::Function &function,
                              llvm::FunctionAnalysisManager &analysisManager) {
    LLVM_DEBUG(dbgs() << "Run the pass Spirv-Sampler-Lower\n");
    m_builder = std::make_unique<IRBuilder<>>(function.getContext());
    for (auto &bb : function) {
      for (auto &inst : bb) {
        if (auto *ci = llvm::dyn_cast<llvm::CallInst>(&inst)) {
          if (ci->getCalledFunction()->getName().starts_with("lgc.create.image.sample")) {
            IRBuilder<> Builder(ci);
            std::vector<Value *> Args(ci->arg_begin(), ci->arg_end());
            llvm::FunctionCallee newCallee = function.getParent()->getOrInsertFunction(
                "llvm.riscv.image.sample", ci->getFunctionType());
            llvm::CallInst *newCallInst = Builder.CreateCall(newCallee, Args);
            ci->replaceAllUsesWith(newCallInst);
            m_instsToErase.push_back(ci);
          }
          LLVM_DEBUG(dbgs() << ci->getCalledFunction()->getName() << "\n");
        }
      }
    }
    const bool changed = !m_instsToErase.empty();
    for (Instruction *const inst : m_instsToErase) {
      inst->eraseFromParent();
    }
    m_instsToErase.clear();
    return changed ? PreservedAnalyses::none() : PreservedAnalyses::all();
  }

  static llvm::StringRef name() { return "Lower LLPC sampler to LLVM sampler IR"; }

private:
  std::unique_ptr<llvm::IRBuilder<>> m_builder;
  llvm::SmallVector<llvm::Instruction *, 8> m_instsToErase;
};
```
The run function executes when the pass is run. We iterate over the function and its basic blocks to find the instructions that call the lgc intrinsic we are trying to replace. Once we have a reference to the intrinsic, we copy the arguments passed to it and create a new instruction with those arguments. We then store a reference to the old instruction in a vector so it can be deleted later.
Writing this pass was pretty simple and it mostly felt like a fancy regex on the IR.
To start, I looked for a Terraform provider on GitHub and came across Telmate/terraform-provider-proxmox. Looking at the documentation, I didn't find a way to upload an ISO to Proxmox, so I decided to work on this feature for the provider.
This was the first time I was writing an extension for Proxmox, so I had to dig a lot deeper into the code and documentation.
The first thing I had to do was define what the user-facing API would look like. I decided on the following:
resource "proxmox_storage_iso" "unique_resource_name" { storage = "local" // Where should this be stored in proxmox filename = "image.iso" // Name of the image pve_node = "pve" // Target node where the storage points too url = "http://example.com/image.iso" // URL to the image iso }
Next, it was time to build the resource-to-Terraform-state integration.
```go
func resourceStorageIso() *schema.Resource {
	return &schema.Resource{
		Create: resourceStorageIsoCreate,
		Read:   resourceStorageIsoRead,
		Delete: resourceStorageIsoDelete,
		Schema: map[string]*schema.Schema{
			"filename": {
				Type:     schema.TypeString,
				Required: true,
			},
		},
	}
}
```
The Proxmox plugin has callbacks for the different actions a resource has to support. These callbacks are pretty straightforward, as their meaning maps to their verb. The schema, on the other hand, defines the different fields that can be provided to the resource.
Terraform resources also have a special ResourceData field called `Id`, which is used to uniquely identify a resource.
In the create callback, we call the proxmox-go-api library to create an ISO resource. One of the things I had to do was fetch the ISO, store it in a temporary file, and then upload it using the API.
Once the ISO is created in Proxmox, we call the API again to validate that the image was saved correctly. I then set the `Id` field to the `volId` field from the API response, which is Proxmox's unique identifier.
The delete callback was a bit tricky, since proxmox-go-api didn't have a delete API, so I called it using the generic client delete method present in the library. And since Proxmox doesn't let you update a given ISO, I didn't have to implement the update callback.
Here is the link to the final PR for this feature
Update (07-08-2023): One of the things I forgot to mention was setting up the development environment for the provider. The setup itself took a lot of my time.
I created a `.terraformrc` file in my `$HOME` directory:
```hcl
provider_installation {
  dev_overrides {
    "telmate/proxmox" = "<path to terraform-provider-proxmox>"
  }
}
```
I created a `main.tf` to test out the feature. Running `terraform init` would result in an error, as it cannot find the provider; just run `terraform plan` to execute the action.
I decided to build the PixelExperience ROM for my device. It took a while, as my HDD was pretty slow, but it completed in 2 hours. This is how I got the ROM installed on my device.
```bash
adb reboot bootloader
```

```bash
fastboot --set-active=a
fastboot flash boot recovery.img
fastboot --set-active=b
fastboot flash boot recovery.img
```

Run

```bash
fastboot reboot recovery
```

to boot into recovery, then copy the partitions over to the other slot:

```bash
adb sideload copy-partitions-20210323_1992.zip
```

Wipe the super partition:

```bash
fastboot wipe-super super_empty.img
```

I then set the active slot to `b` so that the inactive partition is `a`, which is where the recovery would install the operating system. Finally, sideload the ROM:

```bash
adb sideload pixelexperience_jasmine_sprout.zip
```
However, since I had some time, I decided to give it a try. Here is how I went about writing a barebones kernel module.
```c
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

MODULE_DESCRIPTION("Simple module");
MODULE_AUTHOR("Suraj Shirvankar");
MODULE_LICENSE("GPL");

// The functions cannot be named init_module/exit_module themselves,
// as those names are claimed by the module_init/module_exit macros
static int simple_init(void)
{
	printk("Loaded kernel module\n");
	return 0;
}

static void simple_exit(void)
{
}

module_init(simple_init);
module_exit(simple_exit);
```
Then I had to create a Makefile to build the final kernel module:
```make
obj-m += test.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
```
I had to make sure that the name of the object file matched the name of the file containing the kernel code. All this kernel module does is print a log message saying it was loaded.
Next, I tried to create a kernel module that creates a new proc file reporting the number of running processes. This would allow us to run the following in the terminal:
```bash
cat /proc/proc_count
40
```
Here is how I went about writing it.
```c
#include <linux/module.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include <linux/sched.h>

MODULE_DESCRIPTION("Process count module");
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Suraj Shirvankar");

struct proc_dir_entry *proc_entry;

static int proc_count(struct seq_file *seq, void *offset)
{
	struct task_struct *task;
	int process_count = 0;

	printk("Counting processes\n");
	// Walk the kernel's task list and count every process
	for_each_process(task) {
		process_count += 1;
	}
	seq_printf(seq, "%d\n", process_count);
	return 0;
}

static int init_mod(void)
{
	printk("Loaded module to count processes\n");
	proc_entry = proc_create_single("proc_count", 0, NULL, proc_count);
	return 0;
}

static void exit_mod(void)
{
	proc_remove(proc_entry);
}

module_init(init_mod);
module_exit(exit_mod);
```
Now if I run the following, I get this result:
```bash
cat /proc/proc_count
625
```
We use dlsym, which looks up the next symbol for malloc/realloc/calloc and returns a function pointer that we can invoke later.
Here is the error I encountered while running gdb.
```
_dlerror_run (operate=operate@entry=0x7ffff7bd20b0 <dlsym_doit>, args=args@entry=0x7fffffffbfe0) at dlerror.c:154
154         if (result->errstring != NULL)
Missing separate debuginfos, use: dnf debuginfo-install libunwind-1.2.1-3.fc27.x86_64 libxml2-2.9.8-4.fc27.x86_64 numactl-libs-2.0.11-5.fc27.x86_64 xz-libs-5.2.3-4.fc27.x86_64 zlib-1.2.11-4.fc27.x86_64
(gdb) backtrace
#0  _dlerror_run (operate=operate@entry=0x7ffff7bd20b0 <dlsym_doit>, args=args@entry=0x7fffffffbfe0) at dlerror.c:154
#1  0x00007ffff7bd2141 in __dlsym (handle=handle@entry=0xffffffffffffffff, name=name@entry=0x7ffff7878bf0 "malloc") at dlsym.c:70
#2  0x00007ffff77bdb6c in xxxxx::uninitialized_malloc (size=168) at ../../xxxxx.cxx:44
#3  0x00007ffff77bf14d in malloc (size=168) at ../../xxxxx.cxx:225
```
When we look at the source of glibc, we see the following block:
```c
int
internal_function
_dlerror_run (void (*operate) (void *), void *args)
{
  struct dl_action_result *result;

  /* If we have not yet initialized the buffer do it now.  */
  __libc_once (once, init);

  /* Get error string and number.  */
  if (static_buf != NULL)
    result = static_buf;
  else
    {
      /* We don't use the static buffer and so we have a key.  Use it
         to get the thread-specific buffer.  */
      result = __libc_getspecific (key);
      if (result == NULL)
        {
          result = (struct dl_action_result *) calloc (1, sizeof (*result));
          if (result == NULL)
            /* We are out of memory.  Since this is no really critical
               situation we carry on by using the global variable.
               This might lead to conflicts between the threads but
               they soon all will have memory problems.  */
            result = &last_result;
          else
            /* Set the tsd.  */
            __libc_setspecific (key, result);
        }
    }

  if (result->errstring != NULL)
    {
      /* Free the error string from the last failed command.  This can
         happen if `dlerror' was not run after an error was found.  */
      if (result->malloced)
        free ((char *) result->errstring);
      result->errstring = NULL;
    }

  result->errcode = _dl_catch_error (&result->objname, &result->errstring,
                                     &result->malloced, operate, args);

  /* If no error we mark that no error string is available.  */
  result->returned = result->errstring == NULL;

  return result->errstring != NULL;
}
```
Looking at the line `result = __libc_getspecific (key);`, we see that result is set to a thread-specific value looked up by key.
While debugging with gdb, the value of key was 0; the value returned from __libc_getspecific was not NULL, but it was not a valid dl_action_result, resulting in the segfault at the line `if (result->errstring != NULL)`.
With the help of a colleague at BSC, I was able to figure out why the value was invalid. In our library, we stored a thread-specific value for each pthread using the following API:
```c
int pthread_setspecific(pthread_key_t key, const void *value);
```
The main issue with how we used this API was that I never initialized the pthread_key_t with pthread_key_create, so the key was left with a value of 0, and our value was stored under that key.
Looking back at the libc internals, we can now see that `__libc_getspecific(key)` returned the value we had stored earlier under key 0, which is not a dl_action_result, resulting in the segfault.
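The fix was to create the key properly before storing anything against it. Here is a minimal sketch of the correct pattern (the names make_key and store_value are illustrative, not from our library):

```c
#include <pthread.h>

static pthread_key_t key;             /* must be created, not just zero-initialized */
static pthread_once_t key_once = PTHREAD_ONCE_INIT;

static void make_key(void) {
    /* Allocates a fresh key; without this, key stays 0 and collides with
       whichever key libc created first (dlerror's, in our case). */
    pthread_key_create(&key, NULL);
}

static void store_value(void *value) {
    pthread_once(&key_once, make_key); /* create the key exactly once */
    pthread_setspecific(key, value);   /* now bound to our own key, not key 0 */
}
```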
After trying and failing to share my large PR code to their email address using the process described on their contributions page, I assumed the issue was with my setup, as it was the first time I was sending code patches via email.
However, today I was contributing a smaller change to the project, and I generated the patch using the following command:
```bash
git format-patch -s -o "outputfolder" --add-header "X-Unsent: 1" --suffix .eml --to ffmpeg-devel@ffmpeg.org -1 1a2b3c4d
```
It failed once again, as the build system could not apply the patch. I started to dig further into this and realized that git comes with a built-in feature to send email patches; however, it is shipped as a separate package by the git project. Since my machine was based on Ubuntu, I was able to install it with the following:
```bash
sudo apt-get install git-email sendmail
```
Once that was installed, I had to set the following in my `~/.gitconfig`:
```ini
[sendemail]
	smtpserver = smtp.gmail.com
	smtpuser = surajshirvankar@gmail.com
	# Port 587 uses STARTTLS, hence tls rather than ssl
	smtpencryption = tls
	smtpserverport = 587
```
Then I generated and sent the email using the following, which sends a patch email for the last 3 commits:

```bash
git send-email --to="ffmpeg-devel@ffmpeg.org" -3
```
Since I was using my Gmail account to send the patch, I had to generate a Google App Password, which is used as the password to log in on behalf of my account.
The way this works is that we compare the value with 0, and if it is less than 0, we subtract it from 0, which converts it back to a positive number.
```asm
.global _start
_start:
    mov r0, #10
    bl abs
    b _start

.global abs
abs:
    cmp r0, #0
    neglt r0, r0
    bx lr
```
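For comparison, here is the same idea in C; this is my own illustration, and my_abs is a hypothetical name:

```c
// Mirrors the assembly above: compare against 0 and, only when the
// value is negative, subtract it from 0 ("neglt r0, r0").
int my_abs(int value) {
    if (value < 0)
        value = 0 - value;
    return value;
}
```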
```asm
.global _start
_start:
    mov r0, #1
    mov r1, #1
    bl add
    b _start

add:
    add r0, r0, r1
    bx lr
```