What’s your most controversial programming opinion?

This is definitely subjective, but I’d like to try to avoid it becoming argumentative. I think it could be an interesting question if people treat it appropriately.

The idea for this question came from the comment thread from my answer to the “What are five things you hate about your favorite language?” question. I contended that classes in C# should be sealed by default – I won’t put my reasoning in the question, but I might write a fuller explanation as an answer to this question. I was surprised at the heat of the discussion in the comments (25 comments currently).

So, what contentious opinions do you hold? I’d rather avoid the kind of thing which ends up being pretty religious with relatively little basis (e.g. brace placing) but examples might include things like “unit testing isn’t actually terribly helpful” or “public fields are okay really”. The important thing (to me, anyway) is that you’ve got reasons behind your opinions.

Please present your opinion and reasoning – I would encourage people to vote for opinions which are well-argued and interesting, whether or not you happen to agree with them.

Total Answers: 407

Popular Answers:

  1. The only “best practice” you should be using all the time is “Use Your Brain”.

    Too many people jumping on too many bandwagons and trying to force methods, patterns, frameworks etc onto things that don’t warrant them. Just because something is new, or because someone respected has an opinion, doesn’t mean it fits all 🙂

    EDIT: Just to clarify – I don’t think people should ignore best practices, valued opinions etc. Just that people shouldn’t just blindly jump on something without thinking about WHY this “thing” is so great, IS it applicable to what I’m doing, and WHAT benefits/drawbacks does it bring?

  2. “Googling it” is okay!

    Yes, I know it offends some people out there that their years of intense memorization and/or glorious stacks of programming books are starting to fall by the wayside to a resource that anyone can access within seconds, but you shouldn’t hold that against people that use it.

    Too often I hear people criticized for googling the answers to their problems, and it really makes no sense. First of all, it must be conceded that everyone needs materials to reference. You don’t know everything and you will need to look things up. Conceding that, does it really matter where you got the information? Does it matter if you looked it up in a book, looked it up on Google, or heard it from a talking frog that you hallucinated? No. A right answer is a right answer.

    What is important is that you understand the material, use it as the means to an end of a successful programming solution, and the client/your employer is happy with the results.

    (although if you are getting answers from hallucinatory talking frogs, you should probably get some help all the same)

  3. Most comments in code are in fact a pernicious form of code duplication.

    We spend most of our time maintaining code written by others (or ourselves) and poor, incorrect, outdated, misleading comments must be near the top of the list of most annoying artifacts in code.

    I think eventually many people just blank them out, especially those flowerbox monstrosities.

    Much better to concentrate on making the code readable, refactoring as necessary, and minimising idioms and quirkiness.

    On the other hand, many courses teach that comments are very nearly more important than the code itself, leading to the “this next line adds one to invoiceTotal” style of commenting.
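    A tiny sketch of the contrast (the function names are invented for illustration): the first version duplicates the code in prose, so the comment rots the moment the line changes; the second makes the comment unnecessary.

```python
# The duplicating style: the comment restates the code and silently
# goes stale the first time the line is edited.
def apply_fee_commented(invoice_total):
    # this next line adds one to invoiceTotal
    invoice_total = invoice_total + 1
    return invoice_total

# The readable style: a well-named function says the same thing and
# cannot drift out of sync with itself.
def add_processing_fee(invoice_total, fee=1):
    return invoice_total + fee
```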

  4. XML is highly overrated

    I think too many jump onto the XML bandwagon before using their brains… XML for web stuff is great, as it’s designed for it. Otherwise I think some problem definition and design thought should precede any decision to use it.

    My 5 cents

  5. Not all programmers are created equal

    Quite often managers think that DeveloperA == DeveloperB simply because they have same level of experience and so on. In actual fact, the performance of one developer can be 10x or even 100x that of another.

    It’s politically risky to talk about it, but sometimes I feel like pointing out that, even though several team members may appear to be of equal skill, it’s not always the case. I have even seen cases where lead developers were ‘beyond hope’ and junior devs did all the actual work – I made sure they got the credit, though. 🙂

  6. I fail to understand why people think that Java is absolutely the best “first” programming language to be taught in universities.

    For one, I believe that a first programming language should highlight the need to learn control flow and variables, not objects and syntax.

    For another, I believe that people who have not had experience in debugging memory leaks in C / C++ cannot fully appreciate what Java brings to the table.

    Also the natural progression should be from “how can I do this” to “how can I find the library which does that” and not the other way round.

  7. If you only know one language, no matter how well you know it, you’re not a great programmer.

    There seems to be an attitude that says once you’re really good at C# or Java or whatever other language you started out learning, then that’s all you need. I don’t believe it – every language I have ever learned has taught me something new about programming that I have been able to bring back into my work with all the others. I think that anyone who restricts themselves to one language will never be as good as they could be.

    It also indicates to me a certain lack of inquisitiveness and willingness to experiment that doesn’t necessarily tally with the qualities I would expect to find in a really good programmer.

  8. Performance does matter.

  9. Print statements are a valid way to debug code

    I believe it is perfectly fine to debug your code by littering it with System.out.println (or whatever print statement works for your language). Often, this can be quicker than stepping through with a debugger, and you can compare printed outputs against other runs of the app.

    Just make sure to remove the print statements when you go to production (or better, turn them into logging statements)
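    A minimal sketch of the “better, turn them into logging statements” suggestion (the function and its values are invented): the same quick print-style diagnostics, but routed through the standard logging module so they can be silenced in production instead of deleted.

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger(__name__)

def discounted_cents(price_cents, percent):
    # Print-style debugging, but switchable: drop the level to WARNING
    # in production and these lines vanish without touching the code.
    log.debug("discounted_cents called with price_cents=%s percent=%s",
              price_cents, percent)
    result = price_cents - price_cents * percent // 100
    log.debug("discounted_cents returning %s", result)
    return result
```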

  10. Your job is to put yourself out of work.

    When you’re writing software for your employer, any software that you create is to be written in such a way that it can be picked up by any developer and understood with a minimal amount of effort. It is well designed, clearly and consistently written, formatted cleanly, documented where it needs to be, builds daily as expected, checked into the repository, and appropriately versioned.

    If you get hit by a bus, laid off, fired, or walk off the job, your employer should be able to replace you on a moment’s notice, and the next guy could step into your role, pick up your code and be up and running within a week tops. If he or she can’t do that, then you’ve failed miserably.

    Interestingly, I’ve found that having that goal has made me more valuable to my employers. The more I strive to be disposable, the more valuable I become to them.

  11. 1) The Business Apps farce:

    I think that the whole “Enterprise” frameworks thing is smoke and mirrors. J2EE, .NET, the majority of the Apache frameworks and most abstractions to manage such things create far more complexity than they solve.

    Take any regular Java or .NET ORM, or any supposedly modern MVC framework for either, which does “magic” to solve tedious, simple tasks. You end up writing huge amounts of ugly XML boilerplate that is difficult to validate and slow to write. You have massive APIs, half of which exist just to integrate the work of the other half, interfaces that are impossible to reuse, and abstract classes that are needed only to overcome the inflexibility of Java and C#. We simply don’t need most of that.

    How about all the different application servers with their own darned descriptor syntax, the overly complex database and groupware products?

    The point of this is not that complexity==bad, it’s that unnecessary complexity==bad. I’ve worked in massive enterprise installations where some of it was necessary, but even in most cases a few home-grown scripts and a simple web frontend is all that’s needed to solve most use cases.

    I’d try to replace all of these enterprisey apps with simple web frameworks, open source DBs, and trivial programming constructs.

    2) The n-years-of-experience-required:

    Unless you need a consultant or a technician to handle a specific issue related to an application, API or framework, then you don’t really need someone with 5 years of experience in that application. What you need is a developer/admin who can read documentation, who has domain knowledge in whatever it is you’re doing, and who can learn quickly. If you need to develop in some kind of language, a decent developer will pick it up in less than 2 months. If you need an administrator for X web server, in two days he should have read the man pages and newsgroups and be up to speed. Anything less and that person is not worth what he is paid.

    3) The common “computer science” degree curriculum:

    The majority of computer science and software engineering degrees are bull. If your first programming language is Java or C#, then you’re doing something wrong. If you don’t get several courses full of algebra and math, it’s wrong. If you don’t delve into functional programming, it’s incomplete. If you can’t apply loop invariants to a trivial for loop, you’re not worth your salt as a supposed computer scientist. If you come out with experience in x and y languages and object orientation, it’s full of s***. A real computer scientist sees a language in terms of the concepts and syntaxes it uses, and sees programming methodologies as one among many, and has such a good understanding of the underlying philosophies of both that picking new languages, design methods, or specification languages should be trivial.

  12. Getters and Setters are Highly Overused

    I’ve seen millions of people claiming that public fields are evil, so they make them private and provide getters and setters for all of them. I believe this is almost identical to making the fields public, maybe a bit different if you’re using threads (but generally is not the case) or if your accessors have business/presentation logic (something ‘strange’ at least).

    I’m not in favor of public fields, but against making a getter/setter (or Property) for every one of them, and then claiming that doing that is encapsulation or information hiding… ha!


    This answer has raised some controversy in its comments, so I’ll try to clarify it a bit (I’ll leave the original untouched since that is what many people upvoted).

    First of all: anyone who uses public fields deserves jail time

    Now, creating private fields and then using the IDE to automatically generate getters and setters for every one of them is nearly as bad as using public fields.

    Many people think:

    private fields + public accessors == encapsulation

    I say (automatic or not) generation of getter/setter pair for your fields effectively goes against the so called encapsulation you are trying to achieve.

    Lastly, let me quote Uncle Bob in this topic (taken from chapter 6 of “Clean Code”):

    There is a reason that we keep our variables private. We don’t want anyone else to depend on them. We want the freedom to change their type or implementation on a whim or an impulse. Why, then, do so many programmers automatically add getters and setters to their objects, exposing their private fields as if they were public?
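    A hedged sketch of the distinction (class names invented): the first class wraps its field in a getter/setter pair yet hides nothing, while the second exposes operations instead of its representation.

```python
class BlindAccount:
    """Getter/setter pairs that mirror the field are barely better than
    a public field; callers still manipulate the balance directly."""
    def __init__(self):
        self._balance = 0

    def get_balance(self):
        return self._balance

    def set_balance(self, value):
        self._balance = value

class Account:
    """Actual encapsulation: the representation stays private and the
    class exposes behavior, so invariants can be enforced in one place."""
    def __init__(self):
        self._balance = 0

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    @property
    def balance(self):  # read-only view; no setter
        return self._balance
```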

  13. UML diagrams are highly overrated

    Of course there are useful diagrams e.g. class diagram for the Composite Pattern, but many UML diagrams have absolutely no value.

  14. Opinion: SQL is code. Treat it as such

    That is, just like your C#, Java, or other favorite object/procedure language, develop a formatting style that is readable and maintainable.

    I hate when I see sloppy free-formatted SQL code. If you scream when you see both styles of curly braces on a page, why don’t you scream when you see free-formatted SQL, or SQL that obscures or obfuscates the JOIN condition?
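    As a sketch of the point (the table and column names are invented), here is the same query twice, embedded as strings: once free-formatted with the JOIN condition buried, and once laid out like code.

```python
# Sloppy: one long line that buries the JOIN condition.
sloppy = ("select o.id, c.name from orders o join customers c "
          "on c.id = o.customer_id where o.total > 100 order by o.id")

# Formatted like code: keywords aligned, one clause per line, the
# JOIN condition plainly visible.
readable = """
SELECT   o.id,
         c.name
FROM     orders o
JOIN     customers c ON c.id = o.customer_id
WHERE    o.total > 100
ORDER BY o.id
"""
```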

  15. Readability is the most important aspect of your code.

    Even more so than correctness. If it’s readable, it’s easy to fix. It’s also easy to optimize, easy to change, easy to understand. And hopefully other developers can learn something from it too.

  16. The use of hungarian notation should be punished with death.

    That should be controversial enough 😉

  17. Design patterns are hurting good design more than they’re helping it.
  18. PHP sucks 😉

    The proof is in the pudding.

  19. Unit Testing won’t help you write good code

    The only reason to have Unit tests is to make sure that code that already works doesn’t break. Writing tests first, or writing code to the tests is ridiculous. If you write to the tests before the code, you won’t even know what the edge cases are. You could have code that passes the tests but still fails in unforeseen circumstances.

    And furthermore, good developers will keep coupling low, which will make the addition of new code unlikely to cause problems with existing stuff.

    In fact, I’ll generalize that even further,

    Most “Best Practices” in Software Engineering are there to keep bad programmers from doing too much damage.

    They’re there to hand-hold bad developers and keep them from making dumbass mistakes. Of course, since most developers are bad, this is a good thing, but good developers should get a pass.

  20. Write small methods. It seems that programmers love to write loooong methods where they do multiple different things.

    I think that a method should be created wherever you can name one.
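    A minimal illustration of the rule (data and names invented): a long method doing several nameable things, then the same logic split into methods small enough to name.

```python
# One long method doing several things at once.
def report_long(orders):
    total = 0
    for o in orders:
        total += o["qty"] * o["price"]
    tax = total * 20 // 100
    return "total=%d tax=%d" % (total, tax)

# The same logic, extracted wherever a name suggested itself.
def order_total(orders):
    return sum(o["qty"] * o["price"] for o in orders)

def sales_tax(amount, percent=20):
    return amount * percent // 100

def report(orders):
    total = order_total(orders)
    return "total=%d tax=%d" % (total, sales_tax(total))
```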

  21. It’s ok to write garbage code once in a while

    Sometimes a quick and dirty piece of garbage code is all that is needed to fulfill a particular task. Patterns, ORMs, SRP, whatever… Throw up a console or web app, write some inline SQL (it feels good), and blast out the requirement.

  22. Code == Design

    I’m no fan of sophisticated UML diagrams and endless code documentation. In a high level language, your code should be readable and understandable as is. Complex documentation and diagrams aren’t really any more user friendly.

    Here’s an article on the topic of Code as Design.

  23. Software development is just a job

    Don’t get me wrong, I enjoy software development a lot. I’ve written a blog for the last few years on the subject. I’ve spent enough time on here to have >5000 reputation points. And I work in a start-up doing typically 60 hour weeks for much less money than I could get as a contractor because the team is fantastic and the work is interesting.

    But in the grand scheme of things, it is just a job.

    It ranks in importance below many things such as family, my girlfriend, friends, happiness etc., and below other things I’d rather be doing if I had an unlimited supply of cash such as riding motorbikes, sailing yachts, or snowboarding.

    I think sometimes a lot of developers forget that developing is just something that allows us to have the more important things in life (and to have them by doing something we enjoy) rather than being the end goal in itself.

  24. I also think there’s nothing wrong with having binaries in source control… if there is a good reason for it. If I have an assembly I don’t have the source for, and it might not necessarily be in the same place on each dev’s machine, then I will usually stick it in a “binaries” directory and reference it in a project using a relative path.

    Quite a lot of people seem to think I should be burned at the stake for even mentioning “source control” and “binary” in the same sentence. I even know of places that have strict rules saying you can’t add them.

  25. Every developer should be familiar with the basic architecture of modern computers. This also applies to developers who target a virtual machine (maybe even more so, because they have been told time and time again that they don’t need to worry themselves with memory management etc.)

  26. Software Architects/Designers are Overrated

    As a developer, I hate the idea of Software Architects. They are basically people that no longer code full time, read magazines and articles, and then tell you how to design software. Only people that actually write software full time for a living should be doing that. I don’t care if you were the world’s best coder 5 years ago before you became an Architect, your opinion is useless to me.

    How’s that for controversial?

    Edit (to clarify): I think most Software Architects make great Business Analysts (talking with customers, writing requirements, tests, etc), I simply think they have no place in designing software, high level or otherwise.

  27. There is no “one size fits all” approach to development

    I’m surprised that this is a controversial opinion, because it seems to me like common sense. However, there are many entries on popular blogs promoting the “one size fits all” approach to development so I think I may actually be in the minority.

    Things I’ve seen being touted as the correct approach for any project – before any information is known about it – are things like the use of Test Driven Development (TDD), Domain Driven Design (DDD), Object-Relational Mapping (ORM), Agile (capital A), Object Orientation (OO), etc. etc. encompassing everything from methodologies to architectures to components. All with nice marketable acronyms, of course.

    People even seem to go as far as putting badges on their blogs such as “I’m Test Driven” or similar, as if their strict adherence to a single approach, whatever the details of the project, is actually a good thing.

    It isn’t.

    Choosing the correct methodologies and architectures and components, etc., is something that should be done on a per-project basis, and depends not only on the type of project you’re working on and its unique requirements, but also the size and ability of the team you’re working with.

  28. Most professional programmers suck

    I have come across too many people doing this job for their living who were plain crappy at what they were doing. Crappy code, bad communication skills, no interest in new technology whatsoever. Too many, too many…

  29. A degree in computer science does not—and is not supposed to—teach you to be a programmer.

    Programming is a trade, computer science is a field of study. You can be a great programmer and a poor computer scientist, or a great computer scientist and an awful programmer. It is important to understand the difference.

    If you want to be a programmer, learn Java. If you want to be a computer scientist, learn at least three almost completely different languages. e.g. (assembler, c, lisp, ruby, smalltalk)

  30. SESE (Single Entry Single Exit) is not law


    public int foo() {
        if (someCondition) {
            return 0;
        }
        return -1;
    }


    public int foo() {
        int returnValue = -1;
        if (someCondition) {
            returnValue = 0;
        }
        return returnValue;
    }

    My team and I have found that abiding by this all the time is actually counter-productive in many cases.

  31. C++ is one of the WORST programming languages – EVER.

    It has all of the hallmarks of something designed by committee – it does not do any given job well, and does some jobs (like OO) terribly. It has a “kitchen sink” desperation to it that just won’t go away.

    It is a horrible “first language” to learn to program with. You get no elegance, no assistance (from the language). Instead you have bear traps and mine fields (memory management, templates, etc.).

    It is not a good language to try to learn OO concepts. It behaves as “C with a class wrapper” instead of a proper OO language.

    I could go on, but will leave it at that for now. I have never liked programming in C++, and although I “cut my teeth” on FORTRAN, I totally loved programming in C. I still think C was one of the great “classic” languages. Something that C++ is certainly NOT, in my opinion.



    EDIT: To respond to the comments on teaching C++. You can teach C++ in two ways – either teaching it as C “on steroids” (start with variables, conditions, loops, etc), or teaching it as a pure “OO” language (start with classes, methods, etc). You can find teaching texts that use one or other of these approaches. I prefer the latter approach (OO first) as it does emphasize the capabilities of C++ as an OO language (which was the original design emphasis of C++). If you want to teach C++ “as C”, then I think you should teach C, not C++.

    But the problem with C++ as a first language in my experience is that the language is simply too BIG to teach in one semester, plus most “intro” texts try and cover everything. It is simply not possible to cover all the topics in a “first language” course. You have to at least split it into 2 semesters, and then it’s no longer “first language”, IMO.

    I do teach C++, but only as a “new language” – that is, you must be proficient in some prior “pure” language (not scripting or macros) before you can enroll in the course. C++ is a very fine “second language” to learn, IMO.


    ‘Nother Edit: (to Konrad)

    I do not at all agree that C++ “is superior in every way” to C. I spent years coding C programs for microcontrollers and other embedded applications. The C compilers for these devices are highly optimized, often producing code as good as hand-coded assembler. When you move to C++, you gain a tremendous overhead imposed by the compiler in order to manage language features you may not use. In embedded applications, you gain little by adding classes and such, IMO. What you need is tight, clean code. You can write it in C++, but then you’re really just writing C, and the C compilers are more optimized in these applications.

    I wrote a MIDI engine, first in C, later in C++ (at the vendor’s request) for an embedded controller (sound card). In the end, to meet the performance requirements (MIDI timings, etc.) we had to revert to pure C for all of the core code. We were able to use C++ for the high-level code, and having classes was very sweet – but we needed C to get the performance at the lower level. The C code was an order of magnitude faster than the C++ code, but hand-coded assembler was only slightly faster than the compiled C code. This was back in the early 1990s, just to place these events in context.


  32. A degree in Computer Science or other IT area DOES make you a more well rounded programmer

    I don’t care how many years of experience you have, how many blogs you’ve read, how many open source projects you’re involved in. A qualification (I’d recommend longer than 3 years) exposes you to a different way of thinking and gives you a great foundation.

    Just because you’ve written some better code than a guy with a BSc in Computer Science, does not mean you are better than him. What you have he can pick up in an instant which is not the case the other way around.

    Having a qualification shows your commitment, the fact that you would go above and beyond experience to make you a better developer. Developers which are good at what they do AND have a qualification can be very intimidating.

    I would not be surprised if this answer gets voted down.

    Also, once you have a qualification, you slowly stop comparing yourself to those with qualifications (my experience). You realize that it all doesn’t matter at the end, as long as you can work well together.

    Always act mercifully towards other developers, irrespective of qualifications.

  33. Lazy Programmers are the Best Programmers

    A lazy programmer most often finds ways to decrease the amount of time spent writing code (especially a lot of similar or repeating code). This often translates into tools and workflows that other developers in the company/team can benefit from.

    As the developer encounters similar projects he may create tools to bootstrap the development process (e.g. creating a DRM layer that works with the company’s database design paradigms).

    Furthermore, developers such as these often use some form of code generation. This means all bugs of the same type (for example, the code generator did not check for null parameters on all methods) can often be fixed by fixing the generator and not the 50+ instances of that bug.

    A lazy programmer may take a few more hours to get the first product out the door, but will save you months down the line.

  34. Don’t use inheritance unless you can explain why you need it.
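    One hedged illustration of the rule (the class names are invented): composition is the default, and inheritance is reserved for a genuine is-a relationship that you can actually explain.

```python
class Engine:
    def start(self):
        return "engine started"

# Composition: a Car *has* an Engine, so there is no reason to
# inherit from it.
class Car:
    def __init__(self):
        self.engine = Engine()

    def start(self):
        return self.engine.start()

# Inheritance with a reason you can state: an ElectricCar *is* a Car
# and is used polymorphically wherever a Car is expected.
class ElectricCar(Car):
    def start(self):
        return "silent " + super().start()
```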

  35. The world needs more GOTOs

    GOTOs are avoided religiously often with no reasoning beyond “my professor told me GOTOs are bad.” They have a purpose and would greatly simplify production code in many places.

    That said, they aren’t really necessary in 99% of the code you’ll ever write.

  36. I’ve been burned for broadcasting these opinions in public before, but here goes:

    Well-written code in dynamically typed languages follows static-typing conventions

    Having used Python, PHP, Perl, and a few other dynamically typed languages, I find that well-written code in these languages follows static typing conventions, for example:

    • It’s considered bad style to re-use a variable with different types (for example, it’s bad style to take a list variable and assign an int, then assign the variable a bool in the same method). Well-written code in dynamically typed languages doesn’t mix types.

    • A type-error in a statically typed language is still a type-error in a dynamically typed language.

    • Functions are generally designed to operate on a single datatype at a time, so that a function which accepts a parameter of type T can only sensibly be used with objects of type T or subclasses of T.

    • Functions designed to operate on many different datatypes are written in a way that constrains parameters to a well-defined interface. In general terms, if two objects of types A and B perform a similar function, but aren’t subclasses of one another, then they almost certainly implement the same interface.

    While dynamically typed languages certainly provide more than one way to crack a nut, most well-written, idiomatic code in these languages pays close attention to types just as rigorously as code written in statically typed languages.

    Dynamic typing does not reduce the amount of code programmers need to write

    When I point out how peculiar it is that so many static-typing conventions cross over into dynamic typing world, I usually add “so why use dynamically typed languages to begin with?”. The immediate response is something along the lines of being able to write more terse, expressive code, because dynamic typing allows programmers to omit type annotations and explicitly defined interfaces. However, I think the most popular statically typed languages, such as C#, Java, and Delphi, are bulky by design, not as a result of their type systems.

    I like to use languages with a real type system like OCaml, which is not only statically typed, but its type inference and structural typing allow programmers to omit most type annotations and interface definitions.

    The existence of the ML family of languages demonstrates that we can enjoy the benefits of static typing with all the brevity of writing in a dynamically typed language. I actually use OCaml’s REPL for ad hoc, throwaway scripts in exactly the same way everyone else uses Perl or Python as a scripting language.
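    A sketch of the first convention above in Python (names invented): the first function re-uses one name as a list, an int, and then a bool, while the well-written version keeps every name at a single type, just as a static signature would force it to.

```python
# Bad style: one name re-used with three different types.
def summarize_bad(items):
    result = [i * 2 for i in items]   # result is a list...
    result = len(result)              # ...now an int
    result = result > 3               # ...now a bool
    return result

# Well-written dynamic code: each name has exactly one type, and the
# parameter is constrained to an implicit interface (an iterable of
# numbers), mirroring static-typing discipline.
def summarize(items):
    doubled = [i * 2 for i in items]
    count = len(doubled)
    return count > 3
```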

  37. Programmers who spend all day answering questions on Stackoverflow are probably not doing the work they are being paid to do.

  38. Code layout does matter

    Maybe specifics of brace position should remain purely religious arguments – but it doesn’t mean that all layout styles are equal, or that there are no objective factors at all!

    The trouble is that the uber-rule for layout, namely: “be consistent”, sound as it is, is used as a crutch by many to never try to see if their default style can be improved on – and that, furthermore, it doesn’t even matter.

    A few years ago I was studying Speed Reading techniques, and some of the things I learned about how the eye takes in information in “fixations”, can most optimally scan pages, and the role of subconsciously picking up context, got me thinking about how this applied to code – and writing code with it in mind especially.

    It led me to a style that tended to be columnar in nature, with identifiers logically grouped and aligned where possible (in particular I became strict about having each method argument on its own line). However, rather than long columns of unchanging structure it’s actually beneficial to vary the structure in blocks, so that you end up with rectangular islands that the eye can take in in a single fixation – even if you don’t consciously read every character.

    The net result is that, once you get used to it (which typically takes 1-3 days) it becomes pleasing to the eye, easier and faster to comprehend, and is less taxing on the eyes and brain because it’s laid out in a way that makes it easier to take in.

    Almost without exception, everyone I have asked to try this style (including myself) initially said, “ugh I hate it!”, but after a day or two said, “I love it – I’m finding it hard not to go back and rewrite all my old stuff this way!”.

    I’ve been hoping to find the time to do more controlled experiments to collect together enough evidence to write a paper on, but as ever have been too busy with other things. However this seemed like a good opportunity to mention it to people interested in controversial techniques 🙂


    I finally got around to blogging about this (after many years parked in the “meaning to” phase): Part one, Part two, Part three.
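    A rough sketch of the columnar idea described above (the function and fields are invented): the first signature runs everything together, while the second puts each argument on its own line so one glance takes in the whole shape.

```python
# Free-form: the eye must parse the whole line to find one argument.
def make_user_freeform(name, email, age, active=True, role="member"):
    return {"name": name, "email": email, "age": age,
            "active": active, "role": role}

# Columnar: one argument per line, defaults aligned, forming a
# rectangular block the eye can take in at once.
def make_user(
        name,
        email,
        age,
        active = True,
        role   = "member",
):
    return {"name": name, "email": email, "age": age,
            "active": active, "role": role}
```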

  39. Opinion: explicit variable declaration is a great thing.

    I’ll never understand the “wisdom” of letting the developer waste costly time tracking down runtime errors caused by variable name typos instead of simply letting the compiler/interpreter catch them.

    Nobody’s ever given me an explanation better than “well it saves time since I don’t have to write ‘int i;’.” Uhhhhh… yeah, sure, but how much time does it take to track down a runtime error?
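    A deliberately buggy sketch of exactly that class of error (names invented): in a language with implicit declaration, a typo in an assignment silently creates a brand-new variable instead of failing at compile time, so the bug surfaces only as a wrong result at runtime.

```python
def total_visits(daily_counts):
    total = 0
    for count in daily_counts:
        # BUG (intentional, for illustration): "totl" is a typo, so each
        # iteration creates a new variable and "total" is never updated.
        # An explicit-declaration language would reject this outright.
        totl = total + count
    return total
```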

  40. Opinion: Never ever have different code between “debug” and “release” builds

    The main reason being that release code almost never gets tested. Better to have the same code running in test as it is in the wild.

  41. Pagination is never what the user wants

    If you start having the discussion about where to do pagination, in the database, in the business logic, on the client, etc. then you are asking the wrong question. If your app is giving back more data than the user needs, figure out a way for the user to narrow down what they need based on real criteria, not arbitrary sized chunks. And if the user really does want all those results, then give them all the results. Who are you helping by giving back 20 at a time? The server? Is that more important than your user?

    [EDIT: clarification, based on comments]

    As a real world example, let’s look at this Stack Overflow question. Let’s say I have a controversial programming opinion. Before I post, I’d like to see if there is already an answer that addresses the same opinion, so I can upvote it. The only option I have is to click through every page of answers.

    I would prefer one of these options:

    1. Allow me to search through the answers (a way for me to narrow down what I need based on real criteria).

    2. Allow me to see all the answers so I can use my browser’s “find” option (give me all the results).

    The same applies if I just want to find an answer I previously read, but can’t find anymore. I don’t know when it was posted or how many votes it has, so the sorting options don’t help. And even if I did, I still have to play a guessing game to find the right page of results. The fact that the answers are paginated and I can directly click into one of a dozen pages is no help at all.


  42. Respect the Single Responsibility Principle

    At first glance you might not think this would be controversial, but in my experience when I mention to another developer that they shouldn’t be doing everything in the page load method, they often push back… so, for the children, please quit building the “do everything” method we see all too often.

Tags: language-agnostic
