DZone is proud to announce our media partnership with PlatformCon 2024, one of the world's largest platform engineering events. PlatformCon runs from June 10-14, 2024, and is primarily a virtual event, but there will also be a large live event in London, as well as satellite events in other major cities. The conference brings together a vibrant community of the most influential practitioners in the platform engineering and DevOps space to discuss methodologies, recommendations, challenges, and everything in between to help you build the perfect platform.

Need help convincing your manager (or yourself) that this is an indispensable conference to attend? You've come to the right place! Below are three key reasons why you should attend PlatformCon 2024.

1. Platform Engineering Is a Hot Topic in 2024

So, what is platform engineering? In his most recent article on DZone, Mirco Hering describes a platform engineer as someone who plays three roles: the technical architect, the community enabler, and the product manager. This multifaceted approach streamlines development practices, takes the load off of software engineers, and keeps each team more in sync with its deployment cycles. In 2024, we've seen an increase in articles and conversations on DZone around platform engineering, how it relates to DevOps, and the top considerations when looking to optimize your development processes. Developers want to know more about this topic, and this conference is the perfect place to learn from the experts and connect with other like-minded individuals in the space.

2. Learn From Platform Engineering and DevOps Experts

Have you seen the lineup of speakers for PlatformCon this year?! Industry leaders will help you navigate this space and the key conference themes, with prominent names including Kelsey Hightower, Gregor Hohpe, Charity Majors, Manuel Pais, Nicki Watt, Brian Finster, Mallory Haigh, and more.
At DZone, we value peer-to-peer knowledge sharing, and we find that the best way for developers to learn about new tech initiatives, methodologies, and approaches to existing practices is through the experiences of their peers. And this is exactly what PlatformCon is all about! The conference also gives attendees unparalleled access to the speakers via Slack channels. What better way to navigate the evolving world of platform engineering than to learn from the experts who are leading the way?

3. Embark on a Custom DevOps + Platform Engineering Journey

As we mentioned earlier, platform engineering is multifaceted, and so are its approaches and practices. The five conference tracks highlighted below are intended to let you tailor your experience and your platform engineering journey.

- Stories: Learn from practitioners who are building platforms at their organizations, and pick up adoption tips of your own.
- Culture: Focuses on the relationships between all of the developers and teams involved in platform engineering, from DevOps and site reliability engineers to software architects and more.
- Toolbox: Focuses on the technical components of developer platforms and dives into the tools and technologies developers use to solve specific problems. Conversations will focus on IaC, GitOps, Kubernetes, and more.
- Impact: All about the business side of platform engineering. It dives into the key metrics that C-suite executives measure and offers advice on how to get leadership buy-in to build a developer platform.
- Blueprint: Gives you the foundation to build your own developer platform, covering important reference architectures and key design considerations.

Register Today to Perfect Your Platform

Now that we've shared multiple reasons why you should attend PlatformCon 2024, we'll leave you with one final motivation: it's free to register and attend!
This conference is the perfect opportunity to connect with like-minded people in the developer space, learn more about platform engineering, and help determine the best next steps in your developer platform journey. Learn more about how to register here. See you there!
Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, Enterprise AI: The Emerging Landscape of Knowledge Engineering. Is AI taking our jobs? Let's hope not, because we don't want devs taking other jobs. They prefer to be behind computers.
I recently read an article about the worst kind of programmer. I agree with the basic idea, but I wanted to add my own thoughts. Over time, I have seen that developers seem invested in learning new things for the sake of new things, rather than getting better at existing approaches. Programming is like everything else: new is not always better.

I have a Honda CRV that is not as easy to use as some cars I owned before touch interfaces became popular. The touch screen sometimes acts like I'm pressing various places on the screen when I'm not, making beeping noises and flipping screens randomly, and I have to stop and turn the car off and on to make it stop. It has a config screen with every option disabled. It has bizarre logic about locking and unlocking the doors that I have never fully figured out. I often wonder if the devs who make car software have a driver's license.

If I asked 100 programmers the following question, chances are very few of them, if any, could answer it without a web search: Bob just completed programming school and has heard about MVC, but he is unsure how to tell which code is the model, which is the view, and which is the controller. How would you explain the MVC division of code to Bob? It's not a genius question; it's really very basic stuff. Here are some other good questions about other very basic stuff:

1. Why Did Developers Decide in REST That POST Is Create and PUT Is Update?

The HTTP RFCs have always stated that PUT creates or updates a resource on the server, such that a GET on that resource returns what was PUT, and that POST is basically a grab bag for whatever does not fit the other verbs. The RFCs used to say that a POST URL is indicative of an operation; now they just say POST is whatever you say it is. Yet developers often talk about the REST usage of POST and PUT as if Jesus Christ himself dictated it, as if there were no argument about it.
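Read literally, the RFC semantics are easy to sketch. Here is a minimal, hypothetical in-memory illustration (no real HTTP server; the class and routes are invented for this post) of PUT as idempotent create-or-replace, where a GET returns exactly what was PUT:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical in-memory "server" modeling the RFC reading of PUT:
// the client supplies the resource identifier, and PUT either creates
// or replaces that resource. A GET on the same key returns exactly
// what was PUT.
public class PutSemantics {
    private final Map<String, String> store = new HashMap<>();

    // PUT /invoices/{id} -- create or replace; repeating the call with
    // the same body leaves the server in the same state (idempotent).
    public void put(String id, String body) {
        store.put(id, body);
    }

    // GET /invoices/{id} -- returns what was PUT.
    public String get(String id) {
        return store.get(id);
    }

    public static void main(String[] args) {
        PutSemantics server = new PutSemantics();
        server.put("42", "{\"total\": 100}");  // first PUT creates
        server.put("42", "{\"total\": 120}");  // second PUT updates, same verb
        System.out.println(server.get("42"));  // prints {"total": 120}
    }
}
```

Under this reading, POST stays free for the non-CRUD operations (say, POST /invoices/42/email or POST /search/invoices), which is exactly the division argued for below.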
I have never seen any legitimate reason why PUT cannot mean create-or-update, as the RFC says, with POST reserved for the non-CRUD stuff. Any real, complex system driven by customer demand for features is highly likely to have some operations that are not CRUD: integrations with other systems, calculations, searches (e.g., a filter box that shows matches as you type, or finding results for a search based on input fields), and so on. By reserving POST for these kinds of operations, you can immediately identify anything that isn't CRUD. Otherwise, you wind up with two usages of POST: mostly for create, but here and there for other stuff.

2. Why Do Java Developers Insist on Spring and JPA for Absolutely Every Java Project Without Question?

Arguably, a microservice project should be, well, you know, micro. Micro is defined as an adjective that means extremely small. When Spring and JPA take up over 200MB of memory and take 10 seconds to fire up a near-empty project that barely writes one row to a table, I'm not seeing the micro here. Call me crazy, but maybe micro should apply to the whole approach, not just the line count: the amount of memory, the amount of handwritten code, the amount of time a new hire takes to understand how the code works, etc. You don't have to be a freak about it, trying 10 languages to see which uses the least RAM; just be reasonable about it.

Spring and JPA were designed for monolithic development, where you might have problems like the following:

- A constructor is referred to 100 times in the code. Adding a new field requires modifying all 100 constructor calls to provide the new field, but only one of those calls actually uses it. So dependency injection is useful.
- There are thousands of tables, with tens of thousands of queries, that need to be supported in multiple databases (e.g., Oracle and MSSQL), with use cases like multi-tenancy and/or sharding.
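The constructor fan-out problem described above is the honest case for dependency injection, and it doesn't even require a framework. A minimal sketch, with classes invented purely for illustration:

```java
// Hypothetical classes for illustration; no framework involved.
// ReportService just grew a new constructor parameter ("region").
class ReportService {
    private final String region;
    ReportService(String region) { this.region = region; }
    String report() { return "report for " + region; }
}

// Billing does not construct ReportService itself; it receives one.
// So when ReportService's constructor changes, Billing is untouched.
class Billing {
    private final ReportService reports;
    Billing(ReportService reports) { this.reports = reports; }
    String monthlyStatement() { return reports.report(); }
}

public class Wiring {
    public static void main(String[] args) {
        // Exactly one place wires the object graph, so adding a field
        // means changing one call site instead of 100.
        ReportService reports = new ReportService("EU");
        Billing billing = new Billing(reports);
        System.out.println(billing.monthlyStatement()); // prints: report for EU
    }
}
```

The point is that injection, the pattern, is cheap; it's the 200MB framework bootstrapping around it that a micro project may not need.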
There comes a point where it is just too much to do any other way, and JPA is very helpful.

3. Why Does Every Web App Require Heavy Amounts of JS Code?

When I started in this business, we used JSP (Java Server Pages), a type of SSR (server-side rendering): basically, an HTML templating system that fills in slots with values that usually come from a database. It means that when users click a button, the whole page reloads, which these days is fast enough to be a brief sort of blink. The bank I have used since about 2009 still uses some sort of SSR. As a customer, I don't care that it's a bit blinky. It responds in about a second after each click, and I'm only going to do maybe 12 page loads in a session before logging out. I can't find any complaint on the web about it.

I saw a project "upgrade" from JSP to Angular. They had a lot of uncommented JSP code that nobody really knew how it worked, which became Angular code that nobody really knew how it worked. Some people would add new business logic to the Angular code, some would add it to the Java code, and nobody leading the project thought it was a good idea to make a decision about this. Nobody ever explained why this upgrade was of any benefit or what it would accomplish. The new features added afterward were no more or less complex than what was there before, so continuing to use JSP would not have posed any problems. It appeared to be an upgrade for the sake of an upgrade.

4. Why Is Everything New Automatically So Much Better Than Older Approaches?

What is wrong with the tools used 10 or 15 years ago? After all, everything else works this way. Sure, we have cars with touch screens now, but they still use gas, tires, cloth or leather seats, a glove box, a steering wheel, glass, etc. The parts you touch daily to drive are basically the same as decades ago, with a few exceptions like the touch screen and electric motors. Why can't we just use a simple way of mapping SQL tables to objects, like a code generator?
Why can't we still use HTML templating systems for line-of-business apps that are mostly CRUD? Why can't we use approaches that are only as complex as the system at hand requires? I haven't seen any real improvements in newer languages or tooling that are significantly better in real-world usage, with a few exceptions like containers.

5. Do You Think Other Industries Work This Way?

I can tell you right now that if engineers built things the way programmers do, I would never get in a car, walk under a bridge, or board an airplane. If doctors worked that way, I'd be mortally afraid at every visit. So why do we do things this way? Is this really the best we can do?

I worked with a guy who asked, shortly after being hired, "Why the f do we have a monorepo?" When I asked what was wrong with a monorepo, he was unable to give any answer, but he convinced management, with almighty passion, that this had to change pronto and that all microservice projects must be structured as separate repos per service. Not sure if it was him or someone else, but somehow it was also determined that each project must be deployed in its own container. These decisions were detrimental to the project in the following ways:

- One project was a definition of all the objects to be sent over the wire. If a service A object is updated to require a new field, there is no compile error anywhere to show the need to update constructor calls. If service B calls A to create objects, and nobody thinks of this, then probably only service A is updated to provide the new required field, and a subtle, hard-to-find bug exists that might take a while for anyone to even notice.
- Your average corporate dev box can handle maybe 15 containers before flopping over and gasping for air. So we quickly lost local development, in one of those unrecoverable ways where the team would never get it back.
- Every new dev would have to check out dozens of repos.
- No dependency information between repos was tracked anywhere, making it unknowable which subset of services had to be run to stand up service X and work on it. Combined with the inability to run all repos locally, this yields two equally sucktastic options for working on service X: use trial and error to figure out which subset stands up X and run it locally, or deploy every code change to a dev server.

When Alex talks about programmers using hugely complex solutions of the sort he describes, it sounds to me like devs who basically jerk off to everything new and cool. This is very common in this business; every team has people like that on it. That isn't necessarily a big problem by itself, but when combined with the inability or unwillingness to ensure other devs are fully capable of maintaining the system, and possibly the arrogance of "everything I say is best" and/or "only I can maintain this system," that's the killer combination that does far more harm than good.
I remember back when mobile devices started to gain momentum and popularity. While I was excited about a new way to stay in touch with friends and family, I was far less excited about the limits placed on call minutes and the number of text messages I could use before being forced to pay more. Believe it or not, the #646 (#MIN) and #674 (#MSG) contact entries were still lingering in my address book until a recent clean-up effort. At one time, those numbers provided a handy mechanism to determine how close I was to hitting the monthly limits enforced by my service provider.

Along very similar lines, I recently found myself in an interesting position as a software engineer: figuring out how to log less in order to avoid exceeding the log ingestion limits set by our observability platform provider. I began to wonder how much longer this paradigm was going to last.

The Toil of Evaluating Logs for Ingestion

I remember the first time my project team was contacted because our log ingestion was exceeding the threshold expected by our observability partner. A collection of new RESTful services had recently been deployed to replace an aging monolith. From a supportability perspective, our team had made a conscious effort to provide the production support team with a great deal of logging, in case the services did not perform as expected. There were more edge cases than our regression tests covered, so we expected alternative flows to produce results that would require additional debugging if they did not process as expected. Like most projects, this one had aggressive deadlines that could not be missed. When we were instructed to "log less," an unplanned effort became our priority. The problem was, we weren't 100% certain how best to proceed.
We didn't know which components were in a better state of validation (and could have their logs reduced), and we weren't exactly sure how much logging we would need to remove to stop exceeding the threshold. To our team, this effort was a great example of what has become known as toil:

"Toil is the kind of work that tends to be manual, repetitive, automatable, tactical, devoid of enduring value, and that scales linearly as a service grows." – Eric Harvieux (Google Site Reliability Engineering)

Every minute our team spent reducing the amount of logs ingested into the observability platform came at the expense of delivering features and functionality for our services. After all, this was the first of many planned releases.

Seeking a "Log Whatever You Feel Necessary" Approach

What our team really needed was a scenario where our observability partner was fully invested in the success of our project. In this case, that would translate to a "log whatever you feel necessary" approach. Those who have walked this path before are likely thinking, "This is where JV has finally lost his mind." Stay with me here, because I think I am on to something big.

Unfortunately, the current expectation is that the observability platform can place limits on the amount of logs that can be ingested. The sad part of this approach is that, in doing so, observability platforms put their needs ahead of their customers, who are relying on and paying for their services. This is really no different from the time when I relied on the #MIN and #MSG contacts in my phone to make sure I lived within the limits placed on me by my mobile service provider. Eventually, my mobile carrier removed those limits, allowing me to use their services in a manner that made me successful. The bottom line here is that consumers leveraging observability platforms should be able to ingest whatever they feel is important to support their customers, products, and services.
It's up to the observability platforms to accommodate the associated challenges as customers desire to ingest more, just as we engineer our own services in a demand-driven world. I cannot imagine telling my customer, "Sorry, but you've given us too much to process this month."

Pay for Your Demand, Not Ingestion

The better approach here is to pay for insights rather than limit the actual log ingestion. After all, this is 2024, a time when we all should be used to handling massive quantities of data. The "pay for your demand, not ingestion" concept has been considered a "miss" in the observability industry... until recently, when I read that Sumo Logic has disrupted the DevSecOps world by removing limits on log ingestion. This market-disrupting approach embraces the concept of "log whatever you feel necessary," with a north star focused on eliminating silos of log data that were either disabled or skipped due to ingestion thresholds. Once the data is ingested, AI/ML algorithms help identify and diagnose issues, even before they surface as incidents and service interruptions. Sumo Logic is taking on the burden of supporting additional data because they realize that customers are willing to pay a fair price for the insights gained from this approach.

So what does this new strategy mean for observability cost expectations? It can be difficult to pinpoint exactly, but as an example, if your small-to-medium organization produces an average of 25 MB of log data for ingestion per hour, this could translate into an immediate 10-20% savings (using Sumo Logic's price estimator) on your observability bill. With this approach, every single log is available in a custom-built platform that scales along with an entity's observability growth. As a result, AI/ML features can draw upon this information instantly to help diagnose problems, even before they surface with consumers.
When I think about the project I mentioned above, I truly believe both my team and the production support team would have been able to detect anomalies faster than with what we were forced to implement. Instead, we had to react to unexpected incidents that impacted the customer's experience.

Conclusion

I was able to delete the #MIN and #MSG entries from my address book because my mobile provider eliminated those limits, providing a better experience for me, their customer. My readers may recall that I have been focused on the following mission statement, which I feel can apply to any IT professional:

"Focus your time on delivering features/functionality that extends the value of your intellectual property. Leverage frameworks, products, and services for everything else." – J. Vester

In 2023, I also started thinking hard about toil and making a conscious effort to look for ways to avoid or eliminate this annoying productivity killer. The concept of "zero-dollar ingest" has disrupted the observability market by taking a page from the mobile service providers' playbook. Eliminating log ingestion thresholds puts customers in a better position to be successful with their own customers, products, and services (learn more about Sumo Logic's project here). From my perspective, this not only adheres to my mission statement but also provides a toil-free solution to the problem of log ingestion, data volume, and scale.

Have a really great day!
Navigating the intricate world of software development is not merely a solitary pursuit; it's a collaborative journey where seasoned engineers play a pivotal role as mentors. Drawing from my personal experience in the industry, which spans over a decade, I embark on a thoughtful exploration of effective mentorship in software development. In this post, I'll delve into the profound significance of mentorship, share insightful anecdotes from my own journey, and offer actionable tips for senior engineers eager to become impactful mentors.

The Crucial Role of Mentorship in Software Development

Mentorship in software development is akin to a dynamic dance between experienced professionals and those at the inception of their careers. It goes beyond traditional hierarchical structures, serving as a conduit for the exchange of knowledge, experiences, and guidance. The landscape of software development, with its ever-evolving technologies and methodologies, makes effective mentorship indispensable.

1. Knowledge Transfer

Mentorship acts as a bridge for the transfer of tacit knowledge, the kind that textbooks and online courses can't encapsulate. The insights, best practices, and practical wisdom that mentors impart significantly accelerate the learning curve for junior engineers.

2. Career Guidance

Beyond technical skills, mentorship extends to offering invaluable career guidance. Navigating the complex terrain of the tech industry demands insights into various career paths, industry trends, and strategies for professional development, areas where a mentor's compass proves invaluable.

3. Personal Development

Mentorship is not confined to the professional realm; it encompasses personal development. Mentors often assume the role of career coaches, helping mentees cultivate essential soft skills, navigate workplace dynamics, and foster a growth mindset.
Journeying Through Mentorship: Insights From Personal Experience

Having transitioned from junior-level management to senior management over my 12+ years in the software development industry, mentorship has been an intrinsic part of my professional narrative. Witnessing the growth of junior engineers, celebrating their achievements, and understanding how mentorship contributes to the collective advancement of the tech community has been a source of profound satisfaction.

1. Fostering a Growth Mindset

A key lesson from my mentoring experiences is the significance of cultivating a growth mindset. Encouraging junior engineers to view challenges as opportunities for learning, providing constructive feedback, and celebrating their achievements creates a positive learning environment.

2. Tailoring Communication Styles

Effective mentorship requires the ability to tailor communication styles to individual needs. Recognizing that some engineers thrive on detailed technical explanations while others benefit from practical examples is crucial for effective knowledge transfer.

3. Nurturing Confidence

Building confidence in junior engineers is a cornerstone of effective mentorship. Establishing an environment where they feel safe to ask questions, make mistakes, and iterate on their work instills confidence. As a mentor, instilling belief in their abilities is as crucial as imparting technical knowledge.

4. Setting Realistic Goals

Goal-setting is integral to mentorship. Establishing realistic short-term and long-term goals helps junior engineers track their progress and provides a roadmap for their professional development. These goals should align with their interests and aspirations.

5. Encouraging Autonomy

While mentorship involves guidance, it is equally crucial to encourage autonomy. Empowering junior engineers to take ownership of their projects, make decisions, and learn from the outcomes instills a sense of responsibility and independence.
Practical Tips for Effective Mentorship in Software Development

Now that we've explored the profound significance of mentorship and gleaned insights from personal experience, let's distill these lessons into actionable tips for senior engineers aspiring to be effective mentors in the dynamic realm of software development.

1. Establish Clear Communication Channels

Foster open and transparent communication channels. Regular check-ins, one-on-one meetings, and feedback sessions provide a structured platform for mentorship.

2. Understand Individual Learning Styles

Recognize that each mentee has a unique learning style. Tailor your approach to match their preferences, whether they thrive on hands-on coding sessions or prefer conceptual discussions.

3. Share Personal Experiences

Personal anecdotes can be powerful teaching tools. Share your experiences, including challenges faced and lessons learned. This creates a relatable context for mentees to draw insights from.

4. Encourage Continuous Learning

Foster a culture of continuous learning. Introduce mentees to relevant resources, suggest books, online courses, or workshops, and encourage participation in industry events.

5. Provide Constructive Feedback

Constructive feedback is instrumental in professional growth. Frame feedback positively, focusing on areas of improvement while acknowledging accomplishments. This approach fosters a constructive learning environment.

6. Set Clear Goals and Expectations

Define clear goals and expectations for the mentorship. Whether it's specific technical skills, project milestones, or career aspirations, having a roadmap provides direction for both mentor and mentee.

7. Create a Safe Space for Questions

Ensure mentees feel comfortable asking questions and seeking clarification. Creating a safe space for open dialogue promotes a culture of continuous learning.

8. Encourage Networking and Collaboration

Facilitate opportunities for mentees to network with professionals in the industry.
Encouraging collaboration on projects and fostering a sense of community contributes to a broader understanding of the tech landscape.

9. Be Adaptable

Be adaptable in your mentoring approach. Recognize that the needs and goals of mentees may evolve over time. Being flexible ensures mentorship remains relevant to their changing circumstances.

10. Lead by Example

As a mentor, lead by example. Demonstrate the qualities and work ethic you encourage in your mentees. Your actions will serve as a model for their own professional conduct.

Conclusion

Effective mentorship in software development is an art that demands a blend of technical expertise, interpersonal skills, and a genuine passion for guiding the next generation of engineers. As a senior engineer, embracing the role of a mentor is not just a responsibility but an opportunity to contribute to the collective growth of the tech community. By sharing experiences, fostering a growth mindset, and providing personalized guidance, senior engineers can leave an indelible mark on the careers of those they mentor. The legacy of effective mentorship extends beyond individual achievements, influencing the trajectory of the entire software development landscape. In the dynamic realm of technology, mentorship stands as a cornerstone of progress and innovation.
In the course of talking about job hunting with friends, colleagues, and randos on Slack and elsewhere, I end up talking about resumes. A lot. There is, in my (not so) humble opinion, a sizeable misunderstanding about what resumes are, what they do, how they should look, the effort one should (or shouldn't) put into creating them, and more. Given the current period of churn in the tech industry and the resulting uptick in the frequency with which I'm having these conversations, I decided to commit what's become a standard part of my "so you're looking for a new job?" spiel to paper (or at least to electrons).

So... what, in my (again, not so humble) opinion, are resumes meant to do? Contrary to popular belief, common use, and what you may have been told in school at some point, a resume is not a loving, insightful, and/or detailed retrospective of your work history. It is not meant to stand as de facto proof of your skills. It is not a biography of your work. You create (and send) resumes because it's a required step in the application process, not because it's particularly convincing. In the long run, a resume does very little to make the case for hiring you. After all, people could (and do) write literally ANYTHING on their resume, and there's no way to validate it until the hiring manager... wait for it now: sits down and talks with the candidate!

This brings me to my main point: a resume serves EXACTLY one purpose: to entice the recipient to call you for an interview. If you could send a blank page that said "will bring cookies and beer," and it would result in a phone call, you should do that. (Do not do this. It doesn't work. Don't ask me how I know.)

Therefore, your primary goals, which will inform both the format and the content of the information you share, are to:

1. Get past the automated HR filters every company uses these days so that a real human sees your resume.
2. Entice that human to set up an initial call, where the REAL interviewing will begin.

Let's talk about item #1 first. And I'll start with a semi-well-known "resume hack":

White Fonting

The idea behind the buzzword is simple: you take keywords and/or the job description itself and include it in your resume, using the smallest font size possible and coloring the text white, rendering it invisible to the human reading the page, but still registering with the automated systems that ingest and auto-scan the resume. The required keywords are detected, and the resume is passed to the next stage.

Sometimes. Sometimes the text actually messes up the experience section on the application, causing the resume to be rejected when it might otherwise have passed. In other cases, a human sees what's happened and rejects the resume because it's perceived as "cheating." (My personal feeling is that using software to auto-filter resumes is cheating, and cheating a cheat is basically Kobayashi Maru-ing the thing, and I'm 100% team Kirk on this.) That said, it's clear (to me, at least) that white fonting is neither reliable nor guaranteed, but it does work in some cases. Use it with caution.

Sometimes, It Is Who You Know

As I've already explored, there is demonstrable value in having someone on the inside to help shepherd your resume along the application journey. Internal referrals often give your resume an automatic pass to the first real (hiring manager) interview stage. Even when they don't, they at least increase the likelihood you'll get feedback if you don't make the cut. If you don't know anyone at the company in question, it's time to trot out your LinkedIn skills to find people who know the people you need to know. Get introduced. Offer to buy someone in your targeted group/department/specialty a coffee and pick their brain about the company and the work. Don't fish for a job; reach out for a conversation.
Once you’ve met and allowed them to understand who you are and what you are about, THEN you can express interest and ask if that person would be willing to give you a referral. You Get What You Give Be prepared to customize your resume. Highlight (or, in some cases, re-write) the resume to accentuate the needs expressed in the job description and de-emphasize elements that are less important. Does that mean more work for each job application? Yes. Should you do it for every single job? OF COURSE NOT. You’d do this for the high-value opportunities, not the “this came up as I was scrolling LinkedIn” jobs. But remember that the effort you put into an application very often reflects the value of the outcome. Not always, but often. For the second part, I am going to emphasize that in every place you can quantify a result, you need to do so. It’s Not What You Said; It’s How You Said It Again, a resume is not just a list of “I did this sh…tuff.” It should convince the reader that you are able to produce results FOR THEM. By measurably quantifying the effects and impacts of your past work, you implicitly state your ability to do the same for them. Consider the difference: “Cultivated a healthy work-life balance culture for both in-office and remote employees by creating groups and events for in and out-of-work activities” vs. “Improved employee satisfaction stats by 5% YoY by creating groups and events for in and out-of-work activities, with an average of 65% attendance over 2 years.” Or: “Increased product visibility by integrating a new records management system to become a more competitive offering for new clients” vs. “$5k revenue increase MoM for the first 3 months and +30% adoption rate by integrating a new RMS, making the offering more visible, competitive, and valuable.” Obviously, you might not always have numbers for the things you’ve done.
This suggests a few things to me: Start making a habit of noting these types of results – not just because it looks good on a resume, but because the business you work for now is ALSO interested in these types of results. Challenge managers to provide these types of outcome statistics and question work when no perceivable value can be attained. As you are writing or updating your current resume, note the items where you can offer ranges, where you can stand by terms like “significant increase” or “measurable impact,” and which items simply defy quantification. For those items that have no measurable outcome, consider why you’re including them. Again, “At my old job, I did stuff” is not a compelling argument to hire you. Sure, you need to show that you have experience with a specific language or technology. But “I know how to do things” or “I can learn how to do things” can be communicated in other ways besides taking up valuable inches of resume space as a laundry list of tasks. Separate and Elevate Consider separating the work result from the work history. Have one section for “Places I Worked” that simply lists dates, company names, and job titles. Then disassociate the tasks from the job by grouping them based on technology, result, or some other category. Nobody really cares that you did these things at company X, but those things at company Y. It’s more interesting to see all the ways you’ve used Java to create results or your various improvements to team productivity. By grouping the work you’ve done by category, you create a compelling picture of your skills. The (Mostly) Un-Necessary Summary You need to trust that the interviewer(s) will reach out to you and ask for specifics, or background, or context. In fact, a well-designed resume will cause the reader to want to do exactly that. As always, I hope this helps. If you have additional questions, contradictions, or corrections, leave them in the comments.
Do you know what the global edtech and smart classroom market size was in 2022? No idea? It was somewhere around USD 115.80 billion, and the market is estimated to grow from USD 133.55 billion in 2023 to USD 433.17 billion in 2030. Yes, you read that right! And among all the contributors to these numbers, Physics Wallah has been a well-known name. But did you know that when this edtech brand launched its app for the first time, it crashed because a large number of authorized users logged into the application simultaneously? In short, the app failed to meet the growing demand of users due to its poor performance, scalability, and resilience, resulting in a huge loss to the business. If you don’t want to experience the same, you must build scalable digital products. Only then will you be able to develop a powerful digital product that can survive and thrive in the market. But the question pops up: what are scalable products, and why should you build them? To get a good sense of that, read on. What Do You Mean By Scalable Products? Product scalability generally refers to the ability of a software application to handle surging demand, complexity, or usage without degrading its quality, performance, or functionality. It is one of the key factors that determine the success and growth of any digital application out there. Not all digital products scale well: a survey by a business-focused media organization found that only 8% of companies succeed in scaling during their operation. Can You Give Some Examples of Scalable Digital Products? A case in point: John Young showcased the RS/6000 SP operating system as a scalable system in his book “Exploring IBM’s New-Age Mainframes.” But why? You might want to know.
Well, it is because of the ability of the RS/6000 SP to retain its performance levels even as extra processors are added. Another example of scalable products is scalable fonts. Yes, you heard that right! In the printing domain, scalable fonts can be resized effortlessly depending on the requirement without losing their quality. With that covered, it is time to address an important question: Why Should You Focus On Product Scalability? Honestly speaking, if you wish to grow and fulfill the increasing demands of your users while staying competitive in the market, it makes sense to make your product scalable. Let’s delve into the details: Meet Growing Demand Horizontal scaling lets you add more devices or servers to your existing infrastructure in order to cope with increased traffic and distribute the load evenly. On the other hand, vertical scaling concentrates on upgrading current hardware for better performance and capacity. Improve Performance And Reliability By distributing the workload across various servers or implementing load balancing techniques, it becomes much easier to: Minimize latency Manage peak loads efficiently Guarantee a seamless user experience Perform Effective Capacity Planning Some industry experts say that when you have scalable digital products at your disposal, the chances are high that you can precisely anticipate resource requirements and allocate them accordingly, ensuring excellent performance while steering clear of overprovisioning or underutilizing resources. Thus, if you want to build a mobile app that can grow with your business down the road, consider talking to a reputable IT service provider that delivers app development services. When Should You Build Scalable Products?
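The load-balancing idea described above can be sketched in a few lines. This is a minimal round-robin dispatcher, not a production load balancer; the server names are hypothetical placeholders.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Distribute incoming requests evenly across a pool of servers."""

    def __init__(self, servers):
        # cycle() endlessly repeats the server list in order.
        self._pool = cycle(servers)

    def next_server(self):
        # Each call hands back the next server in rotation,
        # spreading load evenly across the pool.
        return next(self._pool)

balancer = RoundRobinBalancer(["app-1", "app-2", "app-3"])
assignments = [balancer.next_server() for _ in range(6)]
print(assignments)  # ['app-1', 'app-2', 'app-3', 'app-1', 'app-2', 'app-3']
```

Real load balancers (NGINX, HAProxy, cloud-managed ones) add health checks, weights, and session affinity on top of this basic rotation idea.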
As a responsible app owner, you need to sell your offerings and live up to the expectations of your consumers in terms of features and performance. Although product scalability may not have been your priority years ago, it is now essential to scale your deliverables to support their growth. One of the clear signs of the need to scale your product is when its performance starts deteriorating. For instance, a software application might show degraded performance when a large number of users begin using it at the same time. That’s when you need to take a look at key metrics, such as: Memory utilization CPU usage Disk I/O Network I/O If any of them needs further improvement to handle the increased load, you must address it. What else? Some industry specialists suggest analyzing such indicators proactively to figure out when to proceed with product scalability and when not to. By doing this, you won’t have to wait for your app’s performance to drop significantly. Instead, you will be able to optimize it ahead of time. Beyond this, you can ask yourself a few important questions: Do you have business value to deliver to a massive consumer base? If so, there is plenty of scope for your business to grow, and you should invest adequately in your product’s scalability. Have you built an outstanding product already? Have you received feedback from your target market confirming you have a superb product? If the answer is yes, do not hesitate to expand your product. Please remember that building scalable apps is not everyone’s cup of tea. If you can’t build a scalable application properly, you can’t expect to achieve aggressive growth and generate considerable revenue in the future. Perhaps this is the reason a popular research firm found that less than 0.01% of all consumer mobile apps became financially successful in 2018.
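The metric check described above can be reduced to a simple rule: compare each utilization figure against a limit and flag whatever is over. The thresholds below are illustrative assumptions, not recommendations; real limits depend entirely on the workload.

```python
# Hypothetical utilization thresholds (fractions of capacity) above which
# a scale-up should be considered. These numbers are made up for illustration.
THRESHOLDS = {
    "memory": 0.80,
    "cpu": 0.75,
    "disk_io": 0.70,
    "network_io": 0.70,
}

def metrics_needing_attention(current):
    """Return, sorted by name, the metrics whose utilization exceeds its threshold."""
    return sorted(
        name for name, limit in THRESHOLDS.items()
        if current.get(name, 0.0) > limit
    )

sample = {"memory": 0.91, "cpu": 0.40, "disk_io": 0.72, "network_io": 0.10}
print(metrics_needing_attention(sample))  # ['disk_io', 'memory']
```

In practice, you would feed this kind of rule with figures from a monitoring stack rather than a hand-built dict, and alert or auto-scale on sustained breaches instead of single samples.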
Finally, it is time to discuss the most awaited topic in this write-up, i.e., How To Carry Out the Entire Product Scalability Task Make the Most of Customer Feedback and Data Collection Quantitative analytics and user research are two effective methods to collect buyer feedback that can inform a successful expansion plan. By paying attention to their responses, you can learn which functionalities to add to your scalable products next and then optimize and retest. With such a plan at your disposal, you can rest assured that your application will keep evolving and adapting to market demands in the time to come. For example, it helps to gather actual data from users as they use your software during the MVP stage. Keeping track of user behavior, user journeys, drop-offs, and interactions at every touchpoint can help you recognize functionalities that can be improved, added, or removed from scalable digital products when moving from the MVP stage to the complete launch. Be mindful that it is necessary to go through all the positive and negative comments and find a way to address them. Wondering why? Well, it is unsatisfied customers who will highlight the issues in your app that you can sort out when shifting from the MVP stage to full-fledged scalable product development. And in case you need technical assistance to create a scalable application, you can always count on a trusted IT service provider with expertise in software development services. Choose the Correct Database Engine The next thing you must do is pick the most appropriate database engine and create a sound plan or model to ensure your product scales without hassle. Then, when demand grows, you will be prepared to process a sizeable number of transactions easily. Keep in mind that replicating your database could be the perfect idea when your web app scales.
And just to let you know, this replication process entails copying the database so that several replicas exist, each of which can handle a subset of the CRUD (Create, Read, Update, Delete) operations. It is an accepted practice in the industry to separate the read and write operations of the database when creating scalable digital products. Put APIs First You might not believe this, but the truth is that the advent of API-first software architecture has been one of the most useful innovations in the field of software design. Earlier, APIs were built as an afterthought to existing applications and were often complicated, too. Nowadays, an API-first design treats the API as a first-class concern from the beginning, allowing teams to quickly build experiences and products compatible with several endpoints. The API-first methodology insists on using Application Programming Interfaces to connect all platform parts and transfer crucial data between them when making scalable products. Call on Continuous Integration and Continuous Delivery Just as Agile boosts productivity, CI/CD practices take it to an even better level, ensuring rapid releases of scalable digital products with fewer issues. The key is the large-scale adoption of continuous automation throughout all stages of development, which results in more streamlined and consistent development of code. This is an imperative process when looking to integrate new functionalities or execute product scalability tasks. You might have seen many software developers working in parallel these days on web app development projects, which takes productivity to the next level but also boosts the likelihood of integration bugs. When this happens, you could spend an enormous amount of time bringing together the different pieces of the program built by different developers. However, the good news is that this problem is largely solved with the help of CI/CD tools.
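The read/write separation mentioned earlier can be sketched as a tiny router: writes go to the primary, while reads rotate across replicas. This is a simplified illustration under the assumption of one primary and N read replicas; the connection names are placeholders, and real systems must also deal with replication lag and failover.

```python
import itertools

class ReadWriteRouter:
    """Route writes to the primary and spread reads across replicas."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self._replicas = itertools.cycle(replicas)

    def route(self, operation):
        # Create/Update/Delete mutate state, so they must hit the primary;
        # reads can be served by any replica copy.
        if operation in ("create", "update", "delete"):
            return self.primary
        return next(self._replicas)

router = ReadWriteRouter("primary-db", ["replica-1", "replica-2"])
print(router.route("update"))  # primary-db
print(router.route("read"))    # replica-1
print(router.route("read"))    # replica-2
```

Many ORMs and proxies (e.g., database routers in web frameworks, or middleware like connection poolers) implement this same split at the infrastructure level so application code stays unaware of it.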
Now that you have wrapped your mind around the right method to scale a digital product, it is time to get down to the last but most important topic from our point of view, i.e., What Is the Cost of the Entire Digital Transformation Project? Just so you know, each transformation project is unique. The total cost to develop fully scalable digital products will vary depending on the following: Your organization Its industry The type of transformation required A wide variety of other factors According to a 2020 report from a premier online knowledge resource focused on CXOs, the average digital transformation project can set you back $27.5 million. If you want a sense of the expenditure required for your specific product scalability task, it is wise to consult a knowledgeable executive at a renowned IT service agency. The Rundown By now, you know how imperative it is to make your product scalable as time goes by. With every passing day, the needs, preferences, and expectations of customers keep changing. If you want to provide them with the best possible product or service, you also need to ensure that they can access it anytime and from anywhere they like. To guarantee that, you will have to build completely scalable digital products that can cater to hundreds of thousands of consumers at the same time. For that, it is advisable to collaborate with a well-established IT solution provider that specializes in scalable product construction.
Comparing the backend development landscape of today with that of the late '90s reveals a significant shift. Although the barriers to entry for a career in software development have become lower, the role is now more complex, with developers facing a broader range of challenges and expectations. Engineers today grapple with building larger and more intricate systems and an overwhelming amount of choice across all aspects of software development: which language, tool, platform, or framework to use, and which solution, architectural style, or design pattern to implement. The demand for designing robust, scalable, and secure distributed systems capable of supporting thousands of concurrent users, often with near-perfect availability and compliance with stringent data-handling and security regulations, adds to the complexity. This article delves into the ways backend development has evolved over the past 20 years, shedding light on the aspects that contribute to its perceived increase in difficulty. Higher User Expectations Today's computers boast exponentially greater memory and processing power, along with other previously unimaginable capabilities. These technological leaps enable the development of far more complex and powerful software. As software capabilities have increased, so too have user expectations. Modern users demand software that is not only globally accessible but also offers a seamless cross-platform experience, responsive design, and real-time updates and collaborative features. They expect exceptional performance, high availability, and continual updates to meet their evolving needs with new features and enhancements. This shift challenges developers to leverage an array of technologies to meet these expectations, making backend development even more challenging. Increased Scale and System Complexity The complexity of the software problems we tackle today far surpasses that of 20 years ago.
We are now orchestrating networks of computers, processing thousands of transactions per second, and scaling systems to accommodate millions of users. Developers now need to know how to handle massive, polyglot codebases, implement distributed systems, and navigate the complexities of multithreading and multiprocessing. Additionally, the necessity for effective abstraction and dependency management further complicates the development process. With complex distributed systems, abstractions are essential: they allow developers to reduce complexity, hide unnecessary details, and focus on higher-level functionality. The downside of the widespread use of abstractions is that debugging becomes much more difficult and a comprehensive understanding of a system much harder to attain, especially given the limitations of traditional system visualization tools. Furthermore, the proliferation of APIs necessitates meticulous dependency management to prevent the creation of systems that are convoluted, fragile, or opaque, making them challenging to understand, maintain, or expand. Although many developers still resort to whiteboards or non-interactive diagramming tools to map their systems, more dynamic and automated tools have recently emerged, offering real-time insights into system architecture. These changes, along with many others (e.g., heightened security requirements, the introduction of caching, increased expectations for test coverage, exception handling, compiler optimization, etc.), underscore the increased complexity of modern backend development. The era when a single programmer could oversee an entire system is long gone, replaced by the need for large, distributed teams and extensive collaboration, documentation, and organizational skills. Overwhelming Choice With the rapid pace at which technology is evolving, developers now have to navigate a vast and ever-growing ecosystem of programming languages, frameworks, libraries, tools, and platforms.
This can lead to decision paralysis, exemplifying the paradox of choice: it is a mistake to assume that if we give developers more choice, they will be happier and more productive. Unlimited choice is more attractive in theory than in practice. The plethora of choices in the tech landscape is documented in the latest CNCF report, which shows hundreds of options. While a degree of autonomy in choosing the best technology or tool for a solution is important, too much choice can overwhelm people and cause procrastination or inaction. The solution is to strike a balance between giving developers the freedom to make meaningful choices and curating the options to prevent choice overload. By offering well-vetted, purpose-driven recommendations and fostering a culture of knowledge-sharing and best practices, we empower developers to navigate the expansive tech landscape with confidence and efficiency. Different Set of Skills The advent of cloud computing has introduced additional complexities for backend developers, requiring them to be proficient in deploying and managing applications in cloud environments, understanding containerization, and selecting appropriate orchestration tools. Beyond technical knowledge, the skills particularly valued in modern backend developers are: Managing legacy software and reducing architectural technical debt. The majority of projects developers work on these days are “brownfield.” Knowing how to adapt and evolve architectures to accommodate unforeseen use cases, all the while managing — and possibly reducing — architectural technical debt is a prized skill. Assembling software by choosing the right technologies.
With the explosion of software-as-a-service (SaaS) and open-source software, software development has shifted to an assembly-like approach, where backend engineers must meticulously select and combine components, libraries, and frameworks to create a complete system in which each piece fits seamlessly. Designing a scalable, performant, and secure system architecture. Backend software engineers are designers too, and they must possess a deep understanding of software design principles to create scalable and maintainable applications. Cross-team communication. Distributed systems are built by large teams that comprise many different stakeholders. A sign of a great engineer is the ability to communicate effectively, fostering a shared understanding and efficient decision-making across all stakeholders. Conclusion In reflecting on the evolution of backend development over the past two decades, it becomes evident that the role has transformed from the relatively straightforward task of server-side programming into a multifaceted discipline requiring a broad spectrum of skills. The challenges of meeting higher user expectations, managing the scale and complexity of systems, navigating an overwhelming array of choices, and acquiring a diverse set of skills highlight the complexity of modern backend development. While it has never been easier to enter the field of software development, excelling as a backend developer today requires navigating a more complex and rapidly evolving technological environment. Expertise in system architecture, cloud services, containerization, and orchestration tools, alongside the soft skills necessary for effective cross-team communication, will remain pivotal for success in this dynamic domain.
Fix: Developer Chasm To Engage More Devs With My Open Source Project Wish I could push a git commit to move beyond initial developer engagement. A developer chasm means getting stuck in open-source community growth after initial engagement. In this article, I will share the insights that helped me successfully move open-source projects from the initial developer engagement stage to the category-leader stage with community-led growth. These insights come from my developer community-building work and my work as a developer relations consultant for open-source projects. A quick note if you’re hearing the term “Developer Relations” for the first time. Developer Relations, or DevRel, is a work function that covers the strategies and tactics for building and nurturing a developer community. What Is the Source of My Learning? The source of my learning on this topic is my experience as a Developer Relations specialist for open-source projects, building Invide (a remote developers community), organizing Git Commit Show (a global developer conference), and, of course, being a developer myself. You’ll find me quoting examples from these experiences. What Will We Cover in This Blog? What’s not a problem for open-source projects today What is the challenge open-source projects face — the developer chasm Case studies of solving the developer chasm Five insights to fix the developer chasm Disclaimer: The data mentioned in the post is from Nov 8, 2022. I didn’t update it, as the conclusions are still the same. What’s Not a Problem for Open Source Software Today Open Source Has Already Won Over Proprietary “70–90% of a modern codebase is open-source code” — Jim Zemlin, Linux Foundation One estimate comes from the Linux Foundation, which found that in 2020, open-source software accounted for 70–90% of the code in any given piece of modern software. This means that the vast majority of software we use today, from web browsers to operating systems to mobile apps, is built on open-source code.
Open Source Is Raining Public repositories on GitHub [Data Source: GitHub, Nov 8, 2022]: Total public repos — 43M Created in 2022 — 12M (27% of total) As of today (Nov 8, 2022), there are more than 43 million public repositories on GitHub. A significant number of these public repositories can be counted as open-source software (we will come back to those numbers later in this post). The key data point to notice is that 12M new public repositories were created this year alone. That’s a huge 27% of the total public repositories. A big number. And we love that, don’t we? Starting an Open Source Project Is Easy 1. Build a Useful Software: Easy If you compare the effort it takes to build a web app or an automation system in 2023 vs. 2000, it seems pretty easy to build software these days. There’s already a huge ecosystem of useful open-source software for the majority of common developer needs. There are 43M public repositories on GitHub as of now, out of which a huge 6.1M are MIT licensed and 2.2M are Apache 2.0 licensed. Tech education is available in abundance via YouTube and blog content. Support is easily available on GitHub, Stack Overflow, Reddit, Discord, etc. DevOps tools and cloud services further make it easier to test and iterate faster. So making software that takes the existing state of the art one step further is not hard. If you have an idea and average programming skills, you just need to get started and code it. 2. Make It Open Source: Easy Even before you take the first step to build something, you can publish the first commit easily on GitHub or any other source code hosting platform. Git has matured so much that you don’t need to think about how you will deal with source version control. Many licenses have been standardized to cover various cases of ownership and distribution rights for your code. So it is only a matter of making the decision to open-source your code and a couple of minutes to actually do it. 3.
Engage Some Early Users: Easy It is not rare for projects to get decent early engagement as long as there’s a need. This year alone, 7,000 new open-source GitHub repositories have received more than 100 stars. That’s a decent amount of engagement for a new project, where some early adopters are considering the product, exploring further, asking questions, reporting issues, etc. 4. Engage More Developers: Hard Falling on your face after a decent achievement. When you go beyond that first release and the first 100 GitHub stars, the challenges of making it a bigger project start appearing. Starting is easy; making it meaningfully big is hard. Is it true? Let’s discuss that with some data. The Problem Open-Source Projects Face Today: The Developer Chasm Only 1 in 128 GitHub repos with 100+ stars has 5k+ stars To understand that, let’s first understand how many devs engage with an open-source project. We will take the count of GitHub stars as a proxy for the number of developers considering exploring the project. Out of 12M repositories created this year, 7,000 projects have 100+ stars, and only 55 projects have 5,000+ stars. It means that for every 128 projects that have 100+ stars, there’s only one that has 5,000+ stars. In other words... For every 128 open source projects that were able to engage 100 developers, there’s only one that was able to engage 5,000+ developers. Some more stats that showcase the exponential difficulty in getting GitHub stars (a proxy for developer engagement) for public repositories: This year, by now (Nov 8, 2022), 12M new GitHub repositories have been created Out of those, 70k had 10+ stars 1/10th of those 70k, i.e., 7k, had 100+ stars Less than 1/14th of those 7k, i.e., 500, had 1k+ stars 55 received 5k+ stars And only 15 new GitHub repositories received 10k+ stars this year No. of public GitHub repositories vs. the no. of GitHub stars.
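The "1 in 128" figure follows directly from the counts quoted above; a quick arithmetic check makes the funnel concrete:

```python
# Counts quoted in the text (GitHub repos created in 2022, as of Nov 8, 2022).
created = 12_000_000
stars_100 = 7_000
stars_5000 = 55

# 7,000 / 55 ≈ 127, which the text rounds to "1 in 128":
# for roughly every 128 repos with 100+ stars, one reaches 5,000+ stars.
ratio = stars_100 / stars_5000
print(round(ratio))  # 127

# And only ~0.06% of all new repos that year reached 100+ stars at all.
share = stars_100 / created * 100
print(f"{share:.2f}%")  # 0.06%
```

Each step of the funnel (10+, 100+, 1k+, 5k+, 10k+ stars) thins the population by roughly an order of magnitude, which is what "exponential difficulty" means here.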
[Data Source: GitHub, Nov 8, 2022] The Problem: The “Developer Chasm,” or Getting Stuck After Initial Engagement From the stats presented in the previous section, we can conclude that it is exponentially harder to engage more developers with your open-source project beyond the initial engagement. From my experience as a developer, startup founder, and DevRel consultant for open-source projects, I have seen this pattern everywhere. You create initial versions of your open-source project, share it with some friends and communities, get some decent engagement and feedback, and probably do some more iterations of the product and developer engagement strategies. And then you’re stuck; it feels like you’ve been slowed down by an uphill path right after a smooth ride on a flat highway. You wonder how to engage more developers with your project in order to make it a category leader. That is the problem this post aims to solve, from a DevRel perspective. I say “from a DevRel perspective” because this problem has been well documented from other angles, but not from the angle of DevRel. For example, the book “Crossing the Chasm” by Geoffrey A. Moore describes the same challenge as “The Chasm” using this popular graph. The key concept in the book: the chasm is the gap between the Early Adopters and the Early Majority. This is the point in the technology adoption lifecycle where a new technology must prove its value to a larger audience in order to achieve mainstream success. Companies that are unable to cross the chasm often fail to achieve their business goals. Since in our case we’re focused on open-source product adoption by developers, let’s refer to this challenge of moving from the early adopter to the early majority phase as the “developer chasm.” You see that innovators section in the graph? That’s the audience who engaged with you when you first shared your project publicly.
And then you attempted to attract early adopters and either failed to do so OR found it hard to move to the next phase, the magical phase of the “early majority.” The book spills its wisdom on how to solve this from a broad strategic perspective, with key ideas such as: Repositioning the product Finding the winning niche category Building relationships A must-read book. Great advice for any tech product. This post, on the other hand, will provide a more detailed view of how to solve this challenge with DevRel strategies and tactics, independent of product decisions. What can your Developer Relations team do to leap over this developer chasm? Is It Even Possible for a DevRel Person To Solve the Developer Chasm? More often than not, you’ll find people labeling success as luck, as being in the right place at the right time. Some people will say, “Build a great product, and they will come.” Do products that have been improving at great speed also struggle with the same challenge of getting stuck after engaging those initial users? In my experience, yes. Even projects that are improving quickly face the same challenge. And this turns into a loop, where it becomes even more difficult for products to improve without continuous engagement and feedback from users. So engaging more developers beyond that initial engagement makes or breaks the project. What can open-source maintainers do about it? Case Study #1: Open-Source Project That Rose Like a Phoenix I was lucky enough to work on an open-source project that had become stagnant after its initial growth. The competitor (in blue) had a similar story — after the first burst of growth, it was not growing at a rate that would excite the team. But within two quarters, a series of decisions and activities led our project to experience hockey-stick growth, beating the competitor that had been around for some time. It quickly reached the 10k-stars mark.
The learning: it is possible to grow beyond that first wave of adoption. Case Study #2: A Community That Engaged 15K Experienced Developers In 2016, I started building an invite-only community of remote developers. As with other projects, initial adoption was great. Hundreds of experienced developers joined our chat channel, but then growth stalled beyond that initial spike. Fast forward 2.5 years: we had engaged more than 15k experienced developers. What made all this possible? When I think about it, these are the points I come up with: 1. Start With the Decision to Focus on One Metric Solve one problem at a time. If you have focus, half the battle is already won. More often than not, I find open-source project authors and teams, especially Developer Relations teams, making this mistake: they want to do everything at once and often work on (or refuse to let go of) projects purely out of FOMO (fear of missing out). Remember that you can’t improve everything at once. Start by focusing on one problem; once you have made some progress on it, move on to the next. I have seen the benefits of focusing on one metric and the downsides of juggling several. Before we dive into how to choose the right metric, let’s look at some examples of metrics I chose in my Developer Relations projects. Example 1: # of GitHub Stars Get to the 5k GitHub stars mark within two quarters. We chose this metric to serve the key business goals: Break out of a stagnant growth period Build better public/investor perception of the project’s growth Example 2: # of Support Messages Increase the number of new support messages by 20% within a quarter.
We chose this metric to serve the key business goals: Increase community engagement Understand the community’s use cases, needs, and challenges Example 3: # of Interviews Conduct 250 1:1 interviews within two quarters. We chose this metric to serve the key business goals: Get verified, talented developers into the community, controlling its diversity and the desired characteristics of members Understand their needs and aspirations How To Choose the Right Metrics Start by asking: What matters most right now? What will matter in the coming 6 months or a year? Then come up with a list of metrics that can reflect progress toward the goal, and check which ones are: Aligned with the business goal Simple to understand Easy to track Remember, choosing metrics is an art, and experience helps. Some example metrics to choose from: GitHub stars GitHub repo traffic # of new Slack/Discord members Followers on social channels Comments on social channels GitHub forks Beware of metrics such as: Docker pulls: inaccurate data, especially due to CI/CD and automation pulls, which are not small in number Telemetry-sourced metrics: privacy concerns are not easy to tackle, which leads to incomplete data. And no, don’t think about making telemetry opt-out by default; you do not want to piss off your community 2. Simplify the Communication I was working with an open-source project’s team that had tried to engage developers on Reddit in the past, but it had not worked well for them. When I proposed doing it again, the team was reluctant because of those past failures. I started anyway, posting and analyzing one post at a time, and slowly the engagement on my Reddit posts improved. Eventually, this led to multiple viral hits. What change started producing better results on the same channel? The communication. Let’s look at the communication before and after. Before Simplifying One of the early posts. No engagement at all.
Before simplifying communication After Simplifying One of the later posts, with decent engagement. Look at the number of upvotes, and, even more valuable, the percentage of viewers who upvoted: 96% (compared to the previous post, which had only a 57% upvote rate). After simplifying the communication and tailoring it to the audience's needs and style Why did it work so much better than the earlier communication? Looking at the change, you might be tempted to conclude: it is a simpler message, it is easier for a broader audience to understand, it showcases the value upfront, and so on. While all of those points are true, I still believe there is no magic formula for communication that guarantees better outcomes just by following best practices. What has always produced a better outcome is the process of refining the communication: starting from wherever you are and taking the next step to make it simpler and more interesting. How To Make Your Communication Simpler and More Interesting for Developers Before you start, understand one thing: the business pitch and the developer pitch are different. The way you communicate with each is not just slightly different; it is a totally different way of looking at the communication. The people taking care of the business (e.g., CXOs and execs) care about different kinds of problems than developers do, and not just technical problems but problems in life in general. They may even be the same person wearing both hats, but you should still treat them as two different personas to serve. This is why keeping the DevRel function separate from Sales is key to nailing both types of communication. Think about this developer audience from the ground up, starting with the problems they face in their lives.
Your content will usually solve those developer problems, or at least show that you care about them. But how do you figure out those problems, and what does the process of refining this communication look like? I like to call the process of refining communication the funky QUAKE: Question, Understand, Answer, Keyword, Experiment. Sometimes I call it DevQuake, as in an earthquake. I have never been sharp at naming things, whether it’s a variable in a program or an abbreviation for remembering a process, so I’m open to ideas :) Question: Start by looking at the questions people are already asking on StackOverflow, Reddit, developer communities, events, and anywhere else you believe they ask questions. Usually, these questions take the form of “how to do/fix/create X,” and you’ll discover many more types along the way. Soak that information in. Understand: From these questions, understand the different problems or points of confusion they have. Pay attention not only to the problem your product solves but also to all the adjacent problems indirectly related to it and to your category. Read between the lines. Note it down; this research will come in handy later. Answer: Answer those questions on internet forums. Answering them will bring you one step closer to how your audience thinks and why they think that way, and it will help you think more deeply about your developer audience’s needs and behavior. Keyword: Understand the keywords and style your early adopters use: how they describe their problems, what keywords they use to explain their problem or the expected solution, which keywords they already understand, and what knowledge they already have about your category. Mainly, though, it is about the keywords. Experiment: Experiment with new communication angles, and then listen.
It could take the form of an educational post; a simple question you have; an ask for help or feedback; a celebration of a problem or solution; something topical; something inspirational; a long rant; any format (text, image, video, etc.); or the same communication aimed at a different kind of audience. The bottom line is to be bold in experimenting with new angles. If you keep this up for some time, you’ll see your communication with your target developer audience getting better, and you’ll soon see the impact of these improvements on your growth and engagement metrics. 3. Engage With Developers Wherever They Are I find many DevRel folks waiting for targeted developers to join their product community, and only then helping them out. This is not a good strategy, in my opinion; it leaves room for competitors to engage the people who either don’t know about your community or don’t feel the need to join it. Instead, go out and engage with developers on external communities and forums where they are asking questions related to the problems your product solves. This will build your reputation as an expert and bring those people into your community. Learning: “Don’t wait for them to come to your community; have conversations with devs outside your community as well.” Another limiting belief concerns how you choose the external communities and forums where you engage. Most of the time, open-source authors or their DevRel teams focus too narrowly: if they have a search product, they focus only on search-related communities; if their target audience is backend engineers, they focus only on backend-engineering communities. This leaves a lot of missed opportunities to engage developers.
Instead, think about the holistic picture of all the different things your target developer audience might be interested in. Aren’t there backend developers who love chess or Rick and Morty? Some developers don’t even take part in a backend community, but they will participate in a chess community. Learning: “A developer is more than just a developer; they are a human being with varied interests. Engage with them wherever they are.” Example: Think Outside the Box I engaged developers for a search-related product in a data visualization community. The two things do not seem connected, but I asked a simple question: are there developers who love data visualization? The answer is yes. Of course, not all data visualization lovers are developers, but if my post goes viral, will it engage some of the developers who are part of this community? The answer is yes. And that’s what happened: the post went viral, and we got a huge number of GitHub stars from that activity. I had to think outside the box to come up with communication for that community that would be relevant to everyone in it while also attracting our target developer audience. A post on a seemingly unrelated community When engaging with external communities (e.g., HN, subreddits, meetups), keep some things in mind so you do not end up pissing your target developers off. Be someone who adds value, not someone who spams everywhere. Here are some principles I follow to make sure I am the former: Understand first, then try to be understood. First understand that community: why people come there, what they like, what kind of content they engage with most, and so on. There are some hacks, but it still takes time; you have to do it, and there is no alternative. No more than one team member in a community.
When you have more than one team member in an external community, you will end up coordinating to make your posts look good, and you will end up spamming even when you are both in sync about what the other person is doing. None of this is good for your project; you’ll piss some people off and do more harm than good. The solution is simple: one community, one person. Be the first one to start the conversation. Ego will kill your project. I have seen many DevRel folks try to play hardball to shape the perception of their project or team. It is useless and does the opposite. Have the humility to be the first to start conversations. As simple as that. Notice what they say and what keywords they use. We have discussed this before; keep your eyes and ears open. Individuals over teams. Developers in communities hate people who say, “We did this,” “We are so cool,” and so on. Instead, think from an individual perspective and communicate your authentic thoughts from that angle; people will be able to relate. For example, I never say, “We released a new version of our project; it has these cool x, y, z features.” I would rather say, “I have been working on this project for x months, and I’m excited that the new version goes out today with features I contributed, such as y and z.” I bet more people will engage with the second version. Being authentic never goes out of date, and it is key to communication that makes an impact. 4. The Mindset: Obsess With Transparency and Ask For Help “It is a common false belief that in order to build your authority, you should not expose your vulnerability or ask for help.” – Pradeep Sharma Case Study: Turning a PR Disaster Into an Opportunity Imagine you’re launching your open-source project’s key release after months of hard work. You’re showcasing it in front of an audience of 200+ people at a professional live event.
All of a sudden, someone hijacks the event, starts playing racist music and drawing vulgar sketches, and you have to end the event right there. It is a major PR disaster and a disappointment for the team. It pisses off your team, the contributors, the investors, and everyone else attending the event. But later, when you approach it with humility, authenticity, and transparency, not hiding your vulnerable side, accepting your failures, and asking for help, it becomes one of the important reasons behind your exponential community growth. Here’s the link to the full story. After the dust settled Authenticity pays off So what is the learning here? The mindset. The mindset of an open-source project’s owners can be a slow poison that kills the project silently, or it can be a weapon that drives growth. One mindset is to showcase only the things that make you look good; the other is to showcase everything, irrespective of how it makes you look, for the sake of transparency. If I had tried to hide our PR failure, I would not have built trust with the community. Why Obsess With Transparency? If you run an open-source project and don’t obsess with transparency, you’ll never build the trust in the community (whether internal or external) needed to drive your project beyond that first level of engagement. Transparency is a key element of any open-source project’s DNA, and it is what makes open source competitive with proprietary alternatives. This openness encourages a sense of community and shared ownership. There is a reason developers trust open source more than proprietary software: transparency. Transparency builds trust. It starts with putting your code in public; good or bad, you put it out there for everyone to see. But transparency should not end there; it needs to be reflected in every aspect of running an open-source project, including the DevRel function.
One key reason is that not everyone has the time to go through the complete code (and its later revisions) to make an objective judgment about how much they can trust your project. They will use the transparency you show in your communication to gauge how much they can trust you. I am talking about genuine transparency, not the kind where you share all the good things and go silent when something bad happens. You are not doing anything unfair by going silent in that moment, but you’ll never earn the community’s trust that way. The way I measure transparency: how many times a year do you share news that you could have kept quiet in order to look good? Why Ask For Help? There’s a surefire way to get no contributors for your open-source project: do not ask for help. If you don’t ask, they won’t come. The same feelings that stop you from being more transparent also stop you from asking for help when you should: the fear of looking bad. Asking for help is not a sign of weakness but of strength. No one person has all the answers, and the beauty of open source lies in its collective intelligence. By asking for help boldly, you can tap into the knowledge and wisdom of the community. Asking for help also gives people an opportunity to step up and take responsibility, fostering a sense of ownership and engagement. Transparency and the willingness to ask for help are not just nice-to-have qualities in open-source projects; they are essential. Key points to remember: Transparency is not about “sharing all the good news” Vulnerability builds emotional connection No one will help if you never show that you need help 5. Scale Developer Education With Content Case Study: Making the Project Synonymous With Its Category I worked on a project that wanted to be the leader in the “Neural Search” product category, and I turned that open-source project into a synonym for Neural Search.
What I did was this: I wrote one blog post and one video educating developers about the product category, “Neural Search,” and distributed that content well enough that 75% of the first-page search engine results were my content. If you, as a developer, were to research what “Neural Search” is and how to implement it, you would most likely be reading or watching my content. That content is still engaging developers with the open-source project two years later. Dominating search suggestions Dominating Google’s “People also ask” rich cards Dominating the search results for the main product-category keyword There were some more decisions involved in this achievement; let’s walk through them. Developers don’t like to be sold to. They are among the best on the internet at detecting BS. You telling them about your product is a form of BS to them; it has little impact on their decision making. If you’re banking on telling developers about your product, you’ll struggle to get any meaningful adoption or to scale up your developer engagement. Developers will always do their own research and make a judgment from there. But once their research concludes that your product is actually a good choice for their use case, it will bring more users, because they will share their genuine findings with their colleagues, their future employers, users on internet forums, and so on. That is what other developers will see as non-BS content. It is a slow process, and you can speed it up a bit by creating non-BS content yourself. But what is non-BS content? Content that educates your developer audience and helps them with their research. And what should you educate your developer audience about? The common answer I hear is “about my product.” That is the wrong answer. By teaching developers about your product, you may get some users if you’re lucky, but you’ll never get meaningful adoption and will never be the category leader.
Instead, do this: Pick the category that you want to lead Educate your targeted dev audience about this category Retarget that developer audience (whom you taught about your category) and now educate them about your open-source project Conclusion We discussed the good and the worrisome aspects of developer engagement for open-source projects, and an approach to overcoming the challenge of getting stuck after initial engagement. These insights may help you grow your open-source project, become a better DevRel engineer, or get one step closer to becoming a top Developer Advocate (a DevRel expert). In summary, how to engage more developers with your OSS: Focus on one metric Simplify the communication Engage wherever they are Obsess with transparency, ask for help Scale with developer education content
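If GitHub stars is the one metric you pick, tracking it is easily scripted. Below is a hedged sketch that reads the `stargazers_count` field from GitHub's public `GET /repos/{owner}/{repo}` endpoint; the repository name is a placeholder, and the network call is shown but not required to use the parsing helper:

```python
# Hedged sketch: track one metric (GitHub stars) via GitHub's public
# REST API. The repository used below is a placeholder.
import json
from urllib.request import urlopen

def star_count_from(repo_json: dict) -> int:
    """Extract the star count from a GET /repos/{owner}/{repo} response."""
    return int(repo_json["stargazers_count"])

def fetch_star_count(owner: str, repo: str) -> int:
    """Fetch the current star count (network call; poll sparingly)."""
    with urlopen(f"https://api.github.com/repos/{owner}/{repo}") as resp:
        return star_count_from(json.load(resp))

# The pure part, demonstrated on a canned API response:
sample = {"stargazers_count": 5000}
# star_count_from(sample) -> 5000
```

Logging this number daily into a spreadsheet or time-series store is enough to see whether a campaign actually moved the needle.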
Numerous developers embark on their tech journey only to find themselves disoriented, intimidated by coding sessions, and wrestling with the notion that they might not possess the quintessential programmer's mindset. The path they tread is fraught with challenges, stemming not only from a lack of experience but also from the absence of essential tools. Crafting exceptional software is no small feat. It demands an extensive repertoire of knowledge, an eye for detail, astute logical reasoning, relentless research, and, most crucially, time. Developers are perpetually swamped, striving to maintain a sharp focus to avert errors amidst their bustling schedules. Indeed, the role is both demanding and laden with responsibility, making dips in productivity almost inevitable amidst the myriad tasks, vast data, and looming deadlines they juggle. In software development, gauging productivity can seem like an elusive task. Ever find yourself wondering where the hours have flown? Or feel daunted by the high expectations set for your projects? Fortunately, there's light at the end of the tunnel. Overcoming these obstacles is feasible, especially with the aid of productivity tools designed for developers. In the sections that follow, I'll introduce a curated selection of tools aimed at streamlining workflows and enhancing efficiency for developers. In my application teams' experience, these tools have been phenomenal at boosting productivity and efficiency. 1. Agile Project Management: Jira Agile methodologies have revolutionized the way we approach software development, emphasizing flexibility, continuous delivery, and customer satisfaction. Jira stands out as a robust tool for agile project management, offering features like sprint planning, issue tracking, and Scrum boards. Why Jira? Jira, developed by Atlassian, has become synonymous with Agile project management for several compelling reasons.
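One of those reasons is that Jira is scriptable as well as clickable: its REST API lets teams automate issue creation instead of filling in forms. As a hedged sketch, here is the JSON payload shape Jira's v2 REST API expects for `POST /rest/api/2/issue`; the project key, summary, and issue type below are placeholders, and actually sending the request is left to whichever HTTP client you prefer:

```python
# Hedged sketch: build the JSON payload Jira's REST API expects when
# creating an issue (POST /rest/api/2/issue). Field names follow the
# documented v2 API; the concrete values here are placeholders.

def build_issue_payload(project_key: str, summary: str,
                        issue_type: str = "Task",
                        description: str = "") -> dict:
    """Assemble the 'fields' structure for a new Jira issue."""
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "issuetype": {"name": issue_type},
            "description": description,
        }
    }

# Example: a payload for a hypothetical sprint task.
payload = build_issue_payload("PROJ", "Fix login timeout", "Bug")
```

POSTing this payload with an API token turns repetitive triage (e.g., opening a bug for every failed nightly build) into a one-line script, which is where much of the productivity gain comes from.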
Its intuitive interface, coupled with powerful features, makes it an indispensable tool for managing complex software projects. Here’s how Jira has enhanced our productivity: Sprint planning: Jira’s sprint planning tools allow teams to break down projects into manageable tasks, grouped into sprints. This feature was transformative, enabling us to prioritize work, estimate efforts more accurately, and adapt plans swiftly based on changing requirements. Issue and bug tracking: One of Jira’s strengths lies in its robust issue and bug-tracking system. By centralizing bug reports and feature requests, Jira facilitates a more systematic approach to addressing issues, ensuring nothing falls through the cracks. This centralization has significantly reduced our downtime and improved the quality of our final products. Customizable Scrum and Kanban boards: Jira’s flexibility in allowing teams to customize their Scrum or Kanban boards was a game-changer. This customization meant that we could tailor our project management approach to fit the unique workflow of each team, increasing efficiency and visibility across projects. Integration with development tools: Jira’s ability to integrate seamlessly with a wide array of development tools, including code repositories, CI/CD pipelines, and testing tools, streamlined our development process. These integrations allowed for automatic updates and notifications within Jira, bridging the gap between project management and actual development work. 2. Code Collaboration: GitHub The essence of modern development lies in collaboration. GitHub has been pivotal in fostering a culture of collaboration, providing version control, pull requests, and code review functionalities that streamline team-based development efforts. Why GitHub? GitHub goes beyond being a mere repository hosting service; it’s a powerful tool for software teams aiming to collaborate more effectively. 
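Part of that power is automation via GitHub Actions. As a hedged illustration (the workflow name, branch, toolchain, and commands are assumptions for a hypothetical Node.js project, not a prescription), a minimal CI workflow checked in at `.github/workflows/ci.yml` might look like:

```yaml
# Minimal, hypothetical GitHub Actions CI workflow.
# Branch name, toolchain, and commands are placeholders.
name: CI
on:
  push:
    branches: [main]
  pull_request:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4       # fetch the repository
      - uses: actions/setup-node@v4     # example toolchain setup
        with:
          node-version: 20
      - run: npm ci                     # install dependencies
      - run: npm test                   # run the test suite
```

With this in place, every push and pull request gets an automatic build-and-test pass before review, which is the "automation saves countless hours" point in practice.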
Here’s why it became an indispensable part of our workflow: Centralized version control: GitHub provides a centralized platform for our code, offering robust version control capabilities powered by Git. This feature allowed our team members to work on different features simultaneously without fear of conflicts, significantly speeding up the development process. Pull requests and code review: One of GitHub’s most valuable features is its pull request system, facilitating code reviews and discussions right alongside the code itself. This process has not only improved the quality of our code but also provided a learning opportunity for the team, as feedback is shared openly and constructively. Integrations and automation: GitHub Actions and its marketplace of integrations have automated many aspects of our development workflow, from continuous integration/continuous deployment (CI/CD) pipelines to automated testing. This automation saves countless hours of manual work, allowing developers to focus on coding rather than administrative tasks. Open source community: GitHub houses the world’s largest community of developers and open-source projects. This vast network has enabled us to contribute to open-source projects and utilize community-driven projects, significantly reducing the need to "reinvent the wheel" for common functionalities. 3. Continuous Integration/Continuous Deployment (CI/CD): Jenkins In the quest for efficiency, the CI/CD pipeline automates the building, testing, and deployment of applications. Jenkins, with its extensive plugin ecosystem, automates these processes, significantly reducing manual effort and increasing deployment frequency. Why Jenkins? Jenkins is more than just a tool; it's a catalyst for DevOps practices, offering unparalleled flexibility and an extensive ecosystem of plugins. 
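Jenkins's "pipeline as code" model is easiest to see in a Jenkinsfile. The following is a hedged sketch of a minimal declarative pipeline; the stage names and shell commands are placeholders for your own build steps, and the report-publishing step assumes the standard JUnit plugin is installed:

```groovy
// Minimal, hypothetical declarative Jenkinsfile.
// Stage names and commands are placeholders for your own build steps.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'   // compile the project
            }
        }
        stage('Test') {
            steps {
                sh 'make test'    // run the test suite
            }
        }
    }
    post {
        always {
            junit 'reports/**/*.xml'  // publish JUnit results (JUnit plugin)
        }
    }
}
```

Because this file lives in the repository, the pipeline itself is version-controlled and reviewable, exactly like application code.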
Here’s how Jenkins became a cornerstone of our development workflow: Automated builds and testing: Jenkins automates the process of code compilation and testing, ensuring that every code commit is automatically built and tested. This immediate feedback mechanism allows developers to identify and rectify issues early, significantly reducing bugs in production. Scalable pipeline as code: Jenkins Pipelines allow us to define our CI/CD process as code. This approach not only makes our pipelines more reproducible and version-controlled but also enables us to scale our CI/CD processes as our projects grow. Extensive plugin ecosystem: One of Jenkins' greatest strengths is its vast array of plugins, supporting integration with virtually every development, testing, and deployment tool out there. This flexibility has allowed us to tailor Jenkins to our specific needs, integrating seamlessly with our toolchain. Support for distributed builds: Jenkins supports distributed builds out of the box, allowing us to run builds on different machines, which speeds up the build process and supports parallel execution of tasks. This feature was particularly beneficial for our team, as it allowed us to maximize our hardware resources and reduce build times significantly. 4. Code Quality Assurance: SonarQube Ensuring code quality in a fast-paced development environment is a challenge that every software team faces. High-quality code is not just about reducing bugs—it's about maintainability, scalability, and efficiency. Maintaining high code quality is non-negotiable for productivity. SonarQube offers comprehensive code analysis to detect bugs, vulnerabilities, and code smells, ensuring that quality is baked into the development process. Why SonarQube? SonarQube stands out for its depth of analysis, covering not just bugs and errors but also code smells, security vulnerabilities, and duplications. 
Here's how SonarQube has been pivotal in enhancing our code quality: Automated code reviews: SonarQube provides automated code reviews, analyzing pull requests for bugs, vulnerabilities, and code smells before they are merged. This preemptive feedback loop has drastically reduced our time spent on manual code reviews, allowing teams to focus on feature development and innovation. Customizable rules and quality gates: SonarQube allows us to define custom rules and set up quality gates based on our specific requirements and standards. This customization ensures that all code meets our defined quality criteria before it's considered ready for production, fostering a culture of excellence. Detailed dashboards and reports: The platform offers intuitive dashboards and detailed reports that provide visibility into the health of our codebase. These insights enable us to identify areas for improvement, track progress over time, and make informed decisions about where to allocate resources for maximum impact. Integration with CI/CD pipelines: Integrating SonarQube with our Jenkins CI/CD pipelines has automated the process of code analysis, ensuring that every build is automatically scanned. This integration has been crucial in embedding code quality checks into our development lifecycle, making quality assurance an ongoing, integral process. 5. Automated Testing: Selenium Automated testing represents a cornerstone in modern software development, particularly as teams adopt faster, more agile methodologies. The shift from manual testing to automated frameworks significantly impacts a team's efficiency and the overall quality of the projects. Automated testing tools like Selenium enable developers to write and execute test cases for web applications, ensuring functionality works as expected, thus reducing the time spent on manual testing. Why Selenium? Selenium’s appeal lies in its flexibility and the comprehensive coverage it offers for web application testing. 
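To make that concrete, here is a hedged Python sketch of a Selenium check. The URL and expected title are placeholders, and the imports are deferred inside the function so the sketch can be read without a browser or driver installed:

```python
# Hedged sketch: a minimal Selenium check that a page's title contains an
# expected string. The URL and expected text below are placeholders.

def title_contains(url: str, expected: str) -> bool:
    """Open `url` in headless Chrome and test whether the title matches."""
    # Imports are deferred so this sketch loads without the selenium
    # package or a browser being present.
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    opts = Options()
    opts.add_argument("--headless=new")  # no visible browser window
    driver = webdriver.Chrome(options=opts)
    try:
        driver.get(url)
        return expected.lower() in driver.title.lower()
    finally:
        driver.quit()

# Usage (requires Chrome and a matching chromedriver):
#   ok = title_contains("https://example.com", "Example Domain")
```

Swapping `webdriver.Chrome` for `webdriver.Firefox` is essentially the whole cost of cross-browser coverage, which is Selenium's main appeal.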
Below are key reasons Selenium became an integral part of our testing strategy: Cross-browser and cross-platform testing: Selenium supports testing across all major browsers and platforms, ensuring our web applications offer a consistent user experience, irrespective of the user’s choice of technology. This cross-compatibility is vital in today's fragmented digital landscape. Integration with test frameworks and CI/CD pipelines: Selenium integrates seamlessly with various test frameworks (such as JUnit and TestNG) and CI/CD tools (like Jenkins). This integration allows us to embed testing into the development pipeline, facilitating continuous testing and immediate feedback. Support for multiple programming languages: Unlike some testing frameworks that are language-specific, Selenium supports several programming languages, including Java, C#, Python, and Ruby. This versatility meant our team could write tests in the language they were most comfortable with, improving test development efficiency. Open-source community: Being open-source, Selenium has a vast and active community. The availability of extensive documentation, forums, and plugins has made troubleshooting and extending Selenium’s capabilities easier, enhancing our team's ability to implement complex test cases. 6. Cloud Services: AWS The cloud has become synonymous with modern software development. Among the myriad cloud service providers, Amazon Web Services (AWS) stands out for its comprehensive suite of services, reliability, and scalability, offering a vast array of services that empower developers to build, deploy, and scale applications with ease and flexibility. Embracing AWS in our projects not only facilitated more efficient development workflows but also unlocked capabilities that were previously out of reach due to hardware limitations or cost constraints. Why AWS?
AWS's dominance in the cloud computing sector is well-earned, offering an extensive array of services that cater to virtually every aspect of computing, from serverless architectures to machine learning. Here are the key factors that made AWS an essential part of our development toolkit: Extensive range of services: AWS provides a wide variety of services, including computing power (EC2), storage solutions (S3), database services (RDS and DynamoDB), and machine learning (SageMaker). This diversity allowed us to tailor solutions specifically to our project needs, often within a single ecosystem. Scalability and flexibility: One of AWS's most significant advantages is its scalability. Services like Auto Scaling and Elastic Load Balancing ensure that our applications can handle variable loads seamlessly, adjusting resources automatically to meet demand without manual intervention. Global infrastructure: AWS's global network of data centers ensures low latency and high redundancy for our applications. This global footprint was crucial for deploying applications that served a worldwide user base, ensuring optimal performance regardless of geographical location. Security and compliance: AWS commits heavily to security, offering tools and features that help us comply with various regulatory standards. The shared responsibility model and services like AWS Identity and Access Management (IAM) have been instrumental in securing our applications and data. 7. Containerization: Docker Docker has revolutionized the way applications are deployed, allowing developers to package applications into containers, ensuring consistency across environments, and simplifying deployment processes.
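The packaging Docker provides is easiest to see in a Dockerfile. As a hedged sketch for a hypothetical small Python web service (the base image, file names, port, and entry point are all placeholders):

```dockerfile
# Minimal, hypothetical Dockerfile for a small Python web service.
# Base image, file names, port, and entry point are placeholders.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code.
COPY . .

EXPOSE 8000
CMD ["python", "app.py"]
```

Built with `docker build -t myapp .` and run with `docker run -p 8000:8000 myapp`, the same image behaves identically on a laptop, a CI runner, and a production host, which is the consistency-across-environments benefit in a nutshell.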