How we came to zerocoding

I completed a short but very worthwhile intensive from ProductSense, the “Technical Skills Intensive for Managers”. The intensive was devoted to the nuances of classical development, but in the last block of the course Vadim Mikhalev, founder of Zerocoding University, spoke about zerocoding and ran several hands-on practices using specific tools as examples.

This topic was a revelation for me. I had heard the term no-code before, but it all flew past me and seemed like a separate, niche topic from the programming world. It was the two days of Vadim’s block of the intensive that showed this is not so: no-code means new opportunities, first of all for product managers.

I opened the seminar materials and decided to see what would come of it. Five days later we had the first draft of our MVP prototype. This became the basis of the decision that has defined our entire current period.

First draft of the prototype in the Adalo interface
Comparison of zerocoding tools
The first draft of the prototype was assembled in Adalo, but it became obvious that the tool’s own database capabilities were not enough for our tasks. A database in Adalo is a simple construction that lets you operate with all the basic types of information and build relational links between tables, which Adalo calls collections. But the possibilities for working with the data itself at the database level are minimal.

In our case, we had to:

Find a solution for an external database – we chose between Airtable and QuintaDB. Fortunately, I had already had a reason to deal with databases before, and I talk about it in the article.
Find a solution to synchronize data between the external DB and Adalo – we chose between Zapier, Integromat, and direct communication via API (the API is a separate topic, no less important in development than the database; I describe my experience studying this question elsewhere).
Find and study data-parsing tools, with subsequent cleaning of the data for transfer to the database and a minimum of manual labor. For parsing, I chose between dedicated parser programs and the Sherpa RPA software robot (for data cleaning: Excel, Google Sheets, Airtable).
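The synchronization step above boils down to one recurring operation: comparing the records in the external database with what the app builder currently holds, and deciding what to create, update, or delete. This is essentially what a Zapier or Integromat scenario (or a direct API integration) does behind the scenes. A minimal sketch of that diffing logic, with entirely hypothetical record shapes:

```python
# Sketch of a sync step between an external DB (e.g. Airtable) and an app
# builder's collection. Record IDs and fields are hypothetical.

def diff_records(external, app):
    """Both arguments: dict of record_id -> record fields."""
    to_create = {rid: rec for rid, rec in external.items() if rid not in app}
    to_update = {rid: rec for rid, rec in external.items()
                 if rid in app and app[rid] != rec}
    to_delete = [rid for rid in app if rid not in external]
    return to_create, to_update, to_delete

external_db = {"1": {"name": "Yoga class"}, "2": {"name": "Concert"}}
app_db = {"2": {"name": "Concert (old title)"}, "3": {"name": "Removed event"}}

create, update, delete = diff_records(external_db, app_db)
print(create)  # {'1': {'name': 'Yoga class'}}
print(update)  # {'2': {'name': 'Concert'}}
print(delete)  # ['3']
```

The actual pushing of these changes then goes through each tool's API; the diff itself is the part that minimizes manual labor.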

Decision stack chain options
Glide VS Adalo: App Builders
Today there are several dozen zero-coding constructors on the market that solve the problem of developing a mobile application without code. Glide and Adalo are definitely among the leaders: here I am happy to trust the expertise of Zerocoding University and its founder Vadim Mikhalev – it was these two solutions that he offered at the intensive for product managers. The active chats of the Russian-speaking Glide and Adalo communities confirm it.

Glide

prototype in Glide interface
In Glide, I assembled an almost final version of our project’s MVP, and were it not for the limitations of the constructor itself, this version could have been enough.

But Glide’s limitations are not so much a minus as a plus: they allow the constructor to stay firmly in its niche. Glide helps you quickly assemble an application for solving applied problems, although there are also quite successful Glide applications for the mass consumer.

What are the limitations of Glide:

There is a ready-made set of functions, interface designs, and user-screen types that cover a significant part of a mobile application’s needs: choose and use. It is like Lego – there are several types of blocks from which you can assemble both a car and a space rocket. The downside is that it is extremely difficult, perhaps impossible, to make something custom, either in capabilities or in design. The upside is the same thing: most of the logical structures have already been built, and there is a very good chance they will be enough for you, while the ready-made design, with minimal room to change anything, will not let you produce something obviously tasteless. There really are many modules and functions on offer, as well as ready-made templates on the basis of which you can get your own application, with your own data, in 15–30 minutes.

The database is, so far, only Google Sheets. The minus: Google Sheets is not a particularly suitable tool for maintaining a database. It is no coincidence that Glide itself extends the missing features of Google Sheets with a set of its own functions and field types. As a database, Google Sheets lacks several dozen column types, normal relational links between tables, and the power to process data; if a table has more than 1000 rows, performance drops. You also need a Google account to use Glide. But if you feel at least somewhat confident in Google Sheets, Excel, or Numbers, and you know how to enter information in a structured way, then you are already ready to work in Glide. You will not have to bother setting up data synchronization – take your Google spreadsheet, connect it, and that is it: the application is almost ready. Whether the data was entered through Google earlier or you are entering it now, everything is already in sync. UPD: I was corrected in the comments – Glide’s own database can work without a connection to Google Sheets. Yes, this is a correct clarification: the built-in database is quite functional, although it has a somewhat strange tool for linking data across tables.
There is no option to publish to the stores yet, only PWA. The disadvantages here are primarily marketing ones. Explaining to a user that placing a shortcut to a PWA application on their home screen effectively turns it into a native application is not an easy task. So unless your users are fans of your project or members of your team, the PWA format alone can cost you a significant part of your potential audience – which is a shame, because there is nothing wrong with the PWA format, but educating the market is a so-so idea. The plus: it is an excellent PWA, much better than Adalo’s PWA, in my subjective opinion.
The main thing is that getting started and understanding the program logic in Glide takes several hours. You can assemble a prototype, or even the final version of your own application, in one or two days. The proposed logic and the developer’s capabilities not only speed up onboarding and ease of operation, but can easily give you sensible ideas for UI/UX design or a feature set that would never have occurred to you on your own.

Adalo

MVP in the Adalo interface
Adalo is a mobile application builder that lets you solve one and the same problem in several ways at once and gives you flexible visual formats for solving it.

The database built into Adalo will be enough if your project does not involve a large array of data – say, no more than a few hundred rows across 3–5 tables – and you do not need to constantly update or replace this data in bulk or do much work with it outside the user-facing interfaces.

But sorting data by different fields, filtering, grouping, or building your own reports is something the Adalo database level does not allow. You can solve such problems by developing special internal interfaces in Adalo itself, similar to user screens, but this is an excessively expensive way, not effective for a large number of tasks.
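For the kinds of grouping and reporting Adalo’s database cannot do itself, the practical workaround is to pull the collection out (via export or API) and aggregate it outside the builder. A minimal sketch in plain Python, with hypothetical field names:

```python
# Group exported Adalo records by city and sum a numeric field --
# the kind of "own report" the builder's database level lacks.
# Field names and values are illustrative only.
from collections import defaultdict

records = [
    {"city": "Moscow", "category": "sport", "visitors": 12},
    {"city": "Moscow", "category": "music", "visitors": 40},
    {"city": "Kazan", "category": "sport", "visitors": 7},
]

report = defaultdict(int)
for rec in records:
    report[rec["city"]] += rec["visitors"]

print(dict(report))  # {'Moscow': 52, 'Kazan': 7}
```

The same few lines replace what would otherwise be a purpose-built internal screen inside the constructor.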

Nuances of the business approach


From a business point of view, the exact architecture is not so important. It becomes fundamental in the case of a highly loaded portal with many connections and transactions and a huge number of simultaneous visitors. In the case of our oil service there is no question of serious load or high-speed user interaction, which allows us to add to the existing site the extra functionality that will be needed to work with a mobile application.

The car oil service has one more nuance: in addition to the functionality they already have, additional functionality is expected in the mobile application. Therefore, it is necessary not only to connect the application to the site, but also to build certain modules on the backend side, in Bitrix itself, for a full-fledged mobile application and the new goodies to work.

Adding to a live system
We chose a scheme where the block of necessary modules is added on the site’s side, and this whole thing communicates with the application via a REST API. We do not hook the application to 1C directly, using what the client’s in-house developers have already done: we change the current system minimally so that they can continue to adequately support their own infrastructure. Something will still need to be rewritten for the application to work, but no more than 5% of the total infrastructure.

On the server side, we only add a separate REST API module to their current infrastructure, plus the necessary additions the software lacks. Thus, we kill two birds with one stone:

The admin panel remains simple and understandable for each user,
We create a minimum of problems for the client’s programmers.
The second point is often missed during development: when a new team joins an existing one, friction can arise – technical directors, instead of working, begin to argue, swear, and measure their knowledge against each other, and this rarely ends well for the business.

I am a strong supporter of the idea that when two teams work together, there should be a strict protocol for working in the modules that both teams have access to. For example, our new functions will live separately, without overlapping the existing functionality: there will be joint work via git, and no one will curse anyone, as so often happens in joint projects.
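The “separate module, no overlap” idea can be sketched in a few lines. The real backend here is Bitrix/PHP; this Python sketch only illustrates the routing principle, and every endpoint name and payload is hypothetical:

```python
# Sketch: all new mobile-app endpoints live under their own prefix and
# never touch the existing site's routes. Endpoints are hypothetical.
import json

API_PREFIX = "/rest/mobile/v1"

def handle_api(path):
    """Dispatch only requests under our prefix; everything else is untouched."""
    if not path.startswith(API_PREFIX):
        return None  # let the existing site's code handle it
    route = path[len(API_PREFIX):]
    if route == "/services":
        return json.dumps({"services": ["oil change", "filter change"]})
    if route == "/bookings":
        return json.dumps({"bookings": []})
    return json.dumps({"error": "not found"})

print(handle_api("/rest/mobile/v1/services"))
print(handle_api("/about"))  # None -> existing site code serves this page
```

Because the new module only ever answers for its own prefix, the client’s programmers can keep maintaining everything else exactly as before.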

The particular story above is a typical example of how a business grows from a classic corporate website first into a kind of automated service, and then hatches into a mobile application with the set of functions the client needs. Very often the mistake is made of tearing down everything that came before and rebuilding it anew. This does not protect against new mistakes, but it does increase time and costs.

The described case is a vivid example of how you can build a working scheme not “by the textbook”, spend less money, time, and nerves, and get a result of similar quality. This approach is cost-effective, convenient from the business-process point of view, and easier for the developers themselves.
Zerocoding Tools
Today there are more than a hundred different zero-coding tools*, and when I faced the task of choosing a toolkit, I did not try to study all of them; instead, I decided to look at the ones most talked about by people who had already plunged into this direction ahead of me.

  • links to several reviews: one, two, three

Over the past six months, I have used the following zero-coding tools:

Glide – mobile app builder
Adalo – mobile app builder
Airtable – database, spreadsheets
QuintaDB – Database
Integromat – data synchronization between different parts of a common IT solution
Zapier – data synchronization between different parts of a common IT solution
Sherpa RPA – software robots for automating routine processes
Quickly studied:

Bubble – app builder and database
Stacker – personal user accounts
Criteria for evaluation
What was critical for the final choice:

Ease of entry and use – for the last six months, development has fallen to me. My development experience was very small and mostly managerial; as for programming, I had only studied Basic at school.
Cost of use – for a startup at the Pre-Seed stage, available investments are always at a minimum, and most of the tools can only be used fully on paid plans.
Completeness of functionality – minimization of the overall stack of IT solutions.
Briefly about the project
Neighborhood Events is a search service for local leisure activities and events in the user’s location.

From the development side, this is a fairly heavily loaded technological information service, based on a relational database and a number of user interfaces, both mobile and web.

How we went through the traditional MVP development path
At first, we followed the traditional path for a technology startup:

From idea to product concept development.
We checked what problems we could solve with our product through customer development: we conducted a series of interviews and surveys of the potential audience.
Researched the market for its potential, players, business models and competitors.
We defined our own product hypotheses and business models.
We developed the first conceptual prototypes in Miro and Figma.
Prepared terms of reference for development and UI/UX design in Sketch.
Developed the MVP with a team of developers on React Native (a cross-platform mobile framework).
Made the first publication on Google Play.

First version MVP
It took us 7 months to complete these stages, and we were ready to tackle the pool of tasks directly related to the project’s pilot launch. But then the first lockdown happened in March 2020, and the whole topic of offline events became irrelevant for a long time – at least too long to launch a pilot and test all our key product hypotheses against reality.

From that moment our thrashing began, and it led to multiple mistakes and wasted time for the development team. We decided to pivot to online events – a close topic, with similar products, or so it seemed to us at the start of this pivot. But the main thing was the feeling that we had to move quickly, while interest around the “online” topic was still maturing and no one had yet released an intelligent search solution in this niche.

And instead of going through all the points listed above sequentially, we began to do everything in parallel, and some points were skipped or done superficially. Haste is always the enemy of value.

Eventually:

We exhausted the development team: we began changing the product on the fly, without a sensible technical specification, using the programmers’ labor for experiments.
We immersed ourselves in the online topic in parallel with development, and when we finally more or less mastered online events, it became obvious that such a product would not simplify life for either the end users or the event organizers.
Three months later, the race had burned everyone out. We conducted new research on the online-events market, which confirmed our doubts: it became clear that we needed to return to the original product idea.

What has happened so far:

Lost the development team.
Lost time.
Our ability to invest our own funds became more modest – we spent only on development, and even then got off lightly: we worked with a friendly development team, and our MVP was priced at market rates, without all the subsequent adjustments related to the online topic.

The second reason is the cost of professional developers


Many people have business ideas in their heads, but programmers’ salaries are so high that it is impossible to realize those ideas without investment.

No-code constructors are simpler than programming languages, which allows you to master them in a few weeks or months, and independently develop and launch a product without involving programmers.

The third reason is development speed.

When it comes to testing hypotheses, speed is important and quality comes second.

This is the reason landing page builders have become so popular. Not due to quality, but due to the speed of development. And the quality improves over time.

All this together creates a demand for development without code. But not everything is as smooth as it might seem.

Current No-code Issues
Let’s take a look at the flip side and, for the sake of symmetry, here are three disadvantages that can make you abandon No-code.

The first disadvantage is dependence on constructors

There is nothing you can do about problems on the constructor side. You can’t fix bugs that annoy you, you can’t change the priority of development tasks.

You don’t have access to the code, and if the constructor doesn’t work for some reason, then your project won’t work either. In other words, constructor problems are your problems too.

The second drawback is limited capabilities

Every constructor has limitations, and what seems like a simple task may not be feasible on a constructor. Even if you involve professional developers, not all restrictions can be bypassed.

You have to make compromises and look for workarounds, or completely abandon development without code.

The third disadvantage is poor scalability

The more complex the project, the more likely it is to fail when implemented on constructors, the slower it will work, and the fewer optimization tools you have, since there is no access to the code.

And the more complex and larger the project, the higher the tariff you will need, and at some point it ceases to be profitable.

Over time, No-code solutions become more reliable, scalable and flexible, but there are still disadvantages.

Can No-code replace programming?
By my subjective estimate, only a few percent of what a professional developer can do can be reproduced in No-code. I am a developer myself, and I can appreciate the difference.

Most often I come across the argument that in No-code it is impossible to do some specific, tricky thing from the interlocutor’s personal experience. The examples vary – an integration, a synchronization, an interface feature, whatever.

The argument is valid, but the question itself is posed incorrectly.

The correct question looks like this:

Can programming replace No-code?
When someone solves their problem on a constructor, the world has one less task for programmers. It may seem like a drop in the ocean, but that is what is happening.

Over time, the constructors will be able to build more and more complex projects – with personal accounts, subscriptions, personalization, complex interfaces, and so on.

Right now, No-code services let you build landing pages, small websites, online stores, online courses, marketing auto-funnels, and chat bots. The list will only grow.

This process will stop only if good programmers suddenly become much cheaper. Then the need for No-code may fall, and until then the demand will grow, because for a number of tasks it is faster and cheaper.

Nobody will replace anyone
Programmers will not be left without work, there will simply be fewer simple, same-type, template tasks. These tasks will increasingly be done without any code at all, but it is too early to say that programmers will not be needed.

After all, someone has to develop the No-code tools themselves.

Today I want to raise the problem of building a project that has both a website and a mobile application. It is exactly here that conflicts very often arise between programmers, who approach the task from a development point of view and want to do everything perfectly, and clients, who are primarily interested in the economic side of the question.

And there are especially many conflicts in cases where a business already has a website, and it suddenly needs a mobile application.

An example from my own experience: not long ago I was approached by guys who run their own network of specialized oil-change services. They work with all car models and specialize in fluids (mainly oil, but they change other fluids too). They are present in several cities, and their service’s working scheme can be depicted as follows:

There is a site to which visitors come;
After registration, they get access to a personal account, a client card, and ordering of any services;
In the inner loop there is 1C, to which customer requests fly off and where they are then successfully processed;
Part of the data is returned from 1C in the form of visit records, data on the next fluid change, and so on.
In general, nothing unusual: everything is more or less like a typical online store, only with services instead of goods, plus you need to book a slot at a station. And now they faced the question of developing a mobile application.

Reference approach
The classic approach of programmers to development looks like this:

First, the server part is built,
The admin panel is attached to it,
The server is integrated with the internal ERP,
Only then are a website and a mobile application built on top of this backend.
Both the site and the application in this case play the role of a frontend: both display data from the server and act as a tool for working with data, not for the administrator but for end customers. The result is the classic infrastructure of a classic client-server application: everyone is happy, everyone is satisfied.

However, things are not always so smooth: in the example above, the site and the server part are combined. Earlier they had just a corporate website, which was then developed into a service with a personal account. It turns out that to do everything properly, you would need to demolish what the previous programmers built over several years and redo it from scratch.

The problem lies precisely in approaching the architecture on the basis of what has already been done. Most often, programmers insist that the old scheme should be forgotten and rebuilt, citing the previous crutches and the need for refactoring in light of the new tasks. On average in the market, this costs one and a half to two million and takes about a year, including testing and debugging.

How No-code speeds up product development


Hello! I am a computer programmer. I love writing code and I want to share how the No-code approach allows me to do it better and more efficiently.

What is no-code
No-code is solving the problems programmers usually solve, without the programming itself. In the narrow sense, it is just a set of services; in the broad sense, it is an approach to development that lets you save time and money while still getting results.

The beauty of the new approach is that it is compatible with the old one, they can be easily combined with good results in a short time. This is what I want to demonstrate.

What was my problem
I am developing the Creatium website builder, and we have a free trial period, which unfortunately attracts scammers.

Bad people register, create websites and place malicious scripts on them, or impersonate others and fraudulently obtain customer data.

Further, bad people use a link to our test subdomain (*.creatium.site) with a good reputation and thus set us up.

Several times we were blocked by VK and Google, showing visitors of test sites the following message:

It may take up to a week for a block to be lifted. Fortunately, this did not affect the working second-level domains; they continued to work normally.

How we solved the problem before
The approach to solving the problem is simple – you need to detect and block malicious sites before they are detected by Google.

To do this, we compiled a list of suspicious phrases from sites that we blocked, and every time someone published a page in the editor, we looked for matches. This was done automatically, the results of the check came to a special channel in Slack.

Phrase examples: giveaway, viagra, location.href, password, etc.
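The core of that first version is a simple substring scan over every freshly published page. A minimal sketch of the idea (the real system then posted matches to Slack; the phrase list and page text here are illustrative):

```python
# Simplified sketch of the first-version check: scan a published page's
# text for suspicious phrases. Phrases and page content are illustrative.

SUSPICIOUS_PHRASES = ["giveaway", "viagra", "location.href", "password"]

def find_matches(page_text):
    """Return every suspicious phrase found in the page text."""
    text = page_text.lower()
    return [p for p in SUSPICIOUS_PHRASES if p in text]

page = "Enter your password to claim the giveaway!"
print(find_matches(page))  # ['giveaway', 'password']
```

Anything this returns would go straight into the Slack channel for a human to review.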

The first version of the site-verification system took two weeks to develop, and it had significant drawbacks:

Lack of analytics. Over several years we collected about 6,000 suspicious phrases, many of which never led to the discovery of a new malicious site. We had no idea which phrases were effective and which were just wasting our time.
Repeated triggers. If a page was re-published a day later, another notification would arrive in the Slack channel, even though we had already checked that page.
Weak checks. We tracked only phrases and often missed redirects to other sites and the loading of suspicious scripts.
At one point, it became clear that a new system was needed to correct these shortcomings.

A new approach to problem solving
We moved the list of suspicious phrases from the admin panel to Google Sheets and added columns to track effectiveness. It immediately became more convenient to work with.

It was

And so it became
We redesigned the way pages are checked. They now open in a virtual browser that keeps track of all external scripts, redirects, frames, and links. This part is done in code, since No-code does not yet know how to do this. Anything that looks suspicious is sent to the Slack channel.
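The real check uses a virtual browser; as a much simpler static approximation of the same idea, this sketch parses published HTML and lists references that point outside the trusted domain. The domain constant and sample HTML are illustrative:

```python
# Static approximation of the external-reference check: parse HTML and
# collect scripts, frames, and links whose host is outside the trusted
# domain. A real virtual browser would also see dynamic requests.
from html.parser import HTMLParser
from urllib.parse import urlparse

TRUSTED = "creatium.site"

class ExternalRefFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.external = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        url = attrs.get("src") or attrs.get("href")
        if url and tag in ("script", "iframe", "a", "link"):
            host = urlparse(url).netloc
            if host and not host.endswith(TRUSTED):
                self.external.append((tag, url))

html = '<script src="https://evil.example/x.js"></script><a href="/local">ok</a>'
finder = ExternalRefFinder()
finder.feed(html)
print(finder.external)  # [('script', 'https://evil.example/x.js')]
```

Each hit like this is exactly the kind of finding that gets forwarded to the Slack channel.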
If the “False Alarm” button is pressed in Slack, a script adds a flag to the database indicating that we trust this page and it should not be reported again when re-published.

The “Blocked” or “Fixed” button triggers another script branch that increments the block counter in Google Sheets so we can track effectiveness.

In addition, both buttons remove the notification in Slack, so the channel can be handled in the Inbox Zero style.
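The two button branches can be sketched in a few lines. The data structures here are hypothetical stand-ins for the real database flag and the Google Sheets counter column:

```python
# Sketch of the Slack button handling: "false alarm" marks the page as
# trusted so re-publishes stay silent; "blocked"/"fixed" increments the
# per-phrase hit counter that feeds the effectiveness stats.
# All structures are illustrative stand-ins.

trusted_pages = set()          # pages we should never report again
phrase_hits = {"giveaway": 0}  # per-phrase block counter (a Sheets column)

def on_button(action, page_url, matched_phrases):
    if action == "false_alarm":
        trusted_pages.add(page_url)
    elif action in ("blocked", "fixed"):
        for phrase in matched_phrases:
            phrase_hits[phrase] = phrase_hits.get(phrase, 0) + 1
    return "notification removed"  # both branches clear the Slack message

on_button("false_alarm", "https://demo.creatium.site/a", [])
on_button("blocked", "https://demo.creatium.site/b", ["giveaway"])
print(trusted_pages, phrase_hits)
```

In the real system, the glue between Slack, the database, and Google Sheets is an Integromat scenario rather than hand-written code.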

Saving time and money
If we had programmed all of this, it would have taken two weeks. As it was, I spent three days, working alone.

We saved time on creating interfaces by using the Slack channel and Google Sheets instead. We saved time on writing integrations between the individual parts of the system by using Integromat – although we now pay $30 a month for it.

The most valuable thing is that I can spend the saved time on the most important part of the system – the mechanism for checking suspicious sites, and make it high quality.

Future for No-code?
Developing something with the help of programmers is long and expensive, sometimes unpredictable. But you can do almost anything, if the budget allows.

No-code development is cheaper, takes less time, but has many limitations.

By combining these approaches, you can achieve better results than using each approach separately.
Does no-code threaten classical development?
I believe that in the future the no-code approach will be used more and more often, and more and more business tasks will be solved without code. But does this mean programmers will no longer be in demand and will eventually be replaced by constructors and neural networks? Let’s figure it out.

My name is Vyacheslav Grimalsky, I am the founder of the Creatium website builder. Obviously, I am an interested party, but I will try to be objective.

Briefly about No-code
No-code (also called zero-code) is solving the problems that programmers usually solve, without the programming itself – that is, development without code, using constructors.

There are two extremes
As a no-code advocate, I can’t help but debate this topic, and I often see two opposite extremes.

In one, people say that No-code is good for nothing, the hype will subside, and people will continue to quietly program for themselves, as they have always done.

At the other extreme, people say that programmers will soon have a very hard time, because soon there will be no work left. Here is an example of such an opinion.

Only the Sith deal in absolutes, so I want to put everything in its place.

Why is No-code becoming popular?
There are many reasons, here are a few.

The first reason is modern business education.

It teaches that hypotheses need to be tested, and that you should get feedback from potential customers as early as possible.

You may be right a thousand times over, but what’s the point if your product doesn’t sell? It’s a nightmare to spend a lot of money and time developing a product that ends up being useless. I have been through this myself, and I know many who have been in the same situation.

Therefore, if you have an idea, you need to build an MVP (Minimum Viable Product) and test it on real customers. Make sure they are willing to pay for it, and only then start full development in code, knowing that you are on the right track.

How to manage software development?

We compare domestic products for project management.

In 2022, we at Navicon, like many IT companies, faced the suspension of Jira, Trello, Asana, and a number of specialized products from the global IT giants. For users this means being unable to purchase new licenses or change plans – and even the gradual blocking of existing paid accounts. We were not ready to work in the free version, let alone with a high risk of losing access to the system altogether. And losing the project-management tools, task lists, lists of responsible managers, and outlines of project ideas for even one project is, for any system integrator, the beginning of chaos.

Therefore, we had to quickly look for affordable high-quality analogues. I share the ones that we paid attention to.

YouGile

The YouGile interface is quite simple and familiar.

A distinctive feature of YouGile is flexible interaction with boards, setting up project roles and sharing boards for related teams. There is a mobile application, which means that you can stay in touch even in line for a cup of coffee.

The functionality is pretty standard for Agile products: boards, deadlines, assignees, and attachments to tasks, plus subtasks in the form of checklists or a multi-level list. You can prioritize tasks and view the tasks of all team members, which is convenient when you need to assess specialists’ workload. True, this function has a minus: it is not clear on which day this or that piece of work should be performed.

Reporting in YouGile is rather limited, with no beautiful dashboards on the manager’s desktop. On the other hand, data export to Excel is available, so you can configure reporting yourself and download pre-configured reports. In addition, among all the products reviewed, only YouGile provides advanced configuration of access rights and role design, including connecting contractors to tasks.

Reporting download available

Strengths. There are familiar work tools: kanban cards and a Gantt chart. An extensive set of options for working with rights and roles. The product is very mobile-friendly: there is a convenient application, plus chats for specific tasks, so there is no need to get distracted by messengers. The browser version is very similar to its foreign counterparts in functionality, so migrating to it should not be difficult.

Weaknesses. There is no integration with a version-control system, and the UI leaves much to be desired – in general, there is something to work on.

The system is suitable for Agile companies. Focused primarily on small and medium teams.

WEEEK

WEEEK is great for weekly planning

A service built on the principle of weekly planning: you schedule tasks and work through them, and a notification reminds you of any that were not completed. It is very similar to Trello – there are boards with the same functionality, and all tasks are visible on one screen. Among the additional features is a calendar mode, which helps organize your working time.


It is convenient that the user can combine tasks into projects, and projects into separate workspaces. Within each individual task you can also create subtasks – and these are not checklists, as in simpler task trackers, but full-fledged cards. There are many built-in automation tools: task prioritization, assigning and reassigning a responsible person, and others. Thanks to this, the system lets you manage complex projects.

Within each task, you can create subtasks

Strengths. Deadlines are organized better than in YouGile: you can allocate a specific time slot for completing a task. A convenient system of push notifications is set up, including messages in Telegram. Boards can be flexibly customized. Since the application was created primarily for creative teams, there is a convenient system for uploading files – they can be attached directly in the application. The icing on the cake is an attractive, minimalist interface; after YouGile, the design feels very fresh.

Weaknesses. The lack of reporting and task analytics is limiting: progress can be tracked only over time. It is also not possible to specify labor costs in the task card.

The Kaiten service turned out to be more suitable for a large distributed team. It has many modules, from Trello-like boards to full Scrum Sprint management. You can work on Kanban and Scrum. All standard functionality is available: summary tables for sprints and projects, analytics by tasks, including at the company level, resource loading reports, including in the form of a Gantt chart, integration with GitLab. The interface is pleasing to the eye.

I was especially attracted by the flexible configuration of links between tasks: there is no predetermined hierarchy, which task trackers usually impose.

Convenient, eye-pleasing interface

Strengths. The functionality is well developed: you can customize boards, including for different business processes, differentiate the role model, and work with the customer's representative. You can also build analytics and charts in different dimensions: for example, sprint burndown charts, or aggregate values across sprints, work items, employees, and departments. In fact, all the functions an Agile team needs are available.

Weaknesses. With a large number of tasks on the boards, the system noticeably slows down. It is also not cheap.

It is well suited, for example, for projects that develop and implement solutions with a long life cycle and want to combine a task tracker, a planning system, and project management in a single space.

Why IT newcomers fail

“I need to feed my family”
This is a direct consequence of the aggressive advertising of courses that promise rapid income growth. It is good at least that people usually do not take out loans before leaving for IT; or so one hopes.

At first, pay in IT is low. Unless the newcomer is a recent graduate, there are almost always obligations behind them: family, children, a mortgage. The drop in income can be both deep and long, depending heavily on how determined the person is to survive in IT. Many simply cannot take it, especially men over 30.

I do not know where this idea comes from, but people sincerely believe that a financial cushion for 2-3 months is enough, and then the money will start flowing on its own. What happens after three months is predictable: "I want to, I try, and it even seems to be working, but I have obligations, I cannot let my family down."

Once again: I write this without irony or mockery. I have a family and obligations myself, and I started in IT with a salary of 5,000 rubles a month.

So, friends, men over 30: save up a cushion for at least six months. And do not burn bridges with either your previous job or the new one.

“I’m thinking…”
Oddly enough, this is also a reason for not surviving. In all seriousness: there are people who came to study and train but manage to form, voice, and zealously defend their own vision of the Way of the Programmer, and, accordingly, to criticize what the employer company offers.

No, people will listen, just out of curiosity. And they will offer you the freedom to choose your own path. Along with the freedom to pay your own salary.

From there it is about 50/50: some smile, apologize, and get to work; others leave with their heads held high.

"I'll coast through"
There are quite a few freeloaders. The pattern is basically the same as at university: blend into the crowd, somehow scrape a "pass", learn to solve a couple of typical problems of a certain profile, and settle into some cozy corner.

Unfortunately, this pattern persists because it sometimes works. The world of programming is rich and diverse enough that there is room even for non-programmers. But the survival rate makes it a poor bet.

“Mom / wife sent”
If it was the mother, then, as a rule, she sent her child to study programming at a university or college. If it was the wife, then simply to "get into IT", because "Snezhana's husband managed it." Snezhana's husband may well have managed it, but overcoming yourself is very difficult.

Especially and precisely because it was "mother sent" and "Snezhana's husband." The hierarchical instinct, coupled with an acquired inferiority complex, produces either an apathetic or a very nervous person who sits there day after day without understanding what he is doing here. The motivation to learn is correspondingly low.

And if it boils over inside, it starts to infect others. Such people look for kindred spirits, hold heart-to-heart conversations, all but incite rebellion, just to escape the forced necessity, or justify it by leading some not-very-confident novice programmer astray.

Please do not do this. It wastes a lot of good people's time, including your own.

"Everything is in the manuals and on the Internet"
There is an opinion that programming, like mid-level system administration, is a field where all the answers can be found on the Internet. To be fair, in certain areas this is really so: a significant share of tasks was algorithmized long ago.

But sooner or later a task that requires a creative approach will come along. It is fortunate if it arrives by the third day of the probation period: the realization will come faster that in programming you need to think a lot, invent, try, make mistakes, and feel your way blindly.

No matter how hard the developers of new technologies where code writes itself may try, programming has been and remains a creative profession. This, again, is not empty posturing: you will have to come up with solutions. Yes, it is no longer the year 2000 and there is plenty to build on, but that means building on, not stealing.

Unfortunately, there are non-survivors who refuse to think. They literally sit down, fold their hands, and demand: "Just tell me what code to write."

"I don't reach for the stars"
One of the most common mistakes is choosing the wrong starting point: sometimes the wrong company, or even the wrong department. There is nothing particularly terrible here; you just need to listen carefully to the interviewer's story about the company and not be afraid to state what you want.

Here is how it happens. One department trains seasoned programmers, another trains support staff. Both professions are important and necessary. A person wants to be in support but is afraid to admit it; who wants to see a condescending smile from HR? (Spoiler: there will be none; HR gets paid to close the position.)

So they stick around, suffer, fail to survive, and leave. The opposite also happens: people want to become programmers but, afraid of not getting in, settle for the bird in the hand. The place seems fine and the pay is decent, but they never actually entered IT. Then, oddly enough, the transition becomes even harder.

"I can't do it"
A fairly general reason, but a frequent one. A person came, sat down, did something, the mentor looked after them and helped, but at some point the trainee fell into depression and came to quit, passing a verdict on themselves: "I can't cope," "I can't do it," "the others are much better."
The problem is aggravated by the fact that, once sufficiently worked up, the person is no longer ready to listen to the mentor's arguments and feedback, because "everything is already decided" (or a new job has even been found). Naturally, the mentor and/or the boss get the blame: they did not notice in time, did not offer support, and so on.

But we are talking here about the reasons for not surviving, and this one, alas, occurs. The trainee is captured by the natural "flight" response and is not ready to freeze or fight.

A separate category – “well, I see.” Here it gets into your head that the rest are much better. No matter how much you explain – in no way. According to my observations, here the syndrome of an excellent student is often mixed in – you need to raise your resume, look at the scores for the Unified State Examination and the average for the diploma. A person is used to maintaining an inferiority complex in himself, and you, with your persuasion, only hinder him.

If you have come to enter IT, trust the assessment of only one person: your mentor, or whatever the role is called in your company. It is best to talk to them right away.

Detecting bots on a website using neural networks


A couple of years ago, like many other site owners in the Russian-speaking Internet (RuNet), I faced a sharp increase in visitors from social networks. At first this was pleasing, until I studied the behavior of these "users" in detail: they turned out to be bots. Worse, they badly damaged the behavioral factors that are critical for good ranking in Yandex, and to some extent in Google.

Studying Telegram channels devoted to manipulating behavioral factors (which is what most of these bots are used for), I reasoned that the bot developers must be making mistakes somewhere: somewhere they run into the impossibility of fully emulating the parameters of real browsers and the behavior of real visitors. From these two hypotheses came the idea of creating two neural networks. The first should identify a bot by its numerous browser parameters; the second, by its behavior on the site: how the user scrolls, clicks, and performs other actions.

Gathering the training data

The first thing needed to train a neural network is a sufficient number of training examples: visits that are known for certain to be bots and visits that are known for certain to be real people. Three parameters were used for this selection:

reCAPTCHA v3 score.

Whether the visitor is logged into Google, Yandex, VK services.

Uniqueness of Canvas Hash.

The essence of the last parameter is that an image is drawn on an HTML canvas element and the MD5 hash of that image is computed. Depending on the operating system and its version, the browser version, and the device, the image differs slightly, and so does the hash. Bots tend to add random noise to the image to make themselves harder to detect, and as a result their hashes are unique. Real people's hashes are not unique.
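As a minimal server-side sketch in Python (the actual canvas rendering happens in browser JavaScript; function and variable names here are my own, not from the original system), the uniqueness check might look like this:

```python
import hashlib

def canvas_hash(image_bytes: bytes) -> str:
    """MD5 fingerprint of the rendered canvas image."""
    return hashlib.md5(image_bytes).hexdigest()

def is_hash_unique(image_bytes: bytes, seen_hashes: dict) -> bool:
    """True if this canvas fingerprint has never been seen before.

    Real visitors share fingerprints with other users of the same
    OS/browser/device combination; noise-adding bots do not.
    """
    h = canvas_hash(image_bytes)
    count = seen_hashes.get(h, 0)
    seen_hashes[h] = count + 1
    return count == 0

seen: dict = {}
real = b"rendered-canvas-pixels-chrome-109-win10"
is_hash_unique(real, seen)               # first sighting of this fingerprint
unique_now = is_hash_unique(real, seen)  # a second visitor with the same stack: not unique
```

A bot that salts its canvas with random noise would keep producing hashes with `count == 0`, which is exactly the signal used for labeling.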

So, a visitor is real if:

reCAPTCHA v3 score >= 0.9.

Logged in to Yandex and to one of Google or VK.

The canvas hash is not unique.

And a bot if:

reCAPTCHA v3 score <= 0.3.

Not logged in anywhere, or logged in only to Yandex (bots that manipulate behavioral factors very often run under a Yandex profile).

The canvas hash is unique.

Data was taken from three informational sites over a period of one month. A little over forty thousand visits made it into the database, 25% of them bots and 75% real people. What exactly was collected is described below.
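The labeling rules above can be sketched as a small Python function (the function and set names are mine, not from the original pipeline):

```python
def label_visit(score: float, logged_in: set, canvas_hash_unique: bool) -> str:
    """Label a visit as 'human', 'bot', or 'unknown' per the rules above.

    score              - reCAPTCHA v3 score, 0.0..1.0
    logged_in          - subset of {"yandex", "google", "vk"}
    canvas_hash_unique - True if the canvas fingerprint was never seen before
    """
    if (score >= 0.9
            and "yandex" in logged_in
            and logged_in & {"google", "vk"}
            and not canvas_hash_unique):
        return "human"
    if (score <= 0.3
            and logged_in <= {"yandex"}
            and canvas_hash_unique):
        return "bot"
    return "unknown"  # ambiguous visits are excluded from the training set

label_visit(0.95, {"yandex", "google"}, False)  # -> "human"
label_visit(0.1, {"yandex"}, True)              # -> "bot"
label_visit(0.5, {"google"}, False)             # -> "unknown"
```

Visits that satisfy neither rule fall into "unknown" and do not enter the training set, which is why the labeled database is smaller than the raw traffic.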

Bot detection by browser settings

Although the bots run on browser engines, these are far from the full-fledged Google Chrome they try to impersonate. They have to emulate many parameters in order to look like a real user's browser. So let's try to train a neural network to find discrepancies between the emulated parameters and real ones. To do this, we collect the maximum amount of information about the browser, namely:

OS, OS version, browser name, browser version, gadget model, if possible.

Connection parameters – network type, speed.

Screen resolution, display window size, whether there is a scrollbar, and other display-related options.

WebGL parameters (video card model, memory size, etc.).

The types of media content that the browser can play.

What fonts are supported by the browser (the 300 most common fonts are analyzed).

WebRTC settings.

In total, this comes to several dozen parameters.

The next decision is which neural network architecture to use. Since the dataset is somewhat unbalanced, the first thing that came to mind was to try an autoencoder: train it on real people (the 75%) and interpret outliers as bots. The following architecture was used:

The result is this:

The total error is large. It is worth trying an ordinary classifier built on fully connected layers. The following architecture was chosen:

What happened:

The result is excellent! For screening out most of the bots used to manipulate behavioral factors, such a network is quite suitable.
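The article does not reproduce the layer definitions, so here is a hedged stand-in: a tiny fully connected classifier written from scratch in NumPy and trained on synthetic "browser parameter" data, purely to illustrate the approach (the real network, features, and layer sizes differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the browser-parameter dataset: real visitors
# cluster in one region of feature space, bots in another.
X_real = rng.normal(0.0, 1.0, size=(300, 8))
X_bot = rng.normal(2.5, 1.0, size=(100, 8))
X = np.vstack([X_real, X_bot])
y = np.concatenate([np.zeros(300), np.ones(100)])  # 1 = bot

# One hidden layer + sigmoid output: a minimal fully connected classifier.
W1 = rng.normal(0, 0.5, size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, size=(16, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(500):
    h = np.tanh(X @ W1 + b1)              # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()      # predicted P(bot)
    d_out = (p - y)[:, None] / len(y)     # gradient of binary cross-entropy
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)   # backprop through tanh
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

pred = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel() > 0.5
accuracy = (pred == y).mean()
```

On well-separated synthetic data like this, even a small network converges quickly; the real browser-parameter features are far noisier, which is why the autoencoder attempt above fell short first.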

But what if the budget allows running bots not just on a browser engine but on real Google Chrome? That requires significantly more resources but is technically easy to implement. For such a case this neural network is not suitable. But we can try to analyze the bot's behavior and compare it with the behavior of real people.

Detecting a bot by its behavior on the site

Good bots emulate the behavior of a real person: they click, scroll, and move the mouse along human-like trajectories. But they are probably wrong somewhere; maybe they have a slightly different distribution of events, different delays, different click locations, and so on. Let's try to collect as much data as possible about visitor behavior. To do this, we track the following events:

Mouse movement.

Scroll wheel.

Touch screen.

Page scroll.

For each event, collect the following parameters:

Duration.

Changes in the X and Y axis.

Rate of change in X and Y axis.

The number of elementary events the browser received.
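Aggregating one gesture's elementary events into these parameters might look like the following Python sketch (the function name, tuple layout, and units are my assumptions):

```python
def event_features(event_type: str, points: list) -> dict:
    """Aggregate a burst of elementary browser events into one feature row.

    points - list of (timestamp_ms, x, y) tuples for one gesture
             (mouse move, wheel, touch, or page scroll).
    """
    t0, x0, y0 = points[0]
    t1, x1, y1 = points[-1]
    duration = t1 - t0
    dx, dy = x1 - x0, y1 - y0
    return {
        "type": event_type,
        "duration_ms": duration,
        "dx": dx,                                  # change along the X axis
        "dy": dy,                                  # change along the Y axis
        "vx": dx / duration if duration else 0.0,  # rate of change, px per ms
        "vy": dy / duration if duration else 0.0,
        "n_events": len(points),  # elementary events the browser received
    }

row = event_features("mousemove", [(0, 10, 10), (16, 14, 10), (40, 30, 18)])
# row["duration_ms"] == 40, row["dx"] == 20, row["n_events"] == 3
```

A sequence of such rows, ordered by time, is exactly the kind of input a one-dimensional convolutional network can consume.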

The events form a time series, which means it makes sense to use one-dimensional convolutional networks. The optimal architecture turned out to be:

And the result is the following:

This result is also quite good. A disadvantage is that the visitor must spend at least 20 seconds on the page for enough actions to accumulate, so this network cannot be used to filter traffic at page-load time.

Steps for taking software to production

Service support in production, monitoring and fixing errors

After a release, it is important to monitor errors and atypical metric behavior for at least a day. The development team should have capacity to quickly fix bugs after the release; I recommend planning sprint tasks with this factor in mind. If major bugs are found after a release, the team should consider steps to prevent similar situations, for example by strengthening code review or testing. Any errors should also be clearly readable in the logs and metrics. If you learned about a serious error from users, it is worth introducing new metrics that would clearly signal such an emergency in the future.

Each company has its own development specifics, but I hope that the steps described above will be valid for a large number of web services. In the comments, share your vision of the correct rollout of the service in production and describe the steps that are used in your companies or projects.

What clients want: full compliance, cybersecurity monitoring without response, and less GosSOPKA

Rostelecom-Solar Blog


All customers are different. Some are tech-savvy and know exactly what they need. Others just want to work without failures and incidents, although they do not understand how to achieve that. And some do not understand information security at all but are sure they know exactly how to build the process. When it comes to tasks as complex as creating or rebuilding a Security Operations Center (SOC), the difference in approaches to information security becomes apparent. In this series of posts, we decided to share a selection of the most popular requests that companies bring to us as a service provider. We have already written about customers who are really good at IT and understand exactly what they want from a SOC, and about those who know firsthand what a SOC is but are not as immersed in the subject, which ultimately leads to peculiar task statements. In this post, we will talk about companies that see the main risks not in cyber threats but in inspections and fines from the regulator and missed deadlines, and so just want to hide behind a fig leaf.

A lot of requests from such customers are related to the topic of compliance:

“I want to receive notifications from you earlier than from the NKTsKI.”

“Just organize interaction with the State SOPKA, you have a license.”

"Install a piece of hardware so that everything works: like the FSB's sensor, but for us."

“Take responsibility for all functions of the GosSOPKA center.”

“Ensure compliance with the requirements of the Central Bank / FSTEC / FSB (underline as necessary)”.

In general, they all boil down to three main tasks:

Identification of threats, incidents, vulnerabilities before the regulator.

Formal closure of requirements during verification.

Transferring areas of responsibility to the service provider.

Let’s deal with each separately.

Identification of threats, incidents, vulnerabilities before the regulator

What is wrong with this request? It seems logical: the desire to receive important information as quickly as possible. But usually in this case the client does not need full-fledged monitoring, incident response, or a process for identifying, prioritizing, and eliminating infrastructure vulnerabilities. They want point alerts on indicators fully analogous to the GosSOPKA sensors, plus a set of measures to clean the perimeter of critical vulnerabilities on the eve of a regulator's check. Real security is of little concern in this scenario.

But usually behind such a request lies a lack of readiness to establish regular operational work with incidents and vulnerabilities on the customer's side and to build functioning information security processes. It is important to note here that the GosSOPKA sensor works on the closed database of the NCCCI, aggregated and prioritized from various sources, but this is not the only way the NCCCI learns of a compromise of a company's IT infrastructure. A service provider works much more broadly, sees more, and digs deeper, but without feedback from the customer it does not understand the context and cannot (in 99% of cases) independently decide whether the activity it detects is an incident or not. As a result, the number of notifications from the service provider runs to dozens per day (not one a month, as the customer expects), and it does not decrease. At this point the customer's specialists must choose: either start working in tandem with the service provider, or let the alerts pile up. In the second case, a "letter of happiness" from the regulator is sure to arrive, followed by the writing of explanatory letters. Fortunately, in recent years, against the backdrop of growing numbers of cyber incidents, many companies have begun to view the role of a service provider differently, so purely formal threat detection is becoming less and less common.

Formal closure of requirements during verification

Some companies perceive the regulatory burden as a formality that, in their opinion, has nothing to do with real cybersecurity. But that is not actually the case. Registering information security events is a useful requirement, especially if, after registration, the incident is confirmed, localized, mitigated, and recovered from; that is, a full response cycle. With a formal approach, the list of notifications from a service provider or one's own SIEM/IRP looks like an archive of the Lenin Library that is simply presented to the regulator during an inspection.

But the missed incident still remains in the area of ​​responsibility of the company that owns the infrastructure: CII, PD, GIS, and so on. And this smoothly brings us to the third type of requests.

Transfer of responsibilities to the service provider

The IS monitoring service provider is not responsible to the regulator for an incident that occurred at the customer's site; the owner of the infrastructure is responsible for what happens on it. Of course, if the provider itself caused the incident, that is another matter (though I hope such cases are extremely rare). Therefore, the approach to transferring particular areas of responsibility should be as reasoned and conscious as possible.

Steps from development to production

In asynchronous applications, a request UUID must be added to every log record so that the entire chain of logs for a particular call can easily be traced.
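In Python, for example, this can be done with `contextvars` (which survive `await` points, so the id stays correct in asynchronous handlers) plus a logging filter; a sketch with made-up handler and logger names:

```python
import contextvars
import logging
import uuid

# Holds the current request's UUID for the duration of one call chain.
request_id = contextvars.ContextVar("request_id", default="-")

class RequestIdFilter(logging.Filter):
    """Stamp every record with the current request UUID."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.request_id = request_id.get()
        return True

logger = logging.getLogger("app")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(levelname)s %(request_id)s %(message)s"))
handler.addFilter(RequestIdFilter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def handle_request():
    # Assign one UUID at the entry point; every log line in the chain carries it.
    request_id.set(uuid.uuid4().hex)
    logger.info("request started")
    logger.info("request finished")

handle_request()
```

Grepping the storage for one UUID then yields the full log chain of that call.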

If the logs are written in an understandable form and you can easily determine what exactly went wrong, the work has been done well. Experienced developers know it is better not to skimp on info-level logs: if there are too few of them, you will need to add code, redeploy to production, and try to reproduce the problem, which takes a lot of precious time.

For error level logs, I recommend using an additional tool – Sentry. It is convenient to view new and most massive errors in it. For errors of the Exception type, you can save a traceback, which greatly speeds up their correction. It will also be convenient to set up notifications about all new errors in the Telegram channel.

For collecting application metrics, a good choice is Prometheus, with Grafana for visualization.

Remember that the development and maintenance team must learn about all abnormal situations before users complain. So, while at the very first rollout of an application it may be enough to collect metrics for 4xx and 5xx errors, API method latencies, and a couple of important business indicators, as the code base and the importance of the service grow, you should keep adding more specific, narrowly focused metrics.

Step 1. Code review

If you treat this step carefully, there is a chance to catch a huge number of errors and avoid blushing in front of users. During code review it is not necessary to act as an interpreter and meticulously check every line; what matters is that the reviewer understands the general idea of each function and, where something is unclear, does not hesitate to ask the author. Also make sure that a unit test is written for each critical piece of code, and check that the parameters verified by the tests are correct. Total test coverage should reach 80% (coverage can be checked with third-party tools such as SonarQube).

Also, during code review it is important to check database migrations: make sure that adding a column or creating an index will not "hang" the database. If there is such a risk, move these steps into the pre-release stage and execute them under minimal server load.

Particular attention should be paid to SQL queries: search, update, and delete operations must always contain a WHERE clause (if all parameters passed into the function are null, the code should fail fast rather than send a query without conditions to the database).
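This guard can be sketched in a few lines of Python (the function name and SQL dialect placeholders are illustrative):

```python
def build_delete(table: str, filters: dict) -> tuple:
    """Build a parametrized DELETE, refusing to run without a WHERE clause.

    None-valued filters are dropped first: if every parameter arrived as
    None, the call fails fast instead of wiping the whole table.
    """
    filters = {k: v for k, v in filters.items() if v is not None}
    if not filters:
        raise ValueError("refusing to DELETE without WHERE conditions")
    clause = " AND ".join(f"{col} = %s" for col in filters)
    return f"DELETE FROM {table} WHERE {clause}", list(filters.values())

sql, params = build_delete("orders", {"user_id": 42, "status": "draft"})
# sql == "DELETE FROM orders WHERE user_id = %s AND status = %s"
```

The same pattern applies to UPDATE and SELECT helpers; an empty filter set should be an error, never an unfiltered statement.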

Also, for all suspicious queries it is worth running EXPLAIN (ideally on a replica of the production database) to make sure the query cost is low and indexes are used. Otherwise, add the missing indexes in the migrations.

Step 2. Pentest

To prevent the data your users store from becoming available to third parties, you need to conduct an application pentest (penetration test): a method of evaluating the security of computer systems or networks by simulating an attack. At Domclick, pentests are carried out by experts from the Cybersecurity department. If your company has no engineers specially trained in vulnerability scanning, I recommend taking at least a basic cybersecurity course for developers in order to avoid the most childish mistakes.

In my experience, the following set of actions will greatly reduce the risk of an attacker getting unprotected data:

Use up-to-date versions of libraries (older versions may have vulnerabilities). Choose the most popular libraries in the community.

When working with a database, avoid (or minimize) raw SQL with string concatenation. Make sure that the driver used to connect to the database automatically escapes dangerous special characters in queries (most SQL injections are based on adding special characters to a query and then executing a command the attacker needs).

When storing text that will later be shown to users (for example, comments), convert it to safe HTML so that the page does not end up executing embedded JS code.

For each API method, use role-based access control. Log under which user the operations of adding, changing, deleting data took place.
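For the HTML-escaping point above, a minimal Python sketch using the standard library (the wrapper name is mine):

```python
import html

def store_comment(raw: str) -> str:
    """Escape user text before it is persisted and later rendered as HTML."""
    return html.escape(raw)

safe = store_comment('<script>alert("pwned")</script>')
# "<", ">" and quotes become entities, so the browser renders text
# instead of executing the script.
```

In real services this is usually paired with an allowlist sanitizer when limited markup must be preserved, but escaping everything is the safe default.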

Note also that after fixing the vulnerabilities identified during the pentest, the code must pass code review again.

Step 3. Load testing

To understand whether the service can withstand an influx of users, you need to conduct load testing. This step will also show how many requests a single instance or pod of the service can handle, and let you correctly calculate the headroom for peak loads.

There are a lot of tools for load testing, and you can choose Apache JMeter or the wrk utility (for local tests) as the basic ones. To create a simulation of a complex load profile, you can write a script yourself with the necessary API calls and run it in the required number of threads.
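The same idea in miniature: a hedged Python sketch of a load generator that fires a callable from a thread pool and reports throughput and latency percentiles (for a real test, `call` would perform an HTTP request against the API under test):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_load(call, total_requests: int, threads: int) -> dict:
    """Fire `call` total_requests times from a thread pool, report latency."""
    latencies = []
    def one():
        t0 = time.perf_counter()
        call()  # e.g. an HTTP GET against the endpoint under test
        latencies.append(time.perf_counter() - t0)
    started = time.perf_counter()
    with ThreadPoolExecutor(max_workers=threads) as pool:
        for f in [pool.submit(one) for _ in range(total_requests)]:
            f.result()
    elapsed = time.perf_counter() - started
    latencies.sort()
    return {
        "rps": total_requests / elapsed,
        "p50_ms": latencies[len(latencies) // 2] * 1000,
        "p95_ms": latencies[int(len(latencies) * 0.95) - 1] * 1000,
    }

# Dry run against a stub instead of a real endpoint:
stats = run_load(lambda: time.sleep(0.001), total_requests=50, threads=10)
```

JMeter and wrk do the same thing at far greater scale and precision; the sketch only shows what "complex load profile in N threads" means mechanically.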

Step 4. Canary Release

Unfortunately, it is impossible to foresee every error when releasing new functionality, but their impact can be minimized with a canary release: a risk-reduction technique in which the new version is rolled out to a small subset of users and the share is gradually increased until the change is available to everyone. At each step of increasing traffic to the release version, you need to monitor errors and be ready to roll back to the old version at any moment (it is especially important to have a plan for rolling back database migrations in case of emergency).
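A sketch of the traffic-splitting logic (the stage percentages and function names are illustrative, not from the article): stable hashing means a user who saw the canary keeps seeing it as the share grows, so the audience only ever expands.

```python
import hashlib

# Hypothetical ramp-up plan: share of traffic routed to the canary version.
CANARY_STAGES = [0.01, 0.05, 0.25, 0.50, 1.00]

def routes_to_canary(user_id: str, canary_share: float) -> bool:
    """Stable per-user routing: the same user always lands on the same side,
    and raising the share only ever adds users to the canary."""
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 10_000
    return bucket < canary_share * 10_000

# As the share grows, the canary audience expands monotonically.
audience = [sum(routes_to_canary(f"user-{i}", s) for i in range(1000))
            for s in CANARY_STAGES]
```

In production this decision usually lives in the balancer or service mesh, with the rollback path being simply dropping the share back to zero.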

Web tools for software developers

▍ Mockito

Mockito specializes in creating test doubles, so-called "stubs" (mocks). With it, you can create your own implementations of interfaces and classes with the necessary behavior and verify that the code using them works correctly.

There is a good article on Mockito here.

Above, we have already talked about working with the database, and once again, returning to this issue, we cannot fail to mention the Hibernate framework.

Using Libraries

Before you try – check, or maybe someone has already done this …

This is often the case, so it is worth checking for existing code that solves your problem; other programmers have surely racked their brains over the same issue. Let's look at a few examples.

Business needs often require developers to work with inconvenient file formats such as Word or Excel. There is a fairly old and proven solution for this: the Apache Tika library. According to its developers, the library supports more than 1000 file formats, including Word, Excel, PowerPoint, PDF, etc.

Thus, the library allows parsing files in the listed formats and more.

Speaking of the graphical representation of business information, one of its main forms is the chart. A good free chart-generation library is JFreeChart. It provides a convenient API and can render charts in both vector (PDF, EPS, SVG) and raster (PNG, JPEG) formats.

Since web developers periodically have to deal with the JSON format, a suitable tool is needed, and the Jackson project provides one. It is not limited to JSON: the formats the library works with also include XML, YAML, and CSV, among others.

Increase in general erudition

The required development speed is provided to a very large extent by knowledge of the subject area and related fields. You could even say that "erudition is our everything" (it is not for nothing, after all, that there is a gradation into juniors, middles, and seniors, hehe).

In this sense, the following scheme, the so-called backend roadmap, is quite informative. It gives a general understanding of what kind of “gentleman’s” set of knowledge in subject areas a developer should have, depending on the needs of interaction with certain technologies. Yes, time goes by, and this scheme may, to a greater or lesser extent, cease to correspond to reality, but it gives a general idea, which is why it is interesting:

The original scheme at the link above is good because it is clickable and literally each of its elements leads to a page with information regarding the corresponding issue.

There is a similar scheme for front-end development:

In addition, there is a very curious detailed FAQ on the frontend.

However, the information contained in these diagrams and the FAQ should be taken critically, as material for general development rather than a strict checklist.

As an epilogue

Summing up, I would like to note that we have only skimmed the surface, and "behind the scenes" remain many other interesting topics: design patterns (at least the same MVC within Spring, since we are talking about it), data structures, optimization methods, agile development methodologies, and so on. But that would fill a whole book, not an article 🙂

If you try to single out the most important thing, it probably makes sense to put erudition first: surely everyone has at least one or two stories of technologies and approaches studied "for the future" coming in handy. If only the day had a few more hours than 24.

Rolling out a service to production: 6 steps to a successful release

Domclick company blog


There are many guides and instructions for creating basic back-end applications, as well as step-by-step tutorials on building an application and deploying it to a server, and detailed instructions for popular CI/CD tools. The steps they describe are enough to launch a pet project, but full-fledged applications that must withstand peak loads from a large number of users and still work smoothly need more thorough and higher-quality preparation. Below I describe the steps required of engineers on my teams when first deploying a web application to production and when rolling out large features.

Step 0. Logging and adding metrics

Before rolling out an application to production, it is very important to correctly configure logging levels for technical messages and errors, set up their delivery to the log storage, and instrument all important indicators with metrics. For collecting and viewing logs, I recommend the well-established ELK stack (Elasticsearch, Logstash, Kibana).

After correct configuration, all stdout logs of the service will end up in the log storage and be available for viewing in Kibana. Set the logging levels as follows:

info – for typical messages.

warning – for code paths that were not expected to be reached. Warning-level logs should be reviewed and analyzed from time to time.

error – for errors.

debug – use this level as sparingly as possible, because debug messages take up a lot of space and degrade search. If you still need the debug level to fix problems or during service startup, be sure to agree on a date when this type of log will be disabled.
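In Python, these levels map directly onto the standard `logging` module; a minimal configuration sketch (the service name is made up):

```python
import logging
import sys

# All service logs go to stdout so the cluster's log collector
# (e.g. Logstash) can ship them to Elasticsearch.
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,  # debug is off by default; enable only temporarily
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
    force=True,
)

log = logging.getLogger("payment-service")

log.info("order accepted")                  # typical messages
log.warning("retry queue is non-empty")     # a code path we did not expect
log.error("payment provider returned 500")  # errors
log.debug("raw provider response: ...")     # suppressed at INFO level
```

With `level=logging.INFO`, the debug line above is silently dropped, which is exactly the desired production default.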