Web tools for software developers

▍ Mockito

Mockito specializes in creating substitute objects, so-called “stubs” (mocks). With it you can supply your own implementations of interfaces and classes with exactly the behavior a test needs, and then check that the code under test works with them correctly.
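
For illustration, here is a minimal sketch of a JUnit 5 test that stubs a hypothetical PriceService interface with Mockito (the interface and values are invented for the example):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;

class PriceServiceTest {

    // Hypothetical collaborator, defined here only for the example.
    interface PriceService {
        double priceFor(String productId);
    }

    @Test
    void stubReturnsCannedAnswer() {
        // Create the stub and fix its behavior for one input.
        PriceService prices = mock(PriceService.class);
        when(prices.priceFor("book-1")).thenReturn(9.99);

        assertEquals(9.99, prices.priceFor("book-1"), 0.001);

        // Verify that the stubbed method was actually called.
        verify(prices).priceFor("book-1");
    }
}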

There is a good article on Mockito here.

We have already talked about working with databases above, and returning to the topic once more, we cannot fail to mention the Hibernate framework.

Using Libraries

Before you write it yourself, check – maybe someone has already done it …

This is often the case, so it is worth looking for existing code that solves your problem: other programmers have almost certainly racked their brains over the same issue. Let’s look at a few examples.

Business requirements often force developers to deal with not-so-convenient file formats, among them Word and Excel. There is a fairly old and proven solution for this: the Apache Tika library. According to its developers, the library supports more than 1000 file formats, including Word, Excel, PowerPoint, PDF, etc.

Thus, the library lets you parse files in all of the listed formats and more.
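
As a sketch of how this might look (the file name is a placeholder; the Tika facade class needs the tika-parsers dependency on the classpath):

import java.io.File;
import org.apache.tika.Tika;

public class TikaExample {
    public static void main(String[] args) throws Exception {
        Tika tika = new Tika();
        File file = new File("report.docx");   // placeholder path

        // Detect the MIME type, then extract the plain text content.
        System.out.println("Type: " + tika.detect(file));
        System.out.println("Text: " + tika.parseToString(file));
    }
}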

Speaking of the graphical presentation of business information, one of its main forms is the chart in all its variety. A good free chart-generation library is JFreeChart. It provides a convenient API and can render charts in both vector (PDF, EPS, SVG) and raster (PNG, JPEG) formats.
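
A minimal sketch of generating a chart (class names follow JFreeChart 1.5; in the older 1.0.x releases ChartUtils was called ChartUtilities, and the data values here are invented):

import java.io.File;
import org.jfree.chart.ChartFactory;
import org.jfree.chart.ChartUtils;
import org.jfree.chart.JFreeChart;
import org.jfree.data.general.DefaultPieDataset;

public class ChartExample {
    public static void main(String[] args) throws Exception {
        DefaultPieDataset dataset = new DefaultPieDataset();
        dataset.setValue("Online", 62.0);
        dataset.setValue("Retail", 38.0);

        JFreeChart chart = ChartFactory.createPieChart(
                "Sales by channel", dataset, true, true, false);

        // Raster output; vector formats (PDF, SVG) need companion libraries.
        ChartUtils.saveChartAsPNG(new File("sales.png"), chart, 640, 480);
    }
}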

Since web developers periodically have to deal with the JSON format, a suitable tool for working with it is needed, and the Jackson project provides one. It is not limited to JSON, either: the formats the library works with include XML, YAML and CSV, among others.
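
A small sketch of round-tripping an object through JSON with Jackson’s ObjectMapper (the User class is invented for the example):

import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonExample {

    // Simple data holder; Jackson binds JSON fields to these public properties.
    public static class User {
        public String name;
        public int age;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        User user = new User();
        user.name = "Alice";
        user.age = 30;

        String json = mapper.writeValueAsString(user);   // {"name":"Alice","age":30}
        User parsed = mapper.readValue(json, User.class);
        System.out.println(parsed.name);
    }
}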

Broadening general erudition

The required development speed comes, to a very large extent, from knowing the subject area and related ones. You could even say that “erudition is our everything” (it is not for nothing that developers are graded into juniors, middles and seniors, hehe).

In this sense, the following scheme, the so-called backend roadmap, is quite informative. It gives a general idea of the “gentleman’s set” of knowledge a developer should have in various subject areas, depending on which technologies they need to work with. Yes, time passes, and the scheme may to a greater or lesser extent drift out of line with reality, but it conveys the overall picture, which is what makes it interesting:

The original scheme at the link above is good because it is clickable: literally every element leads to a page with information on the corresponding topic.

There is a similar scheme for front-end development:

In addition, there is a very curious detailed FAQ on the frontend.

However, the information in these diagrams and the FAQ should be taken critically, as food for thought and material for overall development.

As an epilogue

Summing up all of the above, I would like to note that we have only skimmed the surface, and many other interesting topics remain “behind the scenes”: design patterns (at the very least MVC within Spring, since we have been talking about it), data structures, optimization methods, agile development methodologies, and so on and so forth. But that would make a whole book, not an article 🙂

If you try to single out the most important thing, it probably makes sense to put erudition first: almost everyone has at least one or two stories of technologies and approaches studied “for the future” coming in handy. If only the day had a few more hours than 24.

Rolling out a service to production: 6 steps to a successful release

Domclick company blog · Website development · Programming

There are many guides and instructions for creating basic back-end applications. You can also find step-by-step tutorials on how to build an application and deploy it to a server, or detailed instructions for popular CI/CD tools. The steps described in them are enough to launch pet projects, but full-fledged applications that must withstand peak loads from a large number of users and still work smoothly need more detailed and thorough preparation. Below I describe the steps required of engineers from my teams when first deploying a web application to production and when rolling out large features.

Step 0. Logging and adding metrics

Before rolling an application out to production, it is very important to correctly configure the logging levels for technical messages and errors, set up their recording in the log storage, and also cover all important indicators with metrics. For collecting and viewing logs, I recommend the well-established ELK stack (Elasticsearch, Logstash, Kibana).

After correct configuration, all of the service’s stdout logs will be stored in the repository and available for viewing in Kibana. Set the logging levels as follows:

info – for typical messages.

warning – for code paths that were never meant to be entered. Warning-level logs need to be reviewed and analyzed from time to time.

error – for errors.

debug – this level is best used as infrequently as possible, because debug messages take up a lot of space and degrade search. If you still need the debug level to diagnose problems or while starting up the service, be sure to agree on a date when this type of logging will be disabled.
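
To make the levels concrete, here is a sketch of how they might be applied in Java with SLF4J (the Order type and chargeCustomer method are placeholders invented for the example):

import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderHandler {
    private static final Logger log = LoggerFactory.getLogger(OrderHandler.class);

    // Minimal placeholder type for the example.
    record Order(long id, List<String> items) {}

    void handle(Order order) {
        log.info("Processing order {}", order.id());                  // typical message
        if (order.items().isEmpty()) {
            log.warn("Order {} arrived with no items", order.id());   // unexpected branch
        }
        try {
            chargeCustomer(order);
        } catch (Exception e) {
            log.error("Payment failed for order {}", order.id(), e);  // error with stack trace
        }
        log.debug("Raw order payload: {}", order);                    // verbose; use sparingly
    }

    private void chargeCustomer(Order order) { /* stub for illustration */ }
}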

Web development setup

Any Java programmer’s work rests on a foundation, which means not only direct knowledge of the programming language itself but also knowledge of additional components, without which programming in its pure form becomes quite difficult, or uncompetitive in terms of time. That is what we will talk about in this article.

The following presentation does not claim to be complete, but it may be useful to someone.

Using frameworks comes first

▍Spring

This framework is at the top of the list of tools. It speeds up development because it consists of many modules, each of which is responsible for a separate area and, in essence, serves to inject the corresponding dependencies.

You could say it is a kind of framework with which you can solve many typical tasks, each handled by the corresponding module.

Speaking more concretely about speeding up development with a particular Spring module: I have, for example, seen reports of an 80% reduction in the amount of code when using the Spring Data module.

Spring can be used to create enterprise-scale web applications, and back-end developers use it as well. However, its scope is not limited to these two areas: it can also be used for mobile or desktop development.

You can find the complete list of Spring modules, each covering particular tasks, on the official website at this link:

As you can see, there are quite a few modules; nevertheless, some stand apart, and you will often see them in the requirements of various vacancies.

▍Spring Boot

It serves to simplify the configuration of Spring for a specific project and contains a number of utilities that streamline the process, since setting up pure Spring can take a long time.

For those who want to get acquainted with it, there is a good manual for working with Spring Boot, as well as an official manual for a quick start in 3 steps.
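
For a feel of how little configuration a minimal Spring Boot application needs, here is a sketch (the endpoint is invented for the example):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication   // turns on auto-configuration and component scanning
@RestController
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @GetMapping("/hello")
    public String hello() {
        return "Hello from Spring Boot";
    }
}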

▍Spring Data

If an application needs to store data in relational or non-relational databases, it makes sense to use this module, which provides the mechanisms for interacting with specific databases of various types. The key concept of Spring Data is the repository; a good article on working with this module can be found here.
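
A sketch of the repository idea (the entity and query method are invented for the example; recent versions use the jakarta.persistence package, older ones javax.persistence):

import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;

@Entity
class Customer {
    @Id @GeneratedValue Long id;
    String email;
}

// Spring Data generates the implementation at runtime; the query behind
// findByEmailContaining is derived from the method name itself.
interface CustomerRepository extends JpaRepository<Customer, Long> {
    List<Customer> findByEmailContaining(String fragment);
}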

▍ Hibernate

Digressing slightly from the topic, it makes sense to mention this framework. It significantly reduces the amount of hand-written code for interacting with databases and represents work with them through an object-relational mapping (ORM) model, in which class fields are linked to the corresponding columns of tables in the database.

Thanks to Hibernate, the amount of low-level code can be reduced: it takes over the interaction with the database, and the developer works in code with a convenient virtual representation of the database.
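
A sketch of what that mapping looks like in code (the table and fields are invented for the example; the annotations live in jakarta.persistence in recent versions, javax.persistence in older ones):

import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import jakarta.persistence.Table;

// Each field maps to a column; Hibernate generates the SQL that loads
// and persists rows of the "products" table as Product objects.
@Entity
@Table(name = "products")
public class Product {

    @Id
    @GeneratedValue
    private Long id;

    @Column(name = "title", nullable = false)
    private String title;

    @Column(name = "price_cents")
    private int priceCents;

    // getters and setters omitted for brevity
}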

Thanks to a large developer community and an extensive body of theory, learning this framework is greatly eased. In addition, it can interact with almost any database.

There is a good article here about using Hibernate in a project for the first time.

▍Spring Security

Any application built on Spring needs protection, and that is exactly what this tool is for: it provides support for authentication and authorization, protection against attacks, and so on.
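
As a sketch, a minimal configuration in the lambda DSL of recent Spring Security versions (older versions used WebSecurityConfigurerAdapter and antMatchers instead; the paths are placeholders):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class SecurityConfig {

    // Require authentication everywhere except explicitly public pages,
    // and enable form-based login.
    @Bean
    SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http
            .authorizeHttpRequests(auth -> auth
                .requestMatchers("/", "/public/**").permitAll()
                .anyRequest().authenticated())
            .formLogin(form -> form.loginPage("/login").permitAll());
        return http.build();
    }
}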

In addition, since we are talking about web applications, it makes sense to mention two more modules.

▍Spring REST

Strictly speaking, REST is an architectural style of interaction between components within a computer network, in which representational state is transferred: components physically remote from one another exchange data in a certain uniform style. Within this style, data exchange between web agents can take place in different formats – JSON, XML, etc. Services that interact this way are also called RESTful.

There is a good article here about developing a RESTful service.
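
A sketch of a RESTful resource in Spring (the model and data are invented for the example): the URL identifies the object, the HTTP method expresses the operation, and the payload travels as JSON.

import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/api/books")
public class BookController {

    record Book(long id, String title) {}   // placeholder model

    @GetMapping("/{id}")
    public Book get(@PathVariable long id) {
        return new Book(id, "Example title");
    }

    @PostMapping
    public Book create(@RequestBody Book book) {
        // A real service would persist the book here.
        return book;
    }
}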

▍Spring Web Services

Designed to facilitate the creation of services that support the SOAP protocol and exchange data in XML format. A detailed official guide to creating a SOAP service is available here.

By the way, if your application is based on a microservice approach, then it makes sense to use a message broker to communicate between individual services.

Of the well-known brokers, two come to mind: RabbitMQ and Apache Kafka. The former is aimed more at implementing fairly complex message-routing scenarios, while the broker from Apache targets scalable, high-load systems and also provides the ability to store messages and read them back (for a desired time period) for analysis.
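
To show the idea, a sketch of publishing an event with the plain Kafka client (the topic, address and payload are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderEventPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Publish an event; consumers in other services read it independently,
        // and the broker retains the message for later re-reading and analysis.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "order-42", "{\"status\":\"created\"}"));
        }
    }
}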

Testing

The quality of the developed software depends directly on how error-free the final code is. Unit testing and integration testing help ensure that quality: the former checks each individual module – the correct operation of code elements down to each individual method – while integration testing checks the correctness of the code as a whole and the interplay of its individual components.

▍ JUnit

For testing in Java, the JUnit framework is used. In its latest version it consists of three components:

JUnit Jupiter – the new programming model and extension model for writing tests.

JUnit Platform – the foundation that launches testing frameworks on the JVM.

JUnit Vintage – provides backwards compatibility for running tests written for previous versions of JUnit.
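
A minimal Jupiter-style test might look like this (the Calculator class is invented for the example):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class CalculatorTest {

    // Hypothetical class under test.
    static class Calculator {
        int divide(int a, int b) { return a / b; }
    }

    @Test
    void dividesEvenly() {
        assertEquals(4, new Calculator().divide(8, 2));
    }

    @Test
    void rejectsDivisionByZero() {
        assertThrows(ArithmeticException.class, () -> new Calculator().divide(1, 0));
    }
}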

But testing in its purest form with this framework alone is not very convenient: unit tests need instances of the classes the code under test depends on, and their functionality must be constrained to provide specific behavior within the test. Arranging this by hand is inconvenient and error-prone.

This is what the following framework is designed for.

How hackers work with IDOR

To find IDORs, hackers intercept API requests and substitute new identifiers into them using a web proxy such as Burp Suite. Sometimes they rely on luck and brute-force IDs, but there are more elegant techniques, such as swapping session tokens.

To find an IDOR:

1. Create two users and save their session tokens. A token or session ID is any string in the API that the application uses to identify the logged-in user.

2. Log in as the first user, perform a series of actions in the application, and record them with a proxy.

3. Look through the traffic and find the API call that passes an object ID to the server.

4. Repeat that call: intercept it, edit it, and send it to the server with the second user’s session token.

If the server responds with an authorization error, there is most likely no IDOR. But if the backend returns data about the object, compare the responses to the normal and tampered requests. If they are identical, the application is vulnerable.

In Burp Suite, such checks are partially automated by plugins such as AuthMatrix or Autorize. They take away the routine and let you filter the results (for example, in Autorize via the Scope items only flag and regular expressions). However, these plugins are just a handy tool; the main thing in this kind of bug hunting is experience and an understanding of how the application works.

You need to find out what roles and groups are provided in the application and how they interact.

What is the difference between manager, driver and administrator and what functions are available to each of them?

It is desirable to build a map of relationships between resources.

How are orders, checks and goods related? Can one user place orders under someone else’s name?

It is worth exploring the features of the REST API.

This set of rules forces developers to act in a pattern that can be used against them. Let’s say you find an endpoint that exposes a REST resource.

GET /api/chats/<chat_id>/message/<message_id>

Try replacing GET with another HTTP method. If that doesn’t work, add a Content-Length HTTP header or change the content type.

Why there are so many IDORs

In the past few years, IDORs have been everywhere – several can turn up even in one small application. It seems to me that there are objective reasons for this:

More identifiers are sent from clients.

In the past, the server could track user actions directly, but in modern applications clients pass ever more data to the API with each request.

The old IDOR defenses are no longer used.

Developers used to replace real object IDs with temporary ones that were valid only for the given user and a single session. To do this, a separate table was kept on the backend in which each object had a temporary identifier. This practice has lost its relevance because it does not sit well with the principles of the REST API, which, among other things, is supposed to be stateless and keep no client state on the server.

Role models are getting more complex.

Even if an application has a robust mechanism for checking user rights, it still needs to be configured properly. It can be difficult for a developer to know whether user X should have access to file Y – especially if the user is a regional manager who belongs to one of a dozen subtypes within the role model. Fine-tuning the authorization mechanism is further complicated by misunderstandings between developers and the end users of the system. As a result, users are often left with redundant permissions, just so that the features they need are not accidentally cut off.

Defending and Eliminating IDOR Are Not the Same Thing

There are many recommendations online for combating IDOR, but many of them are confusing. The authors of such tips often list ways to mitigate the risk of the vulnerability and pass them off as ways to fix it. I mean recommendations like:

Use of random identifiers.

Most programming languages provide cryptographic functions that generate new values with high entropy. If you use them to create object identifiers, it becomes harder for attackers to guess a valid ID to exploit an IDOR.

The use of hashes.

Another way to make identifiers harder to tamper with. It is given, for example, in the OWASP cheat sheet. However, hashes can be guessed. Base64, by the way, which is sometimes used for this purpose, is not a hash function at all and decodes without any problem.

Using JWT (JSON Web Tokens).

Such tokens protect against some manipulation of user IDs, but do not solve problems with object IDs.

Filtering user input before it is processed by the application, validating ranges, lengths, and formats.

Perhaps the most useful of these recommendations; the main thing is to configure the filter correctly.

However, none of these methods solves the problem of access control or eliminates IDOR – they only make the problem worse. And external security systems, such as web application firewalls, will not save you from this type of vulnerability either.

The fact is that IDORs are closely tied to the business logic of the application. The only way to reliably get rid of IDOR is to properly implement session management and user access checks at the level of individual objects. That way, even if an attacker finds and changes an internal reference, he still will not gain unauthorized access.
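
To make the idea concrete, here is a sketch of an object-level check in Java/Spring style (the Document types are placeholders; a real application would have its own domain model and error handling):

import java.security.Principal;
import java.util.Optional;

import org.springframework.http.HttpStatus;
import org.springframework.security.access.AccessDeniedException;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.server.ResponseStatusException;

@RestController
public class DocumentController {

    // Placeholder domain types for the sketch.
    record Document(long id, String ownerId, String body) {}
    interface DocumentRepository { Optional<Document> findById(long id); }

    private final DocumentRepository documents;

    DocumentController(DocumentRepository documents) { this.documents = documents; }

    @GetMapping("/api/documents/{id}")
    Document get(@PathVariable long id, Principal principal) {
        Document doc = documents.findById(id)
                .orElseThrow(() -> new ResponseStatusException(HttpStatus.NOT_FOUND));
        // The key step: verify that *this* user may access *this* object,
        // instead of trusting the identifier supplied by the client.
        if (!doc.ownerId().equals(principal.getName())) {
            throw new AccessDeniedException("Not your document");
        }
        return doc;
    }
}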

Of course, every application is different. There is no universal way to implement access control, but in any case, this mechanism should be well designed and tested according to certain patterns.

It is worth checking the scenario in which a low-privileged user tries to perform actions meant only for high-privileged users. The verification scheme is similar to the one used in attacks:

1. Log in to the application under an account with the highest privileges.

2. Perform a series of actions in the application and record the API requests with a proxy.

3. Authenticate to the application with a lower-privileged account to obtain a token for the Authorization header.

4. Replay the recorded API requests with the Authorization header changed to the low-privileged user’s token.

The next step is to develop and run unit tests to cover edge cases – situations where:

The user is not authenticated, for example the Authorization header is missing or invalid.

The user is authenticated but not authorized to access the resource.
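
A sketch of such tests with Spring’s MockMvc (the endpoint and token are placeholders carried over from the example above):

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.web.servlet.MockMvc;

@SpringBootTest
@AutoConfigureMockMvc
class DocumentAccessTest {

    @Autowired
    MockMvc mvc;

    private static final String USER_B_TOKEN = "...";   // obtained in test setup

    @Test
    void anonymousRequestIsRejected() throws Exception {
        // No Authorization header at all.
        mvc.perform(get("/api/documents/1"))
           .andExpect(status().isUnauthorized());
    }

    @Test
    void foreignObjectIsForbidden() throws Exception {
        // A token of a user who does not own document 1.
        mvc.perform(get("/api/documents/1")
                .header("Authorization", "Bearer " + USER_B_TOKEN))
           .andExpect(status().isForbidden());
    }
}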

Finally, full integration testing is required, taking the edge cases into account. When testing an API, it is desirable to test every method of every endpoint. Unfortunately, there is no silver bullet against IDOR – only testing, testing and more testing.

Storing neural networks in the HTMS table-network DBMS has obvious advantages.

Firstly, you can store a large number of similar neural networks that differ in the parameters used in training – the number of epochs, the learning rate and the bias value, and hence the weight coefficients – which lets you pick the best one after a round of checks on them.

Secondly, different neural networks can be stored in the same HTMS database and selected depending on a rapidly changing environment when solving such problems. HTMS, thanks to its high performance – itself a consequence of the table-network data model – ensures fast loading of the currently needed neural network into RAM.

Thirdly, the use of an adequate DBMS can help for scientific and educational purposes.

Fourthly, the features of the table-network data model and its implementation in HTMS make it possible to efficiently store in the database, and read back from it, neural networks with tens of thousands of input and hidden nodes. For example, the approximate size of a database for one MLP with 1000 input nodes and 1000 hidden-layer nodes would be:

99% of what I do is exploiting avoidable mistakes. Today I will talk about IDOR, one of the most common and easiest-to-exploit web vulnerabilities. With it you can view other people’s photos on a social network or get a discount in an online store – or you can earn thousands of dollars in bug bounties.

Using practical examples, I will show how hackers find and exploit business-logic errors in applications, and give practical advice on how to fix them at the development stage.

IDOR – what it is and how it works

I’ll start with the basics. A web application manipulates certain entities. For example, on an online store’s website these are products, users, carts, promo codes, and so on. Each instance of such an entity is treated as a separate object with its own identifier. ID 483202, pid 6260 – every application is full of such values.

It is assumed that the user manipulates objects through the interface, within the application’s logic, and that the application shows only those objects the user is allowed to interact with. However, an attentive user will notice the identifiers of these objects, for example in the address bar, and a hacker will certainly try to change them. That way you can reach other objects directly, bypassing the application logic and its prohibitions.

This vulnerability is called IDOR (Insecure Direct Object Reference). It occurs when three conditions are met simultaneously:

the user can find a direct reference to an internal object or operation;

the user can change the parameters in this link;

the application grants access to an internal object or operation without checking the user’s rights.

Let’s take the link to this article as an example: the identifier 686464 is part of it, and it can be replaced with another number. Two of the three conditions are met.

By iterating through the numbers, sooner or later you will hit a link to someone else’s draft, for example this one. If such a link opens – congratulations, you have found an IDOR. On Habr this does not happen, because the third condition necessary for an IDOR is not met: a correct authorization mechanism is in place.

Changing a URL is the classic example of an IDOR, but vulnerable identifiers are not found only in the address bar. Looking at the bug-report statistics on HackerOne, it turns out that IDORs are most often found in REST APIs, GET parameters, and the bodies of POST requests.

Risks and Consequences of IDOR

The danger of this type of vulnerability depends strongly on what data, and what operations on it, are available to the attacker. Conventionally, IDORs are divided into four types (in practice they often overlap):

1. Gaining unauthorized access to data

Sometimes direct object references give access to the contents of databases: individual fields, or internal identifiers that make it possible to prepare SQL injections.

I recently encountered a similar error on the portal of a new social network. When querying the GET /feed/gallery/uuid endpoint, the server returned users’ personal data: phone numbers and email addresses.

2. Performing unauthorized transactions

By changing your user ID or API keys, you can access paid app features and even run commands as an administrator.

In this example, the DELETE /accounts/{uuid} method was available without authorization, allowing an arbitrary user account to be deleted by specifying a valid UUID. As a rule, such an identifier has high entropy and is not easy to brute-force, but when an IDOR like this is combined with other vulnerabilities, it is very dangerous.

In this case, the resource under study allowed unauthorized access to a number of endpoints containing the page_size parameter, which controls the paging of user listings. A suitably modified request made it possible to dump user information en masse and without authorization, including the UUIDs required to exploit the IDOR.

3. Managing Application Objects

Some IDORs allow you to edit data inside the application. Such a vulnerability can let an attacker modify session variables – for example, to escalate privileges or gain access to restricted functionality.

This is a screenshot from a pentest of one of the delivery services. It turned out that an API intended only for company employees was reachable from the client application. The IDOR gave the client the full functionality of an employee, such as viewing the status of vehicles and the ability to create new accounts.

4. Direct file access

This type of IDOR allows you to manipulate file-system resources: upload and edit files, or download paid content for free.

I once found such unauthorized access on the website of an online school, where it exposed curricula and lessons. To download the content, it was enough to follow the routes /api/0/curriculum/lessons/ and /api/0/files/<id>/content.

Using HTMS to store and apply neural networks

Python · Programming · Data storage · Machine learning · Artificial intelligence

A new approach to modeling neural networks in table-network databases.

[This is a translation of an article I published on www.medium.com in a series of posts about the table-network data model. See links to all posts here.]

HyperTable Management System (HTMS) is designed for universal use. One of the subject areas to which the foundation of HTMS – the table-network data model – is as well suited as possible is neural networks¹. A neural network is a directed, weighted graph.

As the basic neural network model I will use a multilayer perceptron (MLP)² with one hidden layer.

The theory and practice of MLPs is excellently presented in Robert Keim’s series of articles on the basic theory and structure of this well-known neural network topology (see the Russian translation). It also includes the program Python Code for MLP Neural Networks.py, which implements the two main stages of working with a neural network: training itself, which consists in selecting the weights for activating the hidden layer and the output node (a vector nonlinear optimization problem), and validation (checking) – determining the probability that the neural network produces the correct output value for an arbitrary combination of input values.

I used the Python Code for MLP Neural Networks.py program as a prototype for creating software (also in Python) that uses the HTMS system as a DBMS. The program contains the following main components:

The main program is mlp.py

create_db.py – a module that defines the main classes for storing MLP models in the table-network DBMS. The module uses the middle-level HTMS API and contains descriptions of two hypertables:

mlp is a database (hypertable) for storing perceptron models with the following tables:

Start — catalog of perceptrons in the database;

Input – storage of input nodes;

Hidden – storage of hidden layer nodes;

Output – storage of output nodes of perceptrons.

train – a database (hypertable) for storing data samples for training and validation, i.e. datasets of input and output values:

Training – table of sample data for training;

Validation – table of data samples for validation

load_train_valid.py – a module with a function that loads samples of input data for training and validating a neural network from Excel tables into DBMS tables (as instances of the training and validation classes), and that creates empty tables to store the attribute values of Start, Input, Hidden and Output class instances. The module uses the middle-level HTMS API.

mlp_train_valid.py – a module with a function that reads data from the Training and Validation tables, trains the neural network, validates it, and writes the resulting MLP to the database. It uses the object-level HTMS API for training and validation and the middle-level HTMS API to store the perceptron in the database.

mlp_load_exec.py – a module (middle-level HTMS API) with two functions:

mlp_load_RAM – reads a specific MLP model from the database;

mlp_execute – computes the MLP output value for any set of inputs.

mlp_par.py – a module with the parameters, the logistic function, and the derivative of the logistic function.

Below are screenshots from the HTMS hypertable editor, which is part of the software system.

General database structure for storing perceptrons (neural networks). The database has 4 tables (Start, Input, Hidden and Output) and 13 attributes:

The Start table is a catalog of the perceptrons stored in the database. In the example there are 5 neural networks, each with 3 input and 3 hidden nodes. The StoI field stores a set of simple row references into the Input table, where each row corresponds to one input node. In addition to the input nodes, each perceptron has a special bias node, BiasI, so each perceptron is allotted 4 rows. The Correctness field contains the results of the perceptron validation:

In this form, the editor shows the detailed contents of the entire field for one of the rows in the table. In particular, here you can see that the StoI link field in the 1st row of the Start table contains 4 links – to the 1st, 2nd, 3rd and 4th row of the Input table:

The contents of the table to store the input nodes of all perceptrons. The ItoH field contains a set of weighted (numbered) row references in the Hidden table, where each row corresponds to one node in the hidden layer:

The detailed contents of the entire field for one of the rows in the Input table. In particular, here you can see that the ItoH link field in the 1st row of the table contains 3 weighted links – to the 1st, 2nd and 3rd rows of the Hidden table – with weights -0.03, +0.8 and -0.07 respectively:

Table contents for storing hidden nodes of all perceptrons. The HtoO field contains a weighted (numbered) reference to a row in the Output table, where each row corresponds to one output node:

The detailed contents of the entire field for one of the rows in the Hidden table. In particular, here you can see that the HtoO links field in the 1st row of the table contains a weighted link – to the 2nd row of the Output table – with a weight of -0.60:

As an additional example of another perceptron database (with 3 input nodes and 5 nodes in the hidden layer), the contents of its Start table are shown:

An example of a multilayer perceptron with 3 input nodes and 3 nodes in the hidden layer in an HTMS tabular network database. Image generated by HTMS editor (with Graphviz visualization package):

Since the algorithms and data structures of the validation code in the mlp_train_valid.py module and of the perceptron-execution code in the mlp_load_exec.py module are almost identical, this example shows how HTMS tools at the middle level and the object level can be used to solve the same task.

Obtain Hard Drive Size Using PowerShell

So I ended up stuck in an odd situation the other day. I needed to know how large someone’s hard drive was, but I didn’t have access to the computer and it wasn’t listed in the asset management software that I use. I did know WMI is enabled on all domain computers, and I decided PowerShell was the quickest way to obtain the hard drive size without diving into VB, SNMP or any of the other ways to find the same information.

Using this article from Stack Overflow, I put together a step-by-step that will help me in a pinch in the future. There are scripts that can do this as well; however, I didn’t feel the need for a script. This also neatly finds the hard drive size without a lot of fuss. 99% of the time, my asset management software would do this for me.

How to Obtain Hard Drive Size Using PowerShell

  1. Open up PowerShell as an administrator, either on a server or your local Windows 7/8 PC.
  2. Next we will assign the object to a variable so we can use it later. I use the variable $disk. At the prompt:

$disk = Get-WmiObject Win32_LogicalDisk -ComputerName computername -Credential domain\adminaccount -Filter "DeviceID='C:'"
Parameter – Description

Get-WmiObject – Gets WMI class information, instances of classes, or available classes. Alias: gwmi.
Win32_LogicalDisk – This WMI class represents a data source that resolves to an actual local storage device on a computer system running Windows.
-ComputerName – The name of the computer you wish to access.
-Credential – Passes domain or local credentials to the query. Since I was operating within a domain, it will immediately ask for the domain password based on the credentials you put into the line above.
-Filter – Lets me filter the output to retrieve just what I want. Remove -Filter if you want to see the full scope of information you can retrieve; since I only wanted the information for the C: drive, that is what I filtered for.

There is no output from this command.

  3. The next step is to simply pull the information we want from the object; at the same time we will do some math to display the size in GB.

$disk.size/1gb

In my test case this is 19 GB. At this stage I don’t care about free space, but I could easily query that as well (note the WMI property is named FreeSpace):

$disk.FreeSpace/1gb

This output shows my free space at 5 GB.

Here is the entire result, from beginning to end, of my effort to obtain hard drive sizes remotely using PowerShell.

Windows PowerShell
PS C:\> $disk = Get-WmiObject Win32_LogicalDisk -ComputerName ComputerName -Credential domain\adminaccount -Filter "DeviceID='C:'"
PS C:\> $disk.size/1gb
19.6259160041809
PS C:\> $disk.FreeSpace/1gb
5.000000000000
There are a number of items you can access using the WMI class through PowerShell.

Export Mapmyride to Strava


I may elaborate further someday on the tools I use to keep track of my cycling life, but as it stands, I use two. I prefer Mapmyride for the mid-ride stats, not to mention just general usability. Plus, it worked with my HR monitor; Strava did not. However, Strava fills the stats geek in me with joy, breaking down rides into more logical segments and in general just being stronger for analysis. Someday I hope one or the other catches up to the other’s strengths, so I don’t need two tools. For now, I need both. So I really needed a tool to export Mapmyride GPX files and, in turn, import them into Strava. The first link I found gave me what I wanted.

Step 1 – Retrieve Mapmyride Workout Identifier
First we need to get the Mapmyride workout identifier. This is easy enough to do: look at the address line of the website, and make sure you have navigated to the workout you want to export. There will be an identifier at the end of the URL: http://www.mapmyride.com/workout/12345678. Copy the 8-digit number (in this example, 12345678) and move on to step 2.

Step 2 – Export Mapmyride
Visit this site. A big thank you to the author Mike Palumbo for making a kickass tool. After clicking submit, it will export a GPX file you can import into Strava.

Step 3 – Import into strava
a) Find the upload button in the top right of your Strava Screen.
Strava – Upload

b) Select Choose File, find your file and import
Strava – Select File

c) Fill in the information, and select save
Strava – Import Activity Screen

A simple enough process to get both programs to work together. There are alternatives, such as a fitness-syncing program I found, but it complicated my life more than it made it easy. I use several programs to track different parts of my fitness, and automatic syncing ended up duplicating too many activities.

Printer is Offline


I am going to assume you have done the usual troubleshooting. Here is a small collection of initial steps I take to troubleshoot printers.

Check network connectivity
Reboot printer (not the soft boot on enterprise printers, but a real power down)
Restart print spooler on either server or client or both
(Screenshot: Windows 7 printer showing offline)

If you know you have performed adequate troubleshooting steps and your printer is still offline, it could be a simple SNMP issue, as explained over at Robin’s Blog (thank Google for the easy find).

In the process of locking down some errant SNMP strings on a new printer install, the printer would not come online. I had changed the default SNMP string to a unique identifier, and once this change went into effect, the printer showed as offline for all my computers and servers. In order to fix this, I had three options.

1. Modify the SNMP setting in the properties of the offline printer

Go to the properties of your printer on your server or client (screenshot: Windows printer properties).

Select the Ports tab in properties, then select Configure Port; you should see a popup appear. You have two choices:

You can turn off the SNMP status
Modify the public string to match the new string you have set on your printer

2. Change the printer SNMP string back to Public

This one is self-explanatory, and one I have opted not to do. You can always add back the read-only string “Public”. This may be a quick way to fix the problem until you have a more viable solution.

3. Refer to your manufacturer’s manual or support for changing the SNMP settings.

My Printer is Offline
So you have tried to fix this by modifying the properties of your printer, and you have also gone through the basic troubleshooting of making sure it is connected and working on the network. There are lots of potential problems; I personally would head over to the manufacturer’s site and check your drivers and firmware. You could also hit up Microsoft – they have some good tools to help troubleshoot printer problems.


How to do a Product Comparison

My review process for deciding on products or services is to use a product comparison matrix, which I will use throughout this blog. It is a simple breakdown of needs and wants based on weighted scores, tabulated to give a final score for each product. While completely subjective to my needs, it gives me a good baseline for proposing products to purchase or deciding how to proceed with projects. The product comparison is also an easy way to capture your audience: customers and executives like its simplicity, without going into too much technical detail.

Product Comparison Matrix

Product     Need1   Need2   Need3   Need4   Need5   Total
Weight      10      5       5       10      0.5     (scored out of 10; max potential score 305)
Product1    10      7       6       5       4       217
Product2    10      9       5       5       4       222
Product3    5       3       2       8       10      160
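
To see how the totals are computed, each score is multiplied by its column’s weight and the products are summed. For Product1: 10×10 + 7×5 + 6×5 + 5×10 + 4×0.5 = 100 + 35 + 30 + 50 + 2 = 217. The maximum potential score is a perfect 10 in every column: 10 × (10 + 5 + 5 + 10 + 0.5) = 305.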

As you can see from this product comparison matrix, the process is relatively simple. I may also put in a negative column if I find something I dislike enough to mark a product down. I can see from the totals that in this case Product2 suits me quite a bit better. Now I need to individually evaluate some key factors.

  1. Need4 falls at 50% for my two highest-scoring products. Do I care? Do I want all of my needs to meet a certain score? Should Needs automatically weigh higher than Wants?
  2. Do I want a minimum value? For example, if I am reviewing a number of products, I can designate a high minimum score – 80 or 90% at times. Other times such a high score isn’t feasible because products just don’t meet my requirements, so I lower the bar to find an appropriate product.
  3. I also have to decide if there is simply a cutoff. So many decisions in IT are made by people who aren’t technical. If I can’t find a product that meets my Needs and Wants, and the total score is too low, do I then make a case against moving ahead? As you can see, the matrix gives me a baseline from which to make decisions. The final score isn’t always the deciding factor, but it helps me quickly eliminate products that just don’t fit the bill. The highest score here is only around 72% (222 of 305). How much am I sacrificing by going with one of the products in this comparison? Should I change my needs, or what I am willing to pay, to find something that meets them?

This product comparison matrix will be used in most of my reviews. I will be as transparent as possible so that in my reviews people can find commonalities that they can utilize.
