How far will your AP reach?

When presenting on Wi-Fi, I am frequently asked, “How far will your access point reach?”  My response varies, but it is typically something along the lines of, “How far will your Wi-Fi client reach?”

The worst response I hear is, “Oh, it will cover x square feet.”  Really?  Is that indoors or outdoors?  What if you have concrete walls instead of sheetrock?

A better, more common response is that it varies from environment to environment.  Depending on the wall types and other environmental factors, the signal level will vary greatly.  For example, a building with a wide-open floor plan, like a warehouse, will get a much larger coverage area than, say, a school built with concrete walls.
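
To make the environmental effect concrete, here is a minimal sketch of a link calculation using the standard free-space path loss formula plus assumed per-wall attenuation values. The wall-loss numbers are illustrative assumptions loosely based on commonly cited planning figures, not measurements of any particular building.

```python
import math

def fspl_db(distance_m: float, freq_mhz: float) -> float:
    """Free-space path loss in dB: 20*log10(d_m) + 20*log10(f_MHz) - 27.55."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55

# Assumed per-wall attenuation in dB -- illustrative values only.
WALL_LOSS_DB = {"open_floor": 0.0, "sheetrock": 3.0, "concrete": 12.0}

def received_dbm(tx_dbm: float, distance_m: float, freq_mhz: float,
                 wall: str, n_walls: int = 1) -> float:
    """Estimated received signal after free-space loss and wall loss."""
    return tx_dbm - fspl_db(distance_m, freq_mhz) - WALL_LOSS_DB[wall] * n_walls
```

Running the same distance through a sheetrock wall versus a couple of concrete walls shows why a “square feet” answer is meaningless without knowing the construction.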

Another common response I hear is that density also plays a role in deciding how large an access point's coverage area should be.  If you are not expecting a large number of people to be using the Wi-Fi network all at once, and you are looking more for complete coverage, you would design using larger coverage cells.  However, if you are planning to have a large number of people congregated in one area connected to Wi-Fi, smaller coverage cells are more appropriate so that you can split the load across multiple access points.
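
The trade-off above can be sketched as a simple heuristic: take the larger of a coverage-driven and a capacity-driven AP count. The default numbers here (30 clients per AP, 2,500 sq ft per cell) are assumptions for illustration, not recommendations; a real design comes from a site survey.

```python
import math

def access_points_needed(expected_clients: int,
                         clients_per_ap: int = 30,
                         area_sqft: float = 10_000,
                         cell_sqft: float = 2_500) -> int:
    """Larger of coverage-driven and capacity-driven AP counts.

    A simplified planning heuristic with assumed defaults -- not a
    substitute for a proper wireless site survey.
    """
    coverage_aps = math.ceil(area_sqft / cell_sqft)
    capacity_aps = math.ceil(expected_clients / clients_per_ap)
    return max(coverage_aps, capacity_aps)
```

With a handful of clients, coverage dictates the count; pack hundreds of people into the same floor plan and capacity takes over, forcing smaller cells.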

However, back to my original response: I always point back to the client devices.  In my experience, the client device is normally the weakest link and should always be a consideration.  Keep in mind that mobile devices are designed to preserve battery life.  This means they are not going to be transmitting at full power.  Typically, the smaller the device, the smaller the battery, and the lower the transmit power.

Imagine this scenario:


If you turn the access point up to the highest available power setting, you will be transmitting at 30 dBm.  This will provide a good-sized coverage cell.  However, if your client device is only capable of transmitting at 15 dBm, then even though you have a large coverage cell, the client devices at the cell edge will not be able to transmit back to the access point.  This means that packets will not be acknowledged by either side, and both devices will keep retransmitting packets, which will result in very poor performance.  Would you design your network in such a way that client devices are not able to transmit back to the AP?
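
The asymmetry is easy to see in a back-of-the-envelope link check. The path loss and receiver sensitivity figures below are assumed values chosen to illustrate the scenario, not measurements from any specific hardware.

```python
def link_ok(tx_dbm: float, path_loss_db: float,
            rx_sensitivity_dbm: float = -70.0) -> bool:
    """True if the received signal clears the receiver's sensitivity floor."""
    return tx_dbm - path_loss_db >= rx_sensitivity_dbm

PATH_LOSS_DB = 95  # assumed loss at the edge of the AP's coverage cell

ap_to_client = link_ok(30, PATH_LOSS_DB)  # AP transmits at 30 dBm: heard fine
client_to_ap = link_ok(15, PATH_LOSS_DB)  # client manages only 15 dBm: lost
```

At this assumed cell edge, the downlink closes (30 - 95 = -65 dBm, above the -70 dBm floor) while the uplink does not (15 - 95 = -80 dBm), which is exactly the unacknowledged-packet situation described above.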

Truth is, all of these responses need to be taken into consideration when determining how much area an access point will cover.  Knowing the environment will help you determine how far the Wi-Fi signal will propagate.  Knowing the quantity of devices will help you understand how large you want your coverage cells to be.  Knowing the capability of your clients will help you determine your weakest link.

What is Interoperability Testing?


Every time I see this picture, it reminds me of my role as an interoperability engineer. Both drawers work great individually, but when placed into a production environment, there are some glaring problems.

Not taking the time to set our products up and test with other solutions prior to production installation could land us in the same situation.

My role in interoperability testing is to take our core products and make sure they “play well” with others.  This could be anything from a SIP trunk on our SCM Compact to an analytics package attached to our Wireless Enterprise Access Point Controller.

The key to a successful interoperability test is a well-thought-out test plan, which develops into a well-written interoperability guide for our partners.  This typically involves taking the designer's original feature guide and listing every possible combination of how those features can be used together.

In the case of our Samsung Call Manager and a 3rd party SIP trunk, this could include testing features like call hold, call transfer, conference calling, and about 150 other different scenarios that are less common.  Yes, both products speak SIP, but it is my goal to make sure they are speaking the exact same dialect of SIP.  Think of it as both systems speak a Latin-based language.  Perhaps one speaks Italian, and the other speaks Spanish.  Chances are they will understand each other, but some things will be lost in translation.  I need to make sure they are speaking the exact same language down to the region.

In a properly set up lab environment, it is easy for an engineer to run through a test plan, diligently testing each scenario, with all the logs and traces, like Wireshark captures, running to record the activity.  If something doesn't work just right, the engineer can review the traces, determine the misalignment, adjust, and test again in the lab environment without impacting a customer.  Attempting to do the same in production is another story: by the time the technician or engineer coordinates a maintenance window and works with the local IT department to get all the traps in place, it could take hours or even days to accomplish something that would take only minutes in a lab.  Not only that, but a major burden is placed on the customer.

At the end of the process, when everything works as expected, it is important to document the changes needed so that a solutions installer can configure the system apples-to-apples.

Now, while every effort is made to test every possible scenario, sometimes a solutions provider will do something in the field that we didn't imagine.  Not that this is wrong; it is just a use of our products that we hadn't imagined.  The key here is that when problems are identified and resolved in these cases, that feedback should be incorporated into the interoperability guide.  I would be wary of an interoperability guide with “Version 1.0” listed on the title page.

An example of poor interop testing could play out like this: if a partner deploys a Samsung solution with a 3rd party SIP trunk that we have not completed interop testing with, the partner may get complaints of dropped calls.  The typical end user knows they have a Samsung solution, but they are not aware that they have a Samsung PBX with 3rd party SIP trunks.  In the eyes of the end user, it’s the Samsung solution that is not working.  They reach out to their installing partner who rolls a truck to site, sets up a trace to watch it happen, and learns that a simple timer change resolves the issue.

If this product had completed interoperability testing, the issue most likely never would have presented itself.

Granted, this is a very simplistic example.  In the real world, a product that has not completed interoperability testing typically has many symptoms presenting at the same time, and it takes a manufacturer support engineer like myself to peel back the layers and determine the core issues.  The underlying fixes are usually very simple changes, but they can be difficult to dissect when all of the symptoms are combined.

The absolute best advice I can give to a potential customer or solutions provider is to verify that every single component they plan to install has been tested as a solution.  If one or more of the components have not been tested, insist that they be tested prior to installation.  Without this due diligence, the customer may end up with two corner pull-out drawers.