Microservices are usually deployed as containers in a shared cluster, so a single machine can host several containers running different applications. These microservices communicate with other services, typically over HTTP or messaging.
It’s a quite common situation: you observe weird client behaviour on your service and try to trace it back to the client. That might involve packet capturing and network traffic analysis. But is that the right approach?
I think your clients should identify themselves uniquely to the service. And yes, an IP address is not enough any more. Offerings like AWS Fargate already prevent you from accessing the machine that runs your containers.
Depending on your use case, you can simply inject the container ID and name into HTTP headers (e.g. X-Container-ID and X-Container-Name). Alternatively, you can use the User-Agent header, which lets you track your container even from access logs. So instead of the User-Agent “HttpClient-2.4.10” you can set e.g. “<your-app-name> (<container-id>, <container-name>)”.
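A minimal sketch of this idea in Python, assuming the container ID is available as the hostname (Docker’s default behaviour) and the container name is injected at deploy time via a hypothetical `CONTAINER_NAME` environment variable:

```python
import os
import socket

def identity_headers(app_name: str) -> dict:
    """Build HTTP headers that identify this container to downstream services.

    Assumptions: the hostname equals the short container ID (Docker's
    default), and CONTAINER_NAME is an environment variable set by the
    deployment (a hypothetical convention, not a Docker built-in).
    """
    container_id = socket.gethostname()
    container_name = os.environ.get("CONTAINER_NAME", "unknown")
    return {
        "X-Container-ID": container_id,
        "X-Container-Name": container_name,
        # Surface the identity in User-Agent too, so it shows up in the
        # server's access logs without any server-side changes.
        "User-Agent": f"{app_name} ({container_id}, {container_name})",
    }
```

You would pass these headers into whatever HTTP client your service uses; the User-Agent entry is the one that ends up in plain access logs.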
If you are using messaging, you should add a metadata value in a similar way to an HTTP header. Alternatively, you can modify your message payload to include an identification header.
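For brokers without first-class message headers, the payload-modification variant can be sketched as a simple JSON envelope. As above, the container ID is assumed to be the hostname and `CONTAINER_NAME` is a hypothetical deploy-time environment variable:

```python
import json
import os
import socket

def wrap_with_identity(payload: dict) -> str:
    """Wrap a message payload in an envelope that carries sender identity.

    The "headers" key mirrors HTTP-style metadata. Assumptions: hostname
    equals the container ID (Docker's default) and CONTAINER_NAME is set
    by the deployment (hypothetical convention).
    """
    envelope = {
        "headers": {
            "container-id": socket.gethostname(),
            "container-name": os.environ.get("CONTAINER_NAME", "unknown"),
        },
        "payload": payload,
    }
    return json.dumps(envelope)
```

Consumers then read the identity from the envelope’s headers before processing the payload; if your broker supports native message properties (e.g. AMQP headers), putting the same values there avoids touching the payload at all.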
As cloud computing becomes the standard, you should care about clearly identifying your traffic sources. It makes troubleshooting quicker and problems easier to fix.